A multimodal measurement of the impact of deepfakes on the ethical reasoning and affective reactions of students
Deepfakes, synthetic videos generated by machine learning models, are becoming increasingly sophisticated. While they have several positive use cases, their potential for harm is also high. Deepfake production involves input from multiple engineers, making it difficult to assign individual responsibility for their creation. The separation between engineers and consumers may also contribute to a lack of empathy on the part of the former towards the latter. At present, engineering ethics education appears inadequate to address these issues: the ethics of artificial intelligence is often taught as a stand-alone course or as a separate module at the end of a course. This approach does not give students time to engage critically with the technology and consider its possible harmful effects on users. This experimental study therefore aims to investigate the effects of deepfakes on engineering students' moral sensitivity and reasoning. First, students are instructed on how to evaluate the technical proficiency of deepfakes and on the ethical issues associated with them. Then, they watch three videos featuring the same person: one authentic video and two deepfakes. While they watch these videos, data on their attentional engagement (eye tracking) and emotional engagement (self-reports, facial emotion recognition) are collected. Finally, they are interviewed using a protocol modelled on Kohlberg's 'Moral Judgement Interview'. The findings can have significant implications for how technology-specific ethics is taught to engineers, while providing them space to engage and empathise with potential stakeholders as part of their decision-making process.