It's not me, it's a deepfake. An orientation guide to media manipulated through artificial intelligence
LAFIANDRA, PIETRO
2023
Abstract
Since their birth on Reddit in 2017, deepfake videos have generated new techno-phobias in the media landscape. They are algorithmic images, digital faces manipulated through artificial intelligence, the result of automatic processes involving GANs (Generative Adversarial Networks) and software such as DeepFaceLab or FakeApp. Their producers, the deepfake designers, have repeatedly been accused of spreading misinformation and of encouraging and facilitating the production of audiovisual fake news, thus undermining the already fragile ecosystem of digital information. In her pioneering book Deep Fakes and the Infocalypse (2020), Nina Schick used the term "infocalypse" to describe the informational short-circuit that the advent of deepfakes could generate. Following Schick, Michael Grothaus provocatively titled his essay on the birth and prospects of the practice Trust No One (2021). At the same time, deepfake videos are fostering the emergence of new forms of audiovisual storytelling in auteur cinema, advertising, pornography and music videos, giving rise to a new aesthetics and reviving the ancient debate around mimesis. Adopting a multidisciplinary approach, and through a series of case studies drawn from pornography, cinema and music videos, this dissertation presents a taxonomy and a theoretical framework within which to situate the future debate around deepfakes, the risks they pose and the opportunities they open up.
File | Size | Format | Access
---|---|---|---
Pietro Lafiandra_1012115.pdf | 2.55 MB | Adobe PDF | Open access
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/62018
URN:NBN:IT:IULM-62018