From deepfake to shallow truths: algorithmic subjectivation and the uncanny self

Paper presented at the Canadian Communication Association (CCA) annual conference, York University, Toronto, Canada, 2023.

Deepfakes are a form of synthetic media in which images and video clips are merged, replaced, or superimposed onto a video, creating fake content that appears authentic. Since Samantha Cole’s (2017) revelatory reporting on deepfakes and non-consensual pornography, the technology has become deeply controversial. Almost exclusively targeting women, deepfake porn follows historical patterns of gendered abuse in cyberspace, from rape in early Internet chat rooms and text-based virtual worlds (Dibbell, 1993) to the death threats experienced by Noelle Martin (Scott, 2020). However, the issue received broad public attention only when deepfakes began to be used as a political weapon to target high-profile politicians and spread disinformation, creating content moderation problems on digital platforms (Gallagher, 2019).

In its current state, deepfake detection is often described as a “cat-and-mouse” game between deepfake generators and the detectors designed to identify them. In 2019, a consortium of big tech companies led by Facebook organized the US$1 million Deepfake Detection Challenge (DFDC) on Kaggle, a Google subsidiary popular in the AI community, where teams compete to develop machine learning models. The competition attracted more than 2,000 participants, including some of the world’s top data scientists, who collectively produced over 35,000 predictive models for deepfake detection. While only a few of these models produced reliable results, little is known about how deepfake detectors are developed, how they are calibrated, and under which conditions they are used.
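To give a concrete sense of what such a “predictive model” looks like, the sketch below illustrates the general shape of a frame-level deepfake classifier of the kind commonly built for competitions like the DFDC: a convolutional network trained to output the probability that a face crop comes from a manipulated video. This is a minimal illustration under assumed details, not any team’s actual entry; the architecture, input size, and class name `FrameClassifier` are hypothetical.

```python
# Minimal sketch of a frame-level deepfake classifier, loosely modelled on
# typical DFDC-style entries. Illustrative only: the architecture, input
# size, and training setup are assumptions, not any team's actual solution.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """A small CNN that scores a single face crop as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.head = nn.Linear(64, 1)              # a single "fakeness" logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)                       # raw logit; sigmoid gives a probability

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()                # log loss, the DFDC's scoring metric
frames = torch.randn(8, 3, 224, 224)              # a batch of (dummy) face crops
labels = torch.randint(0, 2, (8, 1)).float()      # 1 = manipulated, 0 = authentic
loss = criterion(model(frames), labels)
loss.backward()                                   # one step of ordinary supervised training
```

In practice, competition pipelines wrapped such classifiers in face detection, frame sampling, and per-video score aggregation. The point here is simply that “truth detection” is operationalized as a probabilistic score optimized against a loss function.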

In this paper, I discuss how the DFDC translated deepfakes’ controversial social implications into a technical challenge that rendered the political unthinkable, focusing instead on rules and optimization. Using Digital Methods to examine the DFDC’s discussion forum, I show how Facebook articulated the challenge, expose the social and cultural biases inherent in the solutions proposed for the competition, and highlight the lack of ethical attention to privacy protection. By crowdsourcing the problem, big tech corporations mobilize free labour, algorithms, and extensive user-generated datasets to produce predictive models that detect and assert what is true and what is fake, thereby producing new forms of subjectivation.

I argue that deepfakes and their detector counterparts are part of a new political economy of subjectivation that further intensifies the relationship between subjectivity, truths, and power, serving as tools to modulate individual and collective conditions of existence (Langlois & Elmer, 2019). By design, they are not well-polished tools but mechanisms through which one can gain control over an individual’s self-image in order to reconstruct their identity, either by simulating non-existent behaviours and habits or by dissimulating their character and personality.