
2022 | Book

Deepfakes

A Realistic Assessment of Potentials, Risks, and Policy Regulation


About this Book

This book examines the use and potential impact of deepfakes, a type of synthetic computer-generated media, primarily images and videos, capable of both creating artificial representations of non-existent individuals and showing actual individuals doing things they did not do. As such, deepfakes pose an obvious threat of manipulation and, unsurprisingly, have been the subject of a great deal of alarmism in both the news media and academic articles.
Hence, this book sets out to critically evaluate potential threats by analyzing human susceptibility to manipulation and using that as a backdrop for a discussion of actual and likely uses of deepfakes. In contrast to the usual threat narrative, this book will put forward a multi-sided picture of deepfakes, exploring their potential and that of adjacent technologies for creative use in domains ranging from film and advertisement to painting. The challenges posed by deepfakes are further evaluated with regard to present or forthcoming legislation and other regulatory measures. Finally, deepfakes are placed within a broader cultural and philosophical context, focusing primarily on posthumanist thought.
Therefore, this book is a must-read for researchers, students, and practitioners of political science and other disciplines who are interested in a better understanding of deepfakes.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The intention behind this book is to analyse the use and potential impact of deepfakes—primarily images and videos that can involve both creating artificial representations of non-existent individuals and showing actual individuals doing things they did not do. As such, deepfakes pose an obvious threat of manipulation and, unsurprisingly, have been the subject of a great deal of alarmism in both the news media and academic articles. Hence, this book sets out to critically evaluate potential threats by analysing human susceptibility to manipulation and then using that as a backdrop for a discussion of actual and likely uses of deepfakes. In contrast to the usual threat narrative, this book puts forward a multi-sided picture of deepfakes, including by exploring the potential to use deepfakes and underlying or adjacent technologies in creative domains ranging from film and advertisement to painting. The challenges posed by deepfakes are further evaluated with regard to present or forthcoming legislation and other regulatory measures. Finally, deepfakes are placed within a broader cultural and philosophical context, focusing primarily on posthumanist thought. Hence, this book is explicitly conceived as a concise but, nevertheless, well-rounded treatment of deepfakes.
Ignas Kalpokas, Julija Kalpokiene
Chapter 2. Fake News: Exploring the Backdrop
Abstract
In order to better understand the immediate context of deepfakes and, in particular, of their most widely discussed application—disinformation—one must start with fake news. However, a problem immediately arises because any discussion of fake news has become very difficult, almost to the point of being counter-productive, precisely due to the ubiquity of the term. Indeed, one could broadly agree with Jankowicz’s (2020, p. xx) observation that the term ‘fake news’ has been used so much that ‘it has all but lost meaning’. Nevertheless, it must still be admitted that the current information environment appears to offer a favourable climate for deliberately manufactured false information to spread, and this is a key characteristic that determines both the use and perception of deepfakes.
Ignas Kalpokas, Julija Kalpokiene
Chapter 3. On Human Susceptibility: Assessing Potential Threats
Abstract
In an already attention-intensive environment characterised by ‘chaos and disorder’ as well as ‘a blurring of work and home zones, spurred by notions of temporal excess, absorption, immersion and a squandering of time’ (Chambers, 2019, p. 3), it comes as no surprise that individuals aim to minimise their cognitive load whenever possible. Indeed, according to the cognitive bottleneck theory, even when as few as two cognitive tasks need to be performed simultaneously, there will already be ‘a decrease in performance in at least one of the tasks’, necessitating a strategic balancing and distribution of attention (Tanner, 2020, p. 66; an identical point, albeit expressed in different terms, is also stressed by Citton, 2017, pp. 31–32 as well as Hendricks & Vestergaard, 2019, p. 3). Notably, information acquisition transpires to be one such area where savings are being made. It is, therefore, not accidental that this juncture of object selection and attention allocation is exactly the weak spot that disinformation agents increasingly seem to target (Till, 2021, pp. 1364–1365). Under such circumstances, reliance on substitutes to cognition, such as ‘emotional cues, experience, and existing beliefs’ ultimately ‘saves time and cognitive energy and leads to quicker but more biased decisions than using rational cues’ (Park & Kaye, 2019, p. 6; see also Kim, 2018, p. 4819). In this way, fake content already has an advantage in the very fact that it can be manufactured and refashioned in any way that the audience will find appealing.
Ignas Kalpokas, Julija Kalpokiene
Chapter 4. From GANs to Deepfakes: Getting the Characteristics Right
Abstract
Broadly speaking, deepfakes can be defined within the intersection of technology and communication and/or visual representation as ‘a technology that uses Artificial Intelligence to produce or edit contents of a video or an image to show something that never happened’ (Young, 2019, p. 8). More precisely, as Whittaker et al. (2020, p. 92) note, ‘deepfakes are the product of AI and the machine learning technique of “deep learning”, which is used to train deep neural networks (DNNs)’; although composed of simple computational units, or artificial neurons, such networks are more than the sum total of their operations. Instead, ‘when set up as a network of thousands and millions of units, these simple functions combine to perform complex feats, such as object recognition, language translation, or robot navigation’ (Whittaker et al., 2020, p. 92). To further narrow things down, the creation process employs so-called Generative Adversarial Networks (GANs).
Ignas Kalpokas, Julija Kalpokiene
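
To make the GAN setup described in the Chapter 4 abstract more concrete, the following is a minimal, illustrative sketch of a generator/discriminator pair trained adversarially. It is not taken from the book: the framework (PyTorch), the network sizes, learning rates, and the placeholder data are all assumptions chosen only to show the general pattern of two networks learning against each other.

# Minimal GAN sketch (illustrative assumptions, not the book's own example).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# Generator: maps random noise to a synthetic "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that an input is real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Usage with placeholder data: one step on a random batch standing in for real images.
train_step(torch.rand(32, image_dim) * 2 - 1)

In an actual deepfake pipeline the same adversarial principle is applied to far larger convolutional networks trained on face datasets; the sketch above only shows the division of labour between the two competing networks.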
Chapter 5. On Alarmism: Between Infodemic and Epistemic Anarchy
Abstract
Authors currently writing on deepfakes frequently do not shy away from strong and impactful assessments. It has become commonplace to assert, as, for example, Whittaker et al. (2020, p. 95) do, that ‘[d]eepfakes and GANs represent the next generation of fake news and threaten to further erode trust in online information’ due to the difficulty in spotting the manipulation, particularly when one takes cognitive biases and the structural features of today’s media, such as echo chambers, into account. Hence, as Breen (2021, p. 123) argues, deepfakes ‘take disinformation to the next level’ by ‘further complicat[ing] the ability to decipher true information’. Similarly, Whittaker et al. (2021, p. 5) emphasise, as a major point of concern, the combination of a low barrier to entry (as deepfakes can be easily created even by those with limited skills and resources), the ease of sharing content on social media, and the ever-growing amount of digital material featuring a vast proportion of the global population that can be used as training data. Seen in this light, deepfakes ‘can have a massive impact on public perception of events’ (Breen, 2021, p. 145). Hence, the popular narrative goes, ‘artificially generated content will further fuel the fake news crisis with their ability to undermine truth and confuse viewers’ (Whittaker et al., 2020, p. 95). The identified negative effects are fundamental, such as ‘distortion of democratic discourse, eroding trust in institutions and journalism, increasing social divisions, and threats to national security’ (Wilkerson, 2021, p. 412). The stakes seemingly could not be higher—after all, as Huston and Bahm (2020) warn, ‘[t]ruth is under attack’ in what Schick (2020) describes as the ‘Infocalypse’, a world allegedly oversaturated with disinformation and manipulation to the extent that the public sphere is effectively destroyed, with deepfakes seen as a major contributing factor. Nevertheless, as will be shown in this chapter and subsequently in this book, such accounts give simultaneously too much and too little credit to deepfakes.
Ignas Kalpokas, Julija Kalpokiene
Chapter 6. I CAN Do It: From GANs and Deepfakes to Art
Abstract
Since GANs are effective in generating new data from existing examples, the natural question to ask is whether they represent a new step forward in machine creativity. To some extent, a certain (albeit limited and tightly circumscribed) idea of creativity is already central to machine learning in general—after all, at its heart ‘is the idea that an algorithm can be created that will find new questions to ask if it gets something wrong’—by learning from past mistakes, the algorithm tweaks itself so that it produces a better outcome next time (Du Sautoy, 2020, pp. 67–68). If displayed by a human, such a quality would likely be called creative thinking. This ability is particularly pronounced in deep learning, which effectively involves the ‘rewiring’ of the entire neural network (or segments thereof) in the process, an adaptability that already carries the seeds of creativity within it.
Ignas Kalpokas, Julija Kalpokiene
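
The ‘learning from past mistakes’ idea invoked in the Chapter 6 abstract can be illustrated with a deliberately tiny sketch: a model guesses, measures how wrong it was, and nudges its parameters so that the next guess is better. The linear model, the data, and the learning rate below are illustrative assumptions, not an example drawn from the book.

# Tiny "learn from mistakes" sketch (illustrative assumptions only).
# Target relationship the model does not know in advance: y = 3x + 1
data = [(x, 3 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # the model's current parameters ("beliefs")
learning_rate = 0.01

for epoch in range(200):
    for x, y_true in data:
        y_pred = w * x + b          # make a guess
        error = y_pred - y_true     # how wrong was it?
        # Tweak the parameters in proportion to the mistake (gradient descent).
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")   # approaches w=3, b=1

Deep learning applies this same error-driven adjustment to millions of parameters at once, which is the ‘rewiring’ of the network that the chapter refers to.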
Chapter 7. Regulation: Public, Private, Autonomous?
Abstract
Deepfakes are described as a dual-use technology by the European Parliamentary Research Service (2021, p. 70), and there is a good reason for seeing them in this way: despite the benefits they may bring, they also pose multiple threats to both individual persons and whole societies. This chapter will focus on three modalities of regulation: law, platform policies, and technological solutions, such as automatic detection. As shown below, current legal regulation is often insufficient or wholly inadequate, meaning that private regulation by online platforms transpires to be the most efficient regulatory measure available. However, such regulation is not uniform in either substance or application. Moreover, it must be stressed that, due to the amount of content that has to be dealt with, such regulation can only be effective if it is automated; while such automation has obvious efficiency benefits, it also, as demonstrated, comes with potential vulnerabilities.
Ignas Kalpokas, Julija Kalpokiene
Chapter 8. Broader Implications: Politics and Digital Posthumanism
Abstract
As shown in the previous chapters, deepfakes do have the potential to partake in important societal transformations and to contribute to large-scale and broad-ranging problems and issues that pertain to our social and political life, although not necessarily (or not always) in ways predicted by the more alarmist takes on the matter. However, deepfakes, and the underlying technology—GANs and, more broadly, deep learning—cannot be seen in isolation. Instead, they are part and parcel of broader processes involving humans, technology, and the natural environment. Generally speaking, such interrelationships are best analysed from a posthumanist standpoint. Therefore, this chapter is dedicated to wrapping up the analysis of deepfakes and their underlying technologies by pinpointing the way in which they contribute to the broader rejection of ideas around human autonomy and primacy while also questioning ideas around the very possibility of privileged access to reality.
Ignas Kalpokas, Julija Kalpokiene
Chapter 9. Conclusion
Abstract
The aim of this book has been to provide a nuanced and realistic assessment of deepfakes by exploring their most proximate environment, including their technological underpinnings and the broad transformations of the media environment, while also extending the thought process towards broader, more general developments, such as those encapsulated by posthumanist thought. In doing so, a picture emerges of deepfakes that can be, and in many ways are, dangerous, possessing the potential, if not to undermine entire political systems, then at least to destroy individual lives, but that also manifest potentially beneficial uses and even represent important societal and cultural shifts, e.g. towards algorithmic creativity. They are also illustrative of broader transformations of our societies—most notably, involving a thrust towards deprivileging the human self. As such, deepfakes are perhaps less dramatic (in the headline-grabbing sense) but, in fact, significantly more interesting and intriguing than previously conceived.
Ignas Kalpokas, Julija Kalpokiene
Metadata
Title
Deepfakes
Written by
Ignas Kalpokas
Julija Kalpokiene
Copyright Year
2022
Electronic ISBN
978-3-030-93802-4
Print ISBN
978-3-030-93801-7
DOI
https://doi.org/10.1007/978-3-030-93802-4