Two realities and the ethics of honesty
We could interpret Tognazzini’s distinction between two realities, a real one (the reality of the magician) and a fake, illusory one (the reality of the spectator), as a familiar metaphysical position that goes back at least as far as Plato: there is reality versus illusion, there are appearances versus the real. Plato’s metaphor and myth/narrative of the cave, which he famously presented in his Republic (514a–520a), is applicable here, with the magician in the role of the artist-craftsman creating the illusions by using all kinds of props (a wall, a fire, various objects) and in the role of the all-knowing philosopher who can distinguish between appearances and reality. The prisoners, by contrast, are the spectators who live in illusion. The only difference from Plato’s metaphor seems to be that at least some spectators of stage magic have the desire to find out about reality and want to understand the tricks. (Note that this metaphor of the prisoner also explains why there is often an adversarial relationship between magician and spectator, as recognized in the literature mentioned above: the spectator wants to find out, but the magician imposes the magic on the spectators. This could be seen as an issue concerning power, which I will explore at the end of my paper.)
The normative implication of such a position, then, is the imperative to keep reality and illusion separate. Perhaps it is fine to be a “prisoner” of the magician during the show, but one should know that it is only an illusion. And indeed Tognazzini proposes an ethics of honesty: during the show spectators are provided with an illusion and should think that the magician is supernatural, but outside the theatre magicians do not claim to be supernatural. In other words, we should maintain a distinction between illusion and reality:
the magician is not supernatural; the character he plays is. The computer is not capable of human intelligence and warmth; the character we create is. People will not end up feeling deceived and used when they discover, as they must ultimately, that the computer is nothing but a very fast idiot (Tognazzini 1993, p. 361)
For use of ICTs, this position thus means that users of, for instance, robots, talking machines, virtual reality technology, and games should be made aware—through design, advertisement, and other means—that what they experience is an illusion, created by science and technology. It is fine to create illusion, for instance through a virtual reality device, as long as it is clear to the user that it is an illusion. Devices then need to be designed and promoted in such a way that the user gains or retains this awareness, even if during use of the device this awareness may be temporarily suspended. Compare again to the show of the magician: outside the theatre, we know that it is a show and that it is deception. Thus, we can conclude from this position that the magician or designer should work in such a way that the user is provided with an illusion, but at the same time knows that the reality created by the computer is not real, or that the computer, robot, app, etc. is technology, not a person.
This ethical position is in line with positions that see technology as a mere instrument and hold that it should be a mere instrument. For instance, Bryson (2010) argued that robots are not persons; they are there to work for us: they are slaves. Consider also again the view I mentioned before by referring to the Sparrows: that robots should be designed as what they are, rather than pretending to be what they are not. This seems a reasonable position. Tognazzini’s interpretation of what happens in stage magic and illusionism is compelling, as is his application to interface design and—by extension—to the design of ICTs. His ethics is also rather attractive since it is in line with many ordinary intuitions we have about technologies such as robots.
However, I am afraid we must complicate matters now. The rationale for doing so is twofold. First, when we look at experience and use of ICTs, we see that the phenomenology of this use and experience is sometimes difficult to describe in dualist terms, in terms of two realities. When we use the internet (e.g. through our smartphones), play games, talk to robots, etc., it seems rather that what we call “real” and “virtual” are mixed. Floridi (2015), for instance, has used the term “onlife” to emphasize how it becomes increasingly difficult to describe our use of the internet in terms of “online” versus “offline”. Similarly, we could say that when we interact with smart devices, human-like autonomous robots, voice interfaces such as Google Assistant, and so on, we do not generally experience this as “illusion” in contrast to “reality”. Now according to the Platonic position, this only shows our imprisonment in appearances. It shows how the companies developing these tools manage to give us the fake as opposed to the real. It is a lie. While this argument seems compelling, and can again be combined with an ethics of honesty (which could be developed in terms of a virtue ethics), it is not entirely satisfactory. Is our experience that there is only one experience and reality when we use these devices, that they are part of and entangled with our lives, entirely misguided? Or shall we at least also consider other ways to conceptualize what is going on, while still being able to criticize certain phenomena such as attachment to machines?
Second, in the history of philosophy, especially the history of twentieth-century philosophy, we see that the Platonic position is far from uncontested. There are all kinds of other, less dualist or non-dualist metaphysics available. Perhaps there are only appearances, perhaps there is no-one behind the mask and no “real” behind the curtain. Perhaps there are multiple realities, or different perspectives, or different levels of description, or different levels of abstraction (in the context of thinking about ICTs see Floridi’s philosophy of information, for instance). Or perhaps there is just one reality—natural or informational, for instance. These are all distinct metaphysical positions which are highly relevant to discussions about the ethics of ICTs. We can connect them to discussions in the history of philosophy, for instance anti-Platonic positions in Nietzsche, Dewey, and so on. In this paper, I do not have the space to discuss these at length, let alone to engage with the history of contemporary philosophy to which these positions must be related. For my purposes, it will suffice that I try to construct a plausible non-dualist alternative to the Platonic position articulated above, a working approach so to speak. This approach is influenced by, and will be connected and integrated with, relational ways of thinking in contemporary machine ethics (Coeckelbergh 2012, 2014; Coeckelbergh and Gunkel 2014), and further applies process- and performance-oriented thinking (Coeckelbergh 2017) to thinking about technology. Then I will discuss its ethical implications.
Let me construct an alternative position by using three concepts: process, narrative, and performance.
First, the Platonic metaphysics, at least as presented above, is remarkably static. If we describe what goes on in stage magic and in the use of ICTs in terms of two realities, what is left out is process. As becomes clear from descriptions of stage magic in the literature cited above, illusionism is a temporal affair: it takes place in time, and it is even a particular configuration of time, in the sense that there is the experience time of the spectator (in a dualist framework called “subjective” time, or durée in Bergson’s terms) and the experience time of the magician/designer/programmer/hacker etc. (“objective”, scientific time). If we shed dualistic thinking, however, we simply have different times and experiences that intersect (or not), without necessarily giving ontological priority to one of them. For Tognazzini, magicians manipulate time in the following way:
Magicians use two techniques to offset the actual time a trick (the essential working of the apparatus) takes place from the time the spectators think it takes place: Anticipation, where the magician does the trick early, before spectators begin looking for it, and Premature Consumption, where the magician does the trick late, after spectators assume it has already occurred (Tognazzini 1993, p. 359)
Thus, there are indeed different times: the time of the magician and the time of the spectator. With regard to ICTs, this means that there is the time and timing of the program known by the designer and there is the time and timing of the user. But instead of seeing these different times in terms of what “really” goes on versus what is illusion, or instead of ‘offsetting time of reality from time of illusion’ and speaking in terms of ‘actual’ time versus apparent time, as Tognazzini does (p. 359), we could see these different times as belonging to one reality, not understood as a static world but as a process or a combination of processes.
Second, in order to move beyond dualistic thinking about these experiences but still distinguish between the experience and actions of designers and those of users/spectators, one could also talk about two narratives, which may or may not interlock at different times. There is the narrative of the magician/designer, including a plot with a character (the magician as artist, craftsman, scientist, and so on) and events happening (e.g. the coin is moved into the pocket of one’s jacket). There is also the narrative of the spectators, which in the cases of “deception” under consideration in this paper tends to differ from the narrative of the magician/designer, but also has a narrative structure which involves a plot with characters, including the magician as magician, as a supernatural being [indeed Tognazzini says that the magician plays a ‘character’ (p. 361)], and events such as the disappearance of a person. In a dualist framework, this interplay is framed in terms of the “real” narrative versus the “illusory” narrative. But one could also see two narratives, without giving one ontological priority.
Moreover, the two narratives may be entangled to some extent and in any case they are connected. If a person uses a robot as companion, the narrative of personal companionship and the narrative of the computer program running the robot may be very different, but in practice, in use and experience, they are connected. Sometimes narratives merge, as in augmented reality or alternate reality games. Consider for instance the game Pokémon Go, which involves people searching for fantasy characters outside on the streets using their smartphone. There is the narrative of searching for Pokémon creatures and there is the narrative of the gamer crossing the street. Both narratives combine if and when the gamer crosses the street in order to look for the creature. Thus, if we consider the use, experience and phenomenology of the gamer (rather than taking a third person perspective), there is a sense in which there is one narrative.
In addition, this one narrative is connected with the narrative of the code running the application, which makes possible the game narrative. Indeed, it must be emphasized here that the narratives and the times of the gamers are configured by the technology; these ICTs are what Coeckelbergh and Reijers, influenced by Ricoeur, have called ‘narrative technologies’ (Coeckelbergh and Reijers 2016): like a text, they configure characters and events into a meaningful whole. The text of the code thus acts as a kind of author, or at least co-author, of the narrative of the gamer. But neither the narrative of the code nor the narrative of the gamer is more “real”; the narrative of the game and the narrative of the gamer mix, without it being possible to say that one is more “real” than the other. One can use these terms from a third person perspective, of course, but if one tries to describe what happens by using the terms “real” and “illusion”, it is difficult to make sense of the experience of the gamer, who does not see his crossing of the street or his interaction with the robot as illusory; rather, there is one unified, integral experience. In the phenomenology of the game play (and the phenomenology of human-robot interaction, use of speaking computer interfaces, etc.) there is no Platonic dualism; there is one game experience and one use experience.
Third, to further elaborate this approach one could also use the term performance. The metaphor of stage magic and illusionism of course already hints at performance. Indeed, Tognazzini writes:
Magicians work to produce illusions, but they don’t call their stage presentation an illusion, they call it an act (Tognazzini 1993, p. 356)
Now we can use this part of the metaphor and have it do some philosophical work, which again differs from the Platonic scheme Tognazzini uses. Following his own advice in the quote above, we can replace the language of reality and illusion with that of performance and act. There is one act, one performance. Or perhaps there are two performances, one done by the magician/designer and one done by the spectator and user, who also performs. Or, perhaps still better: there is one co-performance, in which both the designer/performer and the user/performer participate. Indeed, what is missing in the account presented by Tognazzini is the user/spectator in a more active role. In the Platonic cave metaphor/narrative, the prisoners are passive. They watch. They are even immobilized. Similarly, in Tognazzini’s account, the spectator is also relatively passive. In the magician’s show, spectators are literally immobile, sitting on a chair. With a few exceptions they do not participate in the performance. And it is assumed that the creation of the illusion takes place entirely on the side of the magician. But this is misleading for at least two related reasons. First, consider again stage magic and illusionism, the metaphor itself. Performance can be seen as a one-way affair, but we can also take a different view, according to which the spectator does not passively receive meaning from the magician, but actively co-constructs time, narrative, and performance. In that sense, the spectator is indeed a co-performer. Without the spectator, there is no act, no performance. To the extent that this is hidden by the metaphor of stage magic, the metaphor has its limitations. But an appropriate understanding of what is happening here, and a better understanding of the metaphor itself, reveals this more active role of the spectator. Second, consider now the use of ICTs. The metaphor of stage magic almost hides that ICTs are used and that this use is part of practices.
Users are not (mere) spectators; they do something, they perform. The alternative approach I articulate here, with its temporal, narrative, and performative turn away from Platonism, is only possible by considering ICTs in their use and experience. It is only by considering ICTs in their use and experience that the real/illusion dualism is overcome. If we only look at design, as Tognazzini and others do, then we miss this aspect and easily remain within the Platonic framework, which in practice is shared by many designers, engineers, and scientists working in fields such as social robotics, engineering, and so on, but also by many philosophers. Then we see what is happening from a third person point of view. But we need to move beyond the language of “a view”, and especially an outside view. There are processes, there are narratives, and there are performances, in which not only designers but also users are actively involved. Once we consider the performance and experience of the user of devices that “deceive”, the Platonic way of thinking evaporates. Then we see that there are techniques and technological artefacts used by designers, but also that there are techniques and artefacts used by the users of ICTs.
For example, if a person uses a robot as a companion, then we may distinguish between at least the following performances: there is the performance of the designer, who writes the code using a computer and computer programs, but there is also the performance of the user, who uses the robot, and there is the performance of the robot, which may use all kinds of artefacts. All these uses, experiences, and performances are part of a whole; they are connected through time and narrative, and through artefacts (especially the robot). There is also the performance of the company that wants data from the user. Now all these performances are “real”, and they involve various kinds of techniques, bodies, and artefacts. To describe what is going on only in terms of a deception designed by the designer/magician reduces a rich holistic performative configuration and process to only one performance, and—by using the term deception—gives ontological priority to one particular performance as opposed to others. Similarly, to focus on what the robot “really” “is” as opposed to the “illusion” is to blind the analysis to all kinds of relations between this robot and various performances. The robot is embedded in narratives-in-the-making, there are processes and performances. If there is a reality at all, it is not a static “world” which we can “view” but a process-reality that is made and performed.
Moreover, as the “narrative technologies” approach mentioned above already suggested, technology plays a more “active” role in these processes, narratives, and performances. Consider also Pickering’s reading of Latour (1993): Pickering argued that there is human and material agency (p. 21), that humans and machines ‘collaborate in performances’ (p. 16), and that there is ‘interplay here between the emergence of material agency and the construction of human goals’ (p. 56). This gives us a different view of stage magic and of the use of ICTs, in which the magician/designer is no longer totally in control of the performances. Instead, both users and machines co-write the narratives, co-configure the time/experience of the user, and co-perform. At a meta-level, then, instead of talking to the all-knowing Platonic philosopher in order to understand what is going on, we have to take advice from the users and performers: the magician/designer as user of technology and as performer, but also the users of technology, the performers of technology.
To conclude, according to this more holistic and relational alternative approach that takes a narrative and performative turn, instead of asking “What is real and what is illusory?” (the Platonic question), now the main question is “What is going on?”, understood here as: what is going on in terms of time, narrative, and performance. This gives us a novel way of looking at the “deception” issue. If there is “deception” and “illusion” at all, it is a deception and illusion that are made in performances, and that are co-created and co-performed by humans (magician/designer and spectator/user) and non-humans (robots and other machines, artefacts, and devices).
This gives us a different approach to “deception” phenomena than, say, Turkle’s or Sparrow and Sparrow’s, and suggests not only that we can dispense with a derogatory view of performance, but also that we do not need to use the language of deception. First, in contrast to Turkle’s use of the term, here performance is not seen as a negative term indicating illusion, but, decoupled from deception, magic, and illusion, it becomes a morally neutral process which involves humans and non-humans. Second, there may still be an ethical problem with robots that “pretend” to have feelings, but this phenomenon, and indeed problem, should not be framed in terms of pretence or illusion or “performance” as a negative and derogatory term, but in terms of performance as a morally neutral term that metaphysically brings together humans with humans, and humans with non-humans. Humans and technologies co-perform and co-stage something here. Now in some cases this performance can rightly be seen as problematic: not because there are two different realities, but because there may be a problem with the performance and its consequences, or one could say that there are two conflicting performances. Let me unpack this.
What happens in so-called “deception” cases is that, on the one hand, the performance is successful, for instance in creating a robot with emotions. If the performance is successful, then in the experience of the viewer/user, there is not the “appearance” of emotions, there are emotions. On the other hand, at a different point in time or when viewed from the outside, the performance fails: it fails if and when others (e.g. philosophers or the same users at another time) think and say these emotions are not real (which is also a performance, one which uses language). Success and failure might also happen with regard to different groups at the same time. One group of users may experience the performance as successful, whereas another group of users may experience the same performance as unsuccessful. In such cases, instead of a deception problem, we have a performance problem: it is not entirely successful. This need not be problematic if everyone knows it was a show anyway; but it is problematic if the claim was that, for instance, the machine has emotions. Moreover, use of the language of deception is itself a performance, a counter-performance so to speak, which does not stand outside the performative field. In the case of the different user groups, one could say that there are different kinds of performances, and the term “deception” is used in a third performance to mark the difference between the successful and the unsuccessful performance. One can also reframe the problem in narrative terms: what is missing here is not “the real” as opposed to “illusion”, but success or failure on the part of the designer and the robot—but also the user/spectator—to co-write a particular narrative, for instance a love narrative or a companion narrative. Or again there may be two conflicting narratives: one about love and one about deception.
The ethical question then concerns the ethical quality and consequences of these performances and narratives (and indeed of this “battle” of performances and narratives) for the people involved. Is it good that young children get involved in a narrative of companionship with a robot? Is it good that a particular adult co-performs sex with a robot? Is it good that elderly persons with limited cognitive abilities become involved in a performance of care in which robots play a specific role? To answer these questions requires neither a discussion of metaphysics nor framing in terms of deception; it requires us to attend to the specific human–robot interaction as a performative and narrative process in which the experience and co-performance of users counts. There may well be a difference between performances and also a difference in ethical quality, but that difference is a matter of (relational) phenomenology, of experience-in-relation and experience-in-performance; it is not a metaphysical or theoretical-scientific difference between what the robot can do and not do (the properties of the robot), the reality of the world, the nature of emotions, and so on.
A relational turn
This move also invites us to connect with a relational approach to human-robot interaction (e.g. Coeckelbergh 2012, 2014), which enables us to criticize the distancing involved in deception language. Those who use deception language or assume a real/illusion distinction tend to take what Nagel called a “view from nowhere”. While in general a third person point of view may not necessarily be problematic and probably is unavoidable, the very distant and detached view of the scientist qua scientist and philosopher qua metaphysician is problematic since it neglects the concrete relation between human and robot, human and human, and so on. By focusing on the properties of the robot (what the robot can do or cannot do, what the robot is or is not, has or has not, e.g. emotions or not), what remains out of sight is the concrete relation, encounter, (co-)performance, and experience. The ethical quality of the performance and whether or not it is successful is not a matter of what the robot is, has, or can do, or what the user/viewer/audience is, does, and so on, but of what happens in the relation between the two, here cast as: what happens during a (co-)performance. In the performative process and experience, there is no robot-in-itself and no human-in-itself; both are co-constituted in the performance and in the relation. If there is a so-called “deception problem” then this must be understood as a relational problem: one which does not concern the robot but the relation between human and robot as performed. What happens in this relation needs ethical analysis and judgment. Moreover, such performance relations invite other performative interventions, such as the voice of the designer-roboticist, the philosopher-ethicist, and so on, who may or may not use the language of deception as a performative gesture—interventions which do not stand outside the performative field, and could themselves be criticized, for instance as involving too much distance. The so-called “deception” issue is then not about “the real” or about what emotions “are” but is rather a problem concerning the performances and narratives humans and robots create and should (not) create in specific cases, situations, and contexts, and about the ethical quality and consequences of these performances and narratives.
Thus, the advantage of this re-description and re-evaluation in performative and relational terms is that it is now possible to ethically evaluate the relational process, performance, and experience itself, indeed the relation itself and its consequences, without having to invoke a third, distant metaphysical entity such as “reality” (the real world, real emotions, etc.), the “nature” of emotions, etc., or abstract scientific-theoretical concepts such as anthropomorphism, which blind us to the quality of the concrete relation, encounter, and performance. Of course roboticists, human–robot interaction scholars, ethicists of robotics, etc. often also start from the concrete experience. But they then take distance and turn these experiences into “cases” with their theories and generalizations. And when, in their evaluative moments, they use deception language or make Platonic assumptions, their use of language takes distance from the concrete performative-narrative and relational process, and to the extent that they do this, their performance becomes itself problematic.
These qualifications are important: the aim of my proposal to use a different language—that of performance, narrative, and relations—is not to discredit the work of authors such as Turkle, the Sparrows, and so on as invalid, entirely wrong-headed, and so on. Generally, these authors pay a lot of attention to concrete human–robot interactions, especially Turkle. My “only” suggestion is that those interested in better understanding and evaluating contemporary (social) robotics need to be careful and critical when using terms such as deception, when using scientific methods and theories, and when assuming metaphysical distinctions concerning the real, etc., and should consider using alternative terms that do more justice to performative and relational experience—experience on the part of all people involved. This includes users, but also designers/engineers, roboticists, philosophers, social scientists, etc., since they may also be involved and co-create different performances and narratives with robots. Hence they cannot assume the role of “neutral” and distant observer; they themselves, with their science, criticism, interventions, gestures, etc. (which are also performances), influence and even co-constitute the performance, relation, and their meanings and ethical consequences.
To conclude, I have proposed to re-frame and further analyze the “deception” problem by using the notions of performances, narratives, and relations, which may help to avoid assumptions and discussions concerning, for instance, a metaphysics of the real, what robots can do, what emotions really are, etc., and which brings the discussion back to human-robot use and interaction as an experiential and performative-narrative process which must be analysed and evaluated in terms of the quality and consequences of its performances and experiences, including their ethical quality. As scientists or philosophers we can do this from a third person point of view, if we must, but then we need to make sure that it is one that stays close to the phenomena and starts from there to develop a better understanding and, if necessary, an ethical judgment. And although an increasing number of researchers may seem to do this, there is always the danger of a too distant theoretical or metaphysical attitude. Moreover, the arguments and discussion presented so far, with all their difficulties and potential pitfalls, suggest that we are only at the beginning of achieving a better understanding of the phenomena. I have argued that the moral language of “deception” and “illusion” may hinder rather than help in this process.
Now what does this approach mean for ethics of ICTs? And what does it imply for ICT design and use?