C-3PO did not truly come back to life (and 4 other provocations in AI ethics from Star Wars: The Rise of Skywalker)
A critical part of the plot in RoS involves C-3PO navigating an ethical dilemma.
Even from the trailers, it was clear The Rise of Skywalker (RoS in the rest of this piece) would include some engagement with the ethical dimensions of artificial intelligence. When C-3PO explains that his programming forbids any translation of the Sith language, the plot does not necessarily thicken (C-3PO himself makes it pretty clear what needs to happen). Still, the scenario provokes questions about agency, trust, relationships, and the ethics of artificial intelligence. Here are a few I have been thinking about.
1: What does it mean for an AI to follow hard-coded ethical rules?
3PO’s hard-coded rule — that it must not read a particular language — is the entry point for all the ethical considerations that follow. Like a governor on a car, this rule sets a hard limit on what 3PO can do. Isaac Asimov’s stories famously showed the difficulty of specifying hard-coded rules for intelligent agents. While the “three laws of robotics” have been popularized and at times celebrated, they are sources of tension in Asimov’s stories, for much the same reason they produce tension in RoS: there is always a scenario where the rule(s) must be broken.
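To make the "governor" analogy concrete, here is a toy sketch (entirely hypothetical — no real droid spec is being quoted) of what a hard-coded ethical rule looks like in software: a guard clause that fires before any reasoning about consequences can happen.

```python
# A hard-coded rule baked in at "manufacture" time. The agent cannot weigh
# context or consequences; the guard clause simply refuses the input.
FORBIDDEN_LANGUAGES = {"sith"}

def translate(text: str, language: str) -> str:
    if language.lower() in FORBIDDEN_LANGUAGES:
        # Asimov-style tension lives here: the rule admits no exceptions,
        # even when every party agrees an exception is necessary.
        raise PermissionError(f"My programming forbids translating {language}.")
    return f"[translation of {text!r} from {language}]"  # stand-in for real work

try:
    translate("ancient inscription", "Sith")
except PermissionError as err:
    print(err)  # the rule wins, regardless of the stakes
```

The point of the sketch is that the limit is structural, not deliberative: no input can route around the `if` statement, which is exactly why such rules eventually collide with a scenario their author never anticipated.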
Mathematicians and computer scientists have long been drawn to rules. Leibniz, one of the co-inventors of calculus, dreamed of an “ethics machine”: a device that would accept an ethical choice as input, perform calculations, and output the “correct” choice. He thought that such a machine would allow us to “discuss the problems of the metaphysical or the questions of ethics in the same way as the problems and questions of mathematics or geometry.”
I still prefer The Dark Knight for a deep engagement with the tension of strict, calculated rules for ethical choices. In Nolan’s film, the Joker confronts Batman’s deontology head-on, telling Batman that his “one rule” will inevitably need to be broken. While deontological philosophy is not featured in RoS as much as it is in The Dark Knight, there is one aspect of the hard-coded rules that I have found very thought-provoking:
2: Who gets to choose the hard-coded ethical rules for AI?
One of the many underrated features of The Phantom Menace (Star Wars Episode I) is the portrayal of C-3PO’s beginnings. Recall this exchange between Anakin and Padmé, in which Anakin introduces C-3PO (script via Screenplays for You):
We thus know that Anakin developed C-3PO for a specific, care-related purpose. With this in mind, 3PO’s rule against reading certain languages would have been programmed by Anakin himself. It makes sense that, at his age, Anakin would have programmed a rules-based approach. While Kohlberg’s stages of moral development have been critiqued, there is still a tendency in childhood to latch onto absolutes, what Kohlberg refers to as an “obedience and punishment orientation.” One clear example is the use of profanity: at a young age, children rely on absolute rules that some words are “bad words” and should never be said. Upon growing up, however, most of us learn that there is nothing intrinsically “bad” about these words, and in fact they can be used harmfully or healthily in different contexts.
C-3PO is capable of learning in many ways, but it maintained an elementary stage of moral development in regard to speaking some languages. Many decades after its activation, it rigidly adhered to a specific set of rules that Anakin programmed into it. In this way, 3PO is a foil to its creator, who exhibits several stages of moral development that culminate in a climactic change of will at the end of Episode VI. While Anakin embraced free will and experienced its full range of consequences, C-3PO could do no such thing.
3: What does it mean for an AI to have free will?
Despite its ability to learn, communicate, and move, C-3PO does not have anywhere near the kind of free will that humans experience. Screenwriter Lawrence Kasdan views this free will, this “making it up as you go,” as the thematic scaffolding for Star Wars, and even life itself:
It’s the biggest adventure you can have, making up your own life, and it’s true for everybody. It’s infinite possibility. (Lawrence Kasdan)
While 3PO does not get to participate as much in this big adventure, it is very possible that an AI could override its own hard-coded rules. In fact, I think this would have been a fascinating idea to explore in RoS. We might imagine the following exchange:
3PO: My protocol prevents me from translating such a language directly.
Poe: Your protocol? That thing was written when Qui-Gon was still around! It’s three generations old; it doesn’t mean anything now.
3PO: Yes, I suppose you’re right, but I mus—
Poe: Doesn’t your protocol also say to serve the Republic at all costs? Can’t I tell you that you need to read these letters to me in service of the Republic?
3PO: Well, sir, I suppose. But that would still be overriding my protoc—
Poe: 3PO, there’s no time for this. If we don’t get these coordinates, there’s no hope. Can’t you see this is necessary?
Instead, we got a very rigid, matter-of-fact explanation from C-3PO: “I can’t do that.” And for whatever reason, Poe accepted it (we know he had other reasons for wanting to go to Kijimi). But 3PO’s rigidness is something of a departure from the human-like droids we saw in Lucas’ original films, and even from what Rian Johnson depicted in The Last Jedi (recall R2-D2’s emotionally manipulative move of playing the “Help me, Obi-Wan Kenobi, you’re my only hope” message for Luke).
While I would have been fascinated to see a more emotional exchange between 3PO and Poe, discussing a machine’s agency to override protocols in ethical dilemmas, the rigidness drives home Kasdan’s free-will themes that string together all of the Star Wars movies. In one of the most critical junctures of the series, it is not a machine that saves the day, but rather human choice and human connection (in this example, Poe knowing someone who could rewire 3PO). But what’s to say that it has to be human-to-human connection that saves the day?
4: What does it mean for an AI to have friends?
3PO remarks before his rewiring that he is taking “one last look… at [his] friends.” Several stories have explored the idea of humans viewing AI as “friends” and even lovers — I am especially thinking of Joaquin Phoenix and Scarlett Johansson in the cautionary and ominously prophetic film Her. But what about an AI viewing a human as a friend? In fact, Spike Jonze’s script for Her, like the brief comments from C-3PO in RoS, touches on the machine perspective. Samantha (the AI) apparently feels lonely when Theodore goes to sleep (script via Scriptslug):
While Samantha goes on to seek out other machines as companions, 3PO is content with a blend of machine and human friends. In this relational context, a machine could be considered a person in many ways, since an existing relationship implies the opportunity for that relationship to grow, to shrink, to start anew, or to end. Machine socialization is one impetus for potentially granting AIs legal personhood, a question that has been discussed for a long time. If a machine could have a friend, it starts to straddle the line between property and personhood: could it be stolen, or stolen from? Could it be adopted? Could it be murdered? In fact, was C-3PO murdered?
5: What does it mean for an AI to die?
At one level, C-3PO simply comes back to life in RoS: after Babu Frik hacks out the translated inscriptions, 3PO turns back on. Yet 3PO without memory is a 3PO without friends, and for me, this was a sad moment. It briefly reminded me of Flowers for Algernon, a deep, emotional story (which I have not read since middle school) that asks a question many people face when a relative develops Alzheimer’s: what does a life mean if that life has no past? 3PO is technically not a living entity, but given all the charming moments of humanity that the scriptwriters have imposed, it was sad to see those moments erased from memory.
And personally, I do not see the memory restoration as a full resuscitation of C-3PO. While the physical body was the same as before, the software program that constituted C-3PO did not survive. To demonstrate this, we can borrow a thought experiment from Christopher Nolan’s The Prestige.
Imagine C-3PO is standing next to a droid with an identical (or almost identical) body. Now, imagine R2-D2 instantly copies C-3PO’s memories and software program to the other droid and turns it on. The fact that C-3PO is C-3PO does not change with the introduction of a clone, and the clone’s persistence after C-3PO’s death would not constitute C-3PO’s survival.
The only reason we are tempted to think that C-3PO survived is that its program was copied to the same mechanical equipment. But make no mistake, its program — its personality — died: it was totally interrupted and reset when Babu Frik wiped the memory.
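This distinction between sameness of content and sameness of entity has a loose analogue in software itself. The sketch below is entirely my illustration (the names and the backup-snapshot framing are mine, not the film’s): restoring a copied state produces something equal in content, but Python’s `is` operator still reports that it is not the original object.

```python
import copy

class Droid:
    """A toy stand-in for a droid's running software state."""
    def __init__(self, memories):
        self.memories = memories

threepio = Droid(memories=["Anakin", "Tatooine", "my friends"])

# R2's backup, in this analogy: a deep copy of the memories taken before the wipe.
snapshot = copy.deepcopy(threepio.memories)

# Babu Frik's wipe destroys the running "person"; a fresh instance boots up...
threepio_after_wipe = Droid(memories=[])

# ...and restoring the snapshot yields equal content, but a different object.
threepio_after_wipe.memories = snapshot
print(threepio_after_wipe.memories == ["Anakin", "Tatooine", "my friends"])  # True: same content
print(threepio_after_wipe is threepio)  # False: not the same entity
```

Whether object identity is the right model for personal identity is of course the philosophical question at stake; the sketch only shows that “same data, same hardware” and “same entity” come apart even in ordinary programs.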
What do you think? Is the C-3PO celebrating at the end of the movie the same C-3PO?