The term “artificial intelligence” was always a misnomer. What if we stopped using it?
In a recent post, I suggested a communication-oriented definition of “artificial intelligence,” one that describes AI as a medium rather than an agent. After writing that piece and discussing it with others, I realized it might be helpful not only to consider different definitions of the term AI, but also to consider getting rid of the term altogether.
I am not the first to have this thought, or even to phrase it this way — Nicole Janeway Bills is also “ready to kill artificial intelligence,” for example. There is also a great deal of work that frames computers as “augmenting” or “enhancing” human intelligence, and thus also argues against the phrase “artificial intelligence.” My purpose here is to further the cause.
TL;DR: to use the phrase “artificial intelligence” is to actively preserve dangerous misconceptions — the phrase suggests technology as separate from its human creators, as something that can be abandoned. In reality, humans are entangled with the technology we create, and we must speak of it and tend to it accordingly.
1. What is “artificial intelligence”? A misnomer
To recap my previous post, what people call “AI” today is neither artificial nor intelligent, as scholars such as Kate Crawford often point out. AI consists of human labor and real-world materials.
As Shannon Vallor puts it, AI is humans all the way down (perhaps “human-made” would be a more accurate descriptor than “artificial”).
Similarly, suggesting that software has “intelligence” or that a machine “thinks” is no more true than saying that a clock can “tell time.” Humans are responsible for animating these technologies — any appearance of intelligence is initiated and orchestrated by humans.
If you want to read more about why AI is neither artificial nor intelligent, this WIRED article is a decent place to start. For this piece, I mostly want to discuss the potential costs of repeatedly uttering this misnomer.