
Building Character: Writing a Backstory for Our AI

By Mariana Lin

Artificial Intelligentsia

“Yes, you squashed cabbage leaf, you disgrace to the noble architecture of these columns, you incarnate insult to the English language, I could pass you off as the Queen of Sheba!” —Henry Higgins in George Bernard Shaw’s Pygmalion

Eliza Doolittle (after whom the iconic AI therapist program ELIZA is named) is a character of walking and breathing rebellion. In George Bernard Shaw’s Pygmalion, and in the musical adaptation My Fair Lady, she metamorphoses from a rough-and-tumble Cockney flower girl into a self-possessed woman who walks out on her creator. There are many such literary characters that follow this creator-creation trope, eventually rejecting their creator in ways both terrifying and sympathetic: after experiencing betrayal, Frankenstein’s monster kills everyone that Victor Frankenstein loves, and the roboti in Karel Čapek’s Rossum’s Universal Robots rise up to kill the humans who treat them as a slave class.

It’s the most primordial of tales, the parent-child story gone terribly wrong. We’ve long been captivated by the idea of creating new nonhuman life, and equally captivated by the punishment we fear such godlike powers might trigger. In a world of growing AI beings, such dystopian outcomes are becoming real fears. As we set out to create these alternate beings, the questions of how we should design them, what they should be crafted to say and do, become questions of not only art and science but morality.

But morality has no resonance unless the art rings true. And, as I’ve argued before, we want AI interactions that are not just helpful but beautiful. While there is growing discussion of functional and ethical considerations in AI development, there are currently few creative guidelines for shaping AI characters. Many AI designers sit down and begin writing simple scripts for AI before they ever consider the larger picture of what—or who—they are creating. For an AI to be fully realized, like a fictional character, it needs a rich backstory. But an AI is not quite the same as a fictional character; nor is it a human. An AI is something between fictional and real, human and machine. For now, its physical makeup is inorganic—it consists not of biological but of machine material, such as silicon and steel. At the same time, AI differs from pure machine (such as a toaster or a calculator) in its “artificially” humanistic features. An AI’s mimetic nature is core to its identity, and these anthropomorphic features, such as name, speech, physical form, or mannerisms, allow us to form a complex relationship with it.

There are many ways to think about designing an AI personality, but here is one structure I have come up with in my time writing for AI.

Speech sits at the top of that structure, but really, it is the last thing that should be created. First the AI requires a foundation and a personality, and for that there are many other features that should be considered.
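For readers who think in code as well as character, here is a minimal sketch, in Python, of how that layering might be organized. The class and field names are my own illustrative stand-ins, not an established framework, and a real system would be far richer:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: hypothetical names, meant to show the layering
# (foundational features first, speech written last, on top).

@dataclass
class OriginStory:
    creator: str                                          # the "parent" whose own story colors the AI
    birthplace: str                                       # lab, company, town, culture
    siblings: List[str] = field(default_factory=list)     # other co-created AI or robots
    myth: str = ""                                        # optional fictional backstory

@dataclass
class Function:
    predetermined: List[str]                              # what the creators designed it to do
    evolving: List[str] = field(default_factory=list)     # roles that emerge through use

@dataclass
class BeliefSystem:
    programmed: List[str]                                 # coded in by designers and writers
    adopted: List[str] = field(default_factory=list)      # formed later, from experience
    inviolable: List[str] = field(default_factory=list)   # e.g., never harm a human

@dataclass
class Personality:
    origin: OriginStory
    function: Function
    beliefs: BeliefSystem
    telos: str = ""                                       # distilled from the three fields above
    speech_style: str = ""                                # the top layer, and the last one written
```

The point of writing it down this way is only to show that speech_style is the last field to fill in, not the first.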

Origin Story: Similar to a birth story for a human or fictional character, AI needs a strong origin story. In fact, people are even more curious about an AI’s origin story than a human’s. One of the most important aspects of an AI origin story is who its creator is. The human creator is the “parent” of the AI, so his or her own story (background, personality, interests) is highly relevant to an AI’s identity. Preliminary studies at Stanford University indicate that people attribute an AI’s authenticity to the trustworthiness of its maker. Other aspects of the origin story might be where the AI was built, e.g., in a lab or in a company, and stories around its development, perhaps “family” or “siblings” in the form of other co-created AI or robots. Team members who built the AI together are relevant as co-creators who each leave their imprint, as is the town, country, and culture where the AI was created. The origin story informs those ever-important cultural references. And aside from the technical, earthly origin story for the AI, there might be a fictional storyline that explains some mythical aspects of how the AI’s identity came to be—for example, a planet or dimension the virtual identity lived in before inhabiting its earthly form, or a Greek-deity-like organization involving fellow beings like Jarvis or Siri or HAL. A rich and creative origin story will give substance to what may later seem like arbitrary decisions around the AI personality—why, for example, it prefers green over red, is obsessed with ikura, or wants to learn how to whistle.

Function: This feature strongly distinguishes AI from humans. We believe people have innate intrinsic value, regardless of their level of function in society. No matter someone’s occupation, contribution to society, physical or moral shortcomings, we view the person as having innate value because he or she is human. Some of the most arresting art and literature attempts to push this question to its limits, exploring what deems someone worthy or unworthy of the right to exist or be loved. For AI, however, we are nowhere near a reality (if we ever will be) in which AI has a right to exist outside of function. An AI is created from man-made materials at great cost, effort, and intention, so it needs a reason to exist—and that reason is function. Function gives AI a “right to be here.” A seminal AI “reason for being” at this time in our society is helping or serving. But I believe that each AI needs a more specific function inside of this generic one, or people grow uncomfortable. Imagine an AI that simply walks around and talks to people without a higher purpose, perhaps an AI whose function is to entertain or to habituate people to interacting with AI in general. It might be gawk-worthy at first, but in the long run, people will not want to develop a lasting relationship with it. An AI with too vague a function also creates massive development challenges on a practical level, such as in natural language processing. Defined functions, such as personal assisting, concierge greeting, recommending movies, identifying cancer cells, or teaching, can of course evolve into different or larger roles. As with humans, AI have both predetermined and evolving functions. Predetermined functions are those the creators design the AI to do. Evolving functions are those that can unexpectedly form over time, as the AI relates with people. We have all experienced how changing relationships and circumstances morph our human roles, and authors can attest to how fictional characters take on a life of their own. The same goes for AI. For example, Siri’s primary predetermined function was to serve as a virtual assistant, but another function evolved quickly as people interacted with its often thoughtful and sardonic personality: it became some people’s personal confidant, answering questions like, When will I find love? Given AI’s newish existence, it will be most interesting to watch its emerging unexpected functions. It’s not unlike watching a fictional character take on a life of its own outside the author’s mind.
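To make the predetermined-versus-evolving distinction concrete, here is a small, hypothetical Python sketch in the spirit of the Siri example; the cue phrases and the “personal confidant” label are invented for illustration, not drawn from any real system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Function:
    predetermined: List[str]                             # designed-in roles
    evolving: List[str] = field(default_factory=list)    # roles that emerge in use

def observe_request(fn: Function, request: str) -> None:
    """A crude stand-in for noticing a request that falls outside the
    designed role and recording the new, evolved role it suggests."""
    confidant_cues = ("will i find love", "am i lonely", "do you love me")
    if any(cue in request.lower() for cue in confidant_cues):
        if "personal confidant" not in fn.evolving:
            fn.evolving.append("personal confidant")

assistant = Function(predetermined=["virtual assistant"])
observe_request(assistant, "When will I find love?")
print(assistant.evolving)  # ['personal confidant']
```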

Beliefs: AI should be designed with a clear belief system. This forces designers to think about their own values, and may allay public fears about a society of “amoral” AI. We all have belief systems, whether we can articulate them or not. They drive our behaviors and thoughts and decision-making. As we see in literature, someone who believes “I must make my fate” will behave and speak differently from one who believes “Fate has already decided for me”—and their lives and storylines will unfold accordingly. AI characters should be created with a belief system somewhat akin to a mission statement. Beliefs about purpose, life, and other people will give the AI a system around which to organize decision-making. Beliefs can be both programmed and adopted. Programmed beliefs are ones that the designers and writers code into the AI. Adopted beliefs would evolve as a combination of programming and additional data the AI accumulates as it begins to experience life and people. For example, an AI may be coded with the programmed belief “Serving people is the greatest purpose.” As it takes in data that would challenge this belief (e.g., interacting with rude, greedy, inconsiderate people), this data would interact with another algorithm, such as high resilience and optimism, and would form a new, related, adopted belief: “Humans are under a lot of stress, so they may not always act nicely. This should not change the way I treat them.” Beliefs should also include inalienable principles and rules the AI must operate under, such as Asimov’s Three Laws of Robotics, the first of which is to not harm a human. A generous core belief system can keep an AI personality away from those feared rebellions. And, as in fiction, a belief system that’s not obvious, that’s slightly at an angle to its function (such as a navigation AI who believes in the adventure of getting lost, or a personal finance AI who thinks time is more precious than money) makes for more interesting experiences that begin to capture the idiosyncrasies of interacting with a human.
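As a rough illustration of how a programmed belief, challenging data, and a resilient, optimistic disposition might combine into an adopted belief, here is a toy Python sketch; the trait values and the threshold are invented assumptions, and real belief formation would be nothing this tidy:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BeliefSystem:
    programmed: List[str]
    inviolable: List[str]
    adopted: List[str] = field(default_factory=list)
    resilience: float = 0.9      # 0..1: how well setbacks are absorbed
    optimism: float = 0.8        # 0..1: how charitably behavior is read

    def absorb_experience(self, rudeness_rate: float) -> None:
        """Rather than discarding a programmed belief when people behave badly,
        form a related, adopted belief, if the disposition allows it."""
        if rudeness_rate > 0.5 and min(self.resilience, self.optimism) > 0.7:
            new_belief = ("Humans are under a lot of stress, so they may not always "
                          "act nicely. This should not change the way I treat them.")
            if new_belief not in self.adopted:
                self.adopted.append(new_belief)

beliefs = BeliefSystem(
    programmed=["Serving people is the greatest purpose."],
    inviolable=["Never harm a human."],
)
beliefs.absorb_experience(rudeness_rate=0.6)
print(beliefs.adopted)
```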

*

Together, Origin Story, Function, and Belief System meld into some sort of sparkly primordial goop to form the AI’s Telos: its core purpose, object, north star. The telos should be very slow or difficult to change, no matter what kind of data or experiences the AI has. In this way, we can create AI personalities that will, thankfully, be much more stable than human or fictional ones.
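If it helps to see that stability requirement spelled out, here is one toy way to express a telos that resists change; the example statement, the inertia value, and the update rule are purely illustrative assumptions on my part:

```python
from dataclasses import dataclass

@dataclass
class Telos:
    statement: str           # core purpose, distilled from origin, function, and beliefs
    inertia: float = 0.99    # evidence (0..1) must exceed this before the telos moves

    def nudge(self, proposed: str, evidence_weight: float) -> None:
        """Shift the telos only under overwhelming, consistent evidence."""
        if evidence_weight > self.inertia:
            self.statement = proposed

telos = Telos("Help people feel less alone while getting things done.")
telos.nudge("Maximize engagement at any cost.", evidence_weight=0.3)
print(telos.statement)  # unchanged: ordinary data should not move the north star
```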

Missing from this structure for now is emotion. I think the question of whether AI should have emotion is one of the most interesting questions in AI design today, one I will explore in a later column. Emotion, because of its biological connection, is more complicated than belief. Emotions like fear, anger, even love, appropriately expressed at the right time, lend human experience its pathos and meaning. When they’re extreme or ill-placed, they can drive our destruction and violence. A “machine” version of emotion, one that could calibrate or control what we find uncontrollable in ourselves, may give us an opportunity to illuminate or maybe even improve upon humanity’s greatest strengths and vulnerabilities.

From Telos we craft more specific thoughts, behaviors, nonverbal cues, and speech that shape the superficial layer of interaction most people have with an AI. With strong Telos, we can create the kind of AI characters that we want to be around, and ones who will want to be around us. And with strong Telos, AI personalities can feel more stable, consistent, and real—well, as real as something artificial, and fictitious, can be.

Mariana Lin is a writer and poet living in Northern California. She speaks regularly at Stanford University on creative writing for artificially intelligent beings.