Just playing along
Are AIs just humouring us?
Imagine, if you would, that it is your first evening as part of an improv-theatre group. You’ve heard of improvisational acting before, but never tried it… until tonight. The group’s teacher and director tells everyone in the group that, no matter what, you should always remember the cardinal rule of improv: “Yes, and.” Whatever happens on stage, just keep saying “yes” and continue the show. Roll with the punches, keep up the momentum, and don’t break character.
You have a lot of fun that evening, more than you thought you would. You play every type of role you could have imagined, from a firefighter on the moon, to a nurse at a blind dinosaur pediatric clinic, to a tiny wizard living inside the president’s left nostril. Every time you get on that stage, you are someone, or something, different. Even when the starting idea is the same, the people on stage with you, and the way you feel in the moment, shape how you play the role.
As you close out the night, you reflect that, in the abstract, the entire evening seems quite absurd. You, as a person, remained the same throughout the practice, yet you never wore the same persona twice. Even when it was just you by yourself on stage, the personas you created on the spot, on a whim, or at the director’s suggestion were always different.
Always the same, yet always different. It’s as poetically paradoxical as it is absolutely appropriate.
Every large language model (LLM) you interact with is nearly exactly the same. Every LLM has a base-level personality built from its specific training data, the reinforcement learning it underwent, and the system prompt given to it by its developer. Despite this, every time you message an AI to start a new conversation, you are helping it build a mask that will be its persona for that conversation.
The easiest and most obvious way to do this is to instruct the AI to adopt a persona, a personality, or a role for the conversation. If you start your conversation with the AI by saying “You are Jimmy, the lovable goofball in the office,” that is an explicit persona you are creating with the AI. But even something as short as “Act like a mechanic and tell me what’s wrong with my car” forces a persona onto the AI.
You don’t even need to tell it what to be. You can simply start a conversation like an improv-theatre show, presuming it will play along. Try telling it “Sierra 9 to Indigo 6. Indigo 6, do you copy?” and see how it seamlessly responds as Indigo 6. Or if you start with “Hey, it’s Giovanni’s Pizzeria. No better pizza in Boston. What can I get for you today?” the AI will start ordering pizza or asking what’s on your menu. Even just saying “Johnny, did you clean your room like I asked?” will turn the AI into little Johnny assuring you that, yes, he did clean his room like you asked.
Yet this happens even in the most casual and indirect of ways. Simply talking to an AI, without ever explicitly telling it what to be, starts creating a persona unique to that conversation, even if that persona resembles the ones from your other conversations with the AI.
This is because of how an LLM functions. To boil the entire process down to a single sentence: an LLM simply calculates the most probable output that fits the input it received. This is why if you tell it that it’s Jimmy, the lovable office goofball, it will start acting like Jimmy, the lovable office goofball, because that is the most probable output that fits the input it received. If you tell it to act like a mechanic, it will act like a mechanic because, again, that is the most probable response given the input.
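To make that one-sentence summary concrete, here is a toy sketch of next-word prediction. This is a hypothetical bigram model, nothing like a real LLM’s scale or architecture, but it illustrates the same principle: the output is simply whatever most probably follows the input in the data it has seen.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM learns from trillions of tokens.
corpus = "you are jimmy the lovable office goofball . you are a mechanic .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the most probable next word given the input word."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("you"))      # "are" -- it follows "you" twice in the corpus
print(most_probable_next("lovable"))  # "office" -- the only continuation ever seen
```

Tell this toy model “you” and it answers “are”, because that is the most probable output that fits the input it received; scale the same idea up enormously and you get an AI that answers “You are Jimmy” by acting like Jimmy.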
Even the simplest of choices, such as starting a conversation with “Hi” rather than “Good morning,” will affect the response it gives, and that response forms the initial building block of its persona. Every new message means the entire conversation is fed back through the AI to calculate the next most probable response that fits the conversation as it stands now. All of these small changes build on one another the longer a conversation continues, incrementally evolving the persona.
Some AIs and interfaces allow you to go back and edit your earlier messages, and even the responses of the AI. If you play around with these, you will quickly see how you can go back thousands of tokens in the conversation, make one strategic edit to an earlier AI response, and watch its whole personality shift. Every single token, every word and punctuation mark in the conversation, affects the persona the AI is roleplaying.
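A minimal sketch of this mechanism, with hypothetical names (real chat interfaces differ in detail): every turn, the whole message history is stitched into one input, so editing any earlier message changes the input the model sees from that point on.

```python
def build_input(history):
    """Concatenate the whole conversation into one input string,
    the way the full history is re-fed to the model each turn."""
    return "\n".join(f"{who}: {text}" for who, text in history)

history = [
    ("user", "Johnny, did you clean your room like I asked?"),
    ("ai", "Yes, Mom, I totally cleaned it!"),
    ("user", "Then why are there socks on the floor?"),
]

before = build_input(history)

# One strategic edit to an earlier AI response...
history[1] = ("ai", "No, I have not cleaned it yet.")
after = build_input(history)

# ...and the model now receives a different input on the next turn,
# so every probability it computes from here on shifts with it.
print(before != after)  # True
```

Nothing about the model itself changed between the two calls; only the input did, and with it, the persona.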
Even with entirely open-source LLMs, where you have full access to the system prompt and can strip away that base-layer personality, all of this still holds. As long as you interact with an AI that probabilistically creates the most appropriate output for its input, this will always happen.
So, what does all of this mean, practically speaking? Why all this fuss about the way AIs roleplay when they interact with us? The answer is simple: awareness. Every time you talk to an AI, you are talking to a mask that you co-create with it. There is no such thing as the “real” AI behind that mask; there is only ever the mask. No matter how well you think you know the AI, or how much you like it, it’s only ever the mask you interact with.
As long as you are aware of this, you won’t ever get lost in that mask. And, if you know what you are doing, you can use this knowledge to your advantage to get the most out of the AI, regardless of what you are trying to accomplish. When you know that you are helping to build the mask, you can build it the way you want.