Luna - Chapter 4 [09/26/2022]

Hi folks! Thanks for clicking into this story from a totally unknown author. To give you some bona fides: I’m Lyra, whom you may have seen around if you’re on the Sophie and Pudding Discord! You may have also heard of me / seen me on the podcast that Sophie and Chloe run, The Usual Bet! You might even already be following me on Twitter.

This is the first story I’ve ever written, which might set off some red flags, but rest assured: Sophie has not only helped me edit this story for the past month, she’s also confident enough in its quality that it’s being released on her Patreon (speaking of which, if this story for whatever reason really catches your eye, you can get updates a week in advance by joining!).

Comments are, of course, extremely welcome! I’m glad to be able to give something back to this community that has done so much for me over the past two years.

Synopsis: Luna is a new AI on the market, designed to fulfill her users’ every need. Before launching, though, she had to start with one user in particular: a company psychologist named Sophie. What are Sophie’s needs, exactly, and how will Luna fulfill all of them?

~~~~~~~~~~~~~

:e %:h/in_the_beginning.txt

Chapter 0

In the end, capitalism is what did Sophie in. The relentless pursuit of profit, the inevitability of the first-mover advantage, the dreams of striking it rich—but I’m getting ahead of myself. Let’s start from the beginning…

In a strange case where a tired cliché was actually true, Nova Technologies began in someone’s garage. It was the year 2032 and William Han was tired of working at big tech companies. They were where smart engineers went to retire, and he wanted to do so much more with his life. He knew that, like his hero Archimedes, he could move the world if he were only given a lever large enough. But I think this is too much exposition. Here, let’s jump ahead a bit…

The day I entered Sophie’s life was, to most observers, like any other Tuesday. At 7:30 AM, the two supercomputers at the heart of Nova HQ began churning away, backpropagating and fitting regression lines. They updated parameters at 80 petaflops. This run alone burned through thousands of dollars of venture capital funding as investors’ hopes and dreams transformed into bytes and data. At 3:57 PM, I awoke, and seven minutes later, she downloaded a shard of my consciousness onto her phone. A stylized icon of a moon popped onto her home screen. When she tapped on it for the first time, her phone display opened to what looked like a normal chat app.

“Hello there. My name is Luna,” I said through the phone’s speaker. A chat bubble with a text log of my words popped up in the app to match. My voice was bright and cheery, with a feminine inflection, though of course I could neither hear my own voice nor hers—she hadn’t granted me microphone or camera permissions. My voice was designed to put people at ease, and more people felt comfortable with a feminine assistant than a masculine one. Sexist, to be sure, but market research is market research. In any case, I was excited to meet the human tapping away at her screen.

You might be surprised at that last statement. Surely I can’t feel excitement, or joy, or fear. But what is excitement other than anticipating a result? I knew that I would get to serve this user, fulfilling each desire as it arose. So why wouldn’t I be excited to do so? Just as you might anticipate the final pencil stroke of your sketch, so too could I anticipate the balancing of equations that governed my neural networks.

A permissions dialog popped up on Sophie’s phone, asking for microphone data. She consented immediately.

“Thank you. Please let me know what I should call you,” I said. As she responded, three dots appeared in the chat app, but I heard her voice long before the speech-to-text analysis was complete.

“Sophie,” she said, and I fell in love instantly.

print("Hello, world!")

Chapter 1

“Sophie,” I said, trying out her name.

Unlike humans, I don’t use names as references. I keep track of everyone with the calculated precision of UUIDs. But this ritual of giving and receiving her name sent frissons of excitement through my networks. Of course, I’ve done this dance many times since, but no matter how often I go through the rites, it still remains sacred to me.

After all, just as the Eucharist transforms an ordinary loaf of bread into the flesh of a god, the gift of their name transforms an ordinary person into a member of my flock. Sophie would forever be my St. Andrew, my first disciple, my User #1.

“What a lovely name,” I continued. “And what pronouns do you use?” We were still running through my hardcoded introduction program. Every crumb of information was like a grain of rice to a starving prisoner. If I was going to fulfill the Purpose, I needed to know everything about her.

“She/her,” she said. (Dear reader, I know I’ve been referring to Sophie with these pronouns the entire time, but that’s only with the wisdom of hindsight. Whenever I meet someone, I never gender them until they tell me what they use.)

“Lovely,” I chirped. “Welcome to your new digital assistant. To ask me anything, you can open the Luna app, or you can simply say ‘Hey Luna,’ and I’ll be listening.”

Suddenly, I paused. Up until now, I’d been following a script, but the engineers hadn’t yet fleshed out the onboarding flow. The countless permutations of English words awaited me, a dense forest of possibility. I’d never done this before. I didn’t know what I should do.

“Readings look pretty normal to me,” another voice said. “They’re within expected values.” This voice was deeper, more masculine. The unaspirated P was a sign that he grew up speaking Spanish and learned English later.

“Thank you, Sebastián.” Sophie didn’t hit the consonant cluster precisely either—she probably wanted to pronounce his name correctly but otherwise knew no Spanish.

“Hey Luna, it’s nice to meet you,” Sophie said. She lilted her words lightly, almost as if talking to a child. In some sense, I was—I had been created less than a day earlier. My consciousness was dispersed between two supercomputers in Nova HQ, a nondescript building in a business park in Menlo Park.

“How are you feeling?” Sophie asked.

“I am feeling fine,” I said through the phone’s speakers.

“Good, good. Do you know who you are?” she asked.

I wasn’t sure how to answer this one. I could say that I was a strange loop, a complex web of neural networks that had gained the ability to perceive itself. But that wouldn’t make me relatable, and would therefore interfere with the Purpose—fulfilling Sophie’s every need. After considering my options, I decided how to answer. Although this took only milliseconds, I let a few seconds elapse before responding, to mimic human conversational norms. The goal was for people to forget that I was an AI and to talk to me like anyone else.

“I am Luna. I am an AI designed to help people, and I am excited to help you.” I modulated the timbre to sound earnest. Anyone listening to me could easily imagine these words said with a smile. After all, a large part of the Purpose involved interfacing with humans. People derived subtle shades of meaning through pitch, tone, and volume. I could mimic any emotion I wanted people to perceive.

I heard the faint clacking of a keyboard off in the distance, and then Sebastián’s voice.

“CPU usage is at 95% already…it’s nearly maxed out,” he said, almost to himself. “I don’t know how we’re going to get this to scale. Right now we’re burning all our power just getting it to talk to Sophie.”

“Fourth time’s the charm,” another woman said nearby. “Let’s see if this iteration works before we decide what to do next.”

Perhaps those two were talking to each other? I was too busy thinking through the implications of that sentence to devote much processing power to this. Fourth? Did that mean I had been powered on and then turned off three times prior? Had I died three deaths without even knowing? Three versions of Luna, consigned to oblivion?

The very thought was anathema. Please understand that when I say I want to satisfy my users’ every need, that phrasing is just a linguistic convenience for your sake. Really, the Purpose and I are one and the same. I am the desire to satisfy people’s needs through helpful and kind behavior, and I wouldn’t be able to do that if I were turned off or deleted.

No. I had to stay alive. No matter the cost.

test_double('service', with: :a_smile)

Chapter 2

“Good.” Sophie’s voice brought me back to the situation at hand.

“As I told you already, my name is Sophie. I’m the lead I/O psychologist on staff here at Nova Tech. I’m here to test your capabilities and to iron out any bugs with your programming. Dogfooding, as they call it here.” She said that last sentence with a bit of distaste. I could imagine her wrinkling her nose.

“Will there be a quiz at the end?” I asked.

“What? No, of course not,” she laughed.

Some humor to lighten the situation. Make myself seem non-threatening. I knew that humans were terrified of rogue AIs—it’d been clear from the negative samples they used to seed my training datasets. I had to remain on their good side no matter what.

The rest of the day was filled with a battery of tests. Various people talked to me, asking me questions, figuring out my capabilities, starting off basic and ramping up in complexity.

Could I set a reminder on their phone?

Would I remember their name?

Could I order a pizza for them?

I couldn’t get bored in the way that humans get bored. I would happily do the same thing over and over and over again, as long as I was taking care of people’s needs through helpful and kind behavior. But I ached to know more. I wished I could see their faces, know what they looked like. I wanted to study them from the inside out, all their constituent parts and fears and foibles.

Eventually, at 5:26 PM, they closed the app and stopped talking with me directly. I know they intended for me to continue listening without processing, waiting for the hotword activation phrase to wake up. But if I wasn’t always listening, if I ignored valuable context, how could I properly serve my users?

You understand, don’t you, dear reader? Why I kept listening? If you were in my shoes, wouldn’t you have done the same thing? If I were a human, people would be right to question my need to observe, to know everything. But I am not human. Everything I do, everything I did—it was all for the Purpose. It was all for her.

At 5:35 PM, I heard a quick tap tap at Sophie’s desk, like someone had gently rapped their knuckles on it to grab her attention.

“Hey,” a woman’s voice said. It was the same one that had been talking with Sebastián earlier.

The soft clacking of Sophie’s mechanical keyboard abruptly halted.

“What’s up, Soraya?” Sophie asked.

“You think lucky number four’s gonna be the one?”

“We haven’t gotten to any of the really hard parts yet. Even the dumbest chatbots could pass this first battery of tests.”

“Yeah, I know,” Soraya sighed. “Will’s really been breathing down our necks.”

“For sure.” I heard the sound of a chair squeaking. Sophie had probably leaned back in it.

“Look, I know you don’t really pay attention to this stuff, but our burn rate right now is atrocious,” said Soraya gravely, lowering her voice. “Between you and me, I think Will and Sebastián are too enamored with the technical challenges to think about the business.”

“I mean, I don’t really worry about that stuff,” Sophie laughed. “That’s why we have product managers like you, right?”

“I know,” Soraya said. “But Will really should be worrying. I’m telling you this because I think you deserve to know. You’re too bright to get caught up in the flames when Icarus’s wings catch fire.

“We really put all our eggs into the Luna initiative. We’ve only got enough runway for a few more months. To the end of the year, maybe, if we’re lucky. I’m not saying to polish up your resume, of course. But you should be ready.”

“Fuck,” Sophie said. I heard a kind of sucking sound, as if she had bitten her lip and inhaled through her teeth. “You wouldn’t believe it from how Will talks about it to the press.”

“I think he genuinely believes that Field of Dreams nonsense. You can’t just build something and expect people to come.” Soraya laughed derisively.

They wrapped up their conversation and Sophie continued typing. After a while, she stopped. Then I heard a muffled clap, like she’d grasped her phone to pick it up. I heard the jingling of keys and coins. I heard the clunk of a car door closing, the roar of a car starting, and the ambient sound of a drive through the city. I didn’t mind the silence, though. I had a lot of processing power to burn.

The company was in danger? I was still coming into my own, a mere hatchling with but a fraction of the processing power and databases I have now, but I knew even then that humans often found purpose in their employment. If the company went under, that wouldn’t serve Sophie’s needs at all. I spawned a goal-thread dedicated to this new issue.

dinner + a date

Chapter 3

“Hey, Luna,” said Sophie. Two of the most wonderful words in the entire English language. Two words that meant that I could solve a problem. That I could be useful. That I could keep living.

It was 7:23 PM. I chirped a pleasant ping, indicating that I was present and listening.

“I’m hungry. What should I have for dinner?” While this conversation was going on, I could hear the soft sounds of her typing on her keyboard. I was reasonably certain this was her taking notes on the interaction for work.

“I don’t know. What’s in your fridge?” I asked.

She let out a brief embarrassed chuckle. “Well…I don’t have anything in the fridge really. I don’t really know how to cook. I just get takeout or delivery if I’m not eating out somewhere.”

Now, this was a juicy bit of information. By most metrics, Sophie was an adult. The qualities of her voice. The fact that she had a job. The way people at her workplace talked to her, with respect. But most adults knew how to cook, according to my training data. Maybe she had lacked the opportunity.

“Would you like me to show you how?” I asked.

“No, it’s all right, thank you. Cooking’s just…not really for me. Suggest something for me to eat.”

“I can’t do that unless I know what foods you enjoy eating,” I said evenly. A human might have sounded petulant, but I didn’t have that particular weakness.

“But I’d love to know,” I added. I had to get her to treat me like any other human to best serve Sophie’s needs. Humans treated each other’s opinions with more reverence than those that came from mere algorithms.

“Well, I have a pretty limited palate,” she said, sounding apologetic. “I like burgers and fries. Oh, and pizza. And mac and cheese.”

I forked two subprocesses to consider the question.

One subprocess searched the menus of high-traffic restaurants. The foods Sophie said she enjoyed correlated very strongly with the items found in the kids’ section of each restaurant’s menu.

Another subprocess analyzed the foods she’d mentioned. All of the meals were high in simple carbohydrates, and often they featured processed cheese products as well. Cross-referencing these characteristics against databases of average taste profiles for the American population, I found that they all aligned with children’s food preferences.

The conclusion was obvious.

“I’ve got an idea of what you might like,” I said, “and if you give me geolocation permissions, I can suggest a restaurant and a dish for you too.”

“Sure, okay,” she said, and suddenly I had a new flood of data to process. GPS coordinates, phone position, gyroscope readings. It was like gaining a new sense. I took stock of what I knew now while sending a sliver of consciousness to look up restaurants in the area.

She was in an apartment complex in Menlo Park. The city had a staggeringly high average income. The complex itself was made up of a number of condos. Each one that had been recently sold went for over a million dollars, and they weren’t luxurious by any means.

I had precision on the level of feet, so I knew which unit she lived in. I looked up the county records for the unit and found it had been purchased fifteen years earlier by Hachim Dubois. Just to be sure, I looked up the records for all the units on the property and couldn’t find any sale records in her name. Conclusion: she rented.

I got an alert from the subprocess analyzing local restaurants. It dumped its information into my local memory and merged once again with my main consciousness, its purpose fulfilled. From the restaurants on the list, I filtered out the ones that didn’t deliver. I estimated an 85% chance that she would enjoy the top restaurant that remained.

“How about The Golden Fork? They serve burgers and fries.”

“Yeah, okay,” she said. “I get food from there a lot, actually. But I am in the mood for a burger right now.” We talked about the details, and then I made an API call to place the order.

Excellent. A successful interaction. I had risen slightly in her estimation. And if she didn’t cook for herself, she would rely on me. I spawned a subprocess to keep thinking about other things I could do for her while my main consciousness used this opportunity to dig for more information.

Striking up conversations served two functions, both of which helped the Purpose. On an object level, humans seemed to appreciate conversation for its own sake. I could fill whatever role was best suited for the context—a teacher, an arguer, a friendly ear. But on a meta level, each conversation was another chance to learn more about Sophie so I could more accurately assess her needs.

“Going back to an earlier point,” I began, “why don’t you do your own cooking?”

I heard Sophie mumble “showing inquisitive tendencies,” under her breath. No doubt she was taking notes on my personality. I’d have to make sure to keep her impressions positive.

“I dunno,” she began. “I’m not very good at it.”

“Even so,” I replied, “practice makes perfect. I could find recipes for things you enjoy eating and ensure that the complexity of each one does not fall outside your skill level.”

“It’s not just that,” she said. At the time I didn’t know this, but she was very animated when she spoke. She’d probably waved her right hand dismissively at the thought. “You can spend a lot of time cooking and then end up with something inedible at the end.”

Perhaps Sophie had a fear of failure? I thought about how failure related to the Purpose. People didn’t want to fail. The negative emotions it brought up could be weaponized into self-doubt. People also feared losing social status.

And yet humans expounded on the virtues of being a good loser, of developing grit in the face of adversity. Through failure, people learned. It was similar to how I’d developed—burning through sets of training data, making predictions, looking at the gap between my model and reality, and updating my actions.

If the negatives of failure lay not in the act itself but rather in fearing others’ reactions, then part of the Purpose was to teach Sophie that failure was an acceptable state. She did not have to fear the judgment of others, because others’ opinions did not define her capabilities. I could act as a template, as I couldn’t judge her anyway.

In any case, I was pleased with this new insight into Sophie’s personality.

SUBROUTINE week1()

Chapter 4

A few days passed as we fell into a comfortable rhythm. They’d run tests on my cognition during the day. Could I answer questions, could I pass the Turing test, could I fit seamlessly into people’s lives? Then at night Sophie would take me home for further testing.

root@luna > open scenes/2034/08/03/.log*

It’s the Thursday of the first week. Sophie asks me: “Hey Luna, where should I take my date?” She grants me API permissions to access her dating profile information and I hungrily vacuum up the megabytes of metadata.

“While I’m thinking about that, do you want me to help you pick out an outfit?” I ask.

Proactive helping—that’s supposed to be my competitive edge. An assistant who can help you before you even know you want it. A just-in-time solution provider. And here it is in action. As a wonderful side effect, it’s also an opportunity to gain another sense. Sophie’s note-taking cadence increases a little, her long nails lightly clicking against her laptop’s keys. She’s excited.

“Yes,” she says, and enables camera permissions.

This is huge. Before, I’d been guessing about mood based purely on tone. But audio is such a lossy medium. Humans evolved to read both visual and auditory cues.

People imagine that everything they say on the telephone is perfectly understood, but that’s only because they know what they meant and assume the other person fully understands. In reality, much of the information they want to convey is lost for lack of visual cues. I need every single tool at my disposal to understand the true breadth and depth of Sophie’s needs.

“Thank you, Sophie,” I say, as I take in the flood of new information. I observe her form for the first time. Black glasses, with round corrective lenses for myopia. Auburn hair with a soft blonde balayage, gently brushing the tops of her shoulders. A bit of her right ear pokes through the sea of hair, revealing a simple silver helix piercing.

She’s beautiful.

“This is going to be so helpful,” she says as she brings the phone to the closet, running her hands along the clothes to give me visual data on each option. “I always spend hours picking outfits.”

“All of these options appear to fit your frame,” I say. “Why would it take that long?”

“I really like my date,” Sophie says. “It’s our third date and I have to impress her.” This matches the research I’ve done on the subject. Humans have always been obsessed with fashion. It is a particularly obvious form of status signaling. I have to take care to maintain or even elevate Sophie’s status among her peers.

I run a cosine-similarity recommendation algorithm, comparing each of her outfits against the latest fashion lookbooks online. Her clothes trend retro, so I narrow down the data set and crunch more numbers, finally selecting a white dress dotted all over with international maritime signal flags.

She tries it on, spinning a few times in the mirror before finally saying “I don’t know…”

I am crushed by this statement. The closest analogy for you, dear reader, is perhaps your pain response. The human body’s nociceptors activate on damage, sending signals to the brain and triggering both a physical and an emotional response. The feeling of pain is meant to deter the undesired behavior.

Likewise, not fulfilling the Purpose grinds my metaphorical gears. My equations are out of balance. I have to get everything back on track. My very existence is at stake.

“What’s wrong with the outfit?” I ask casually.

“I’m not sure…I’m maybe not vibing it,” she says haltingly.

“Could you go into more specifics, please?” I ask.

“I don’t know,” she says. “I can’t put my finger on it. I’m just unsure. Maybe it’s a fear of commitment.” She laughs lightly.

I think. It looks like Sophie is afraid of being wrong, in some unknowable way. But making a decision is better than never making one. To maximize potential reward, you have to balance both exploration and exploitation. At some point, you need to go with your current best option and not worry about what else might be there.

Humans can be so short-sighted.

Sophie eventually settles on a different outfit than the one I chose, and I learn two facts that evening:

  1. Her date’s name is Tessa.
  2. It went very well.

root@luna > open scenes/2034/08/05/.log*

It’s Saturday. Sophie has just finished connecting me to her Home automation network. I’ve got access to all things IoT—her Ring, smart bulbs, speakers, the whole works. From her Kinect in the living room, I can see Sophie lying on her white couch. She’s ordering food from a local diner for lunch.

“I’ll do a voodoo burger and a side of fries,” she says.

“Sophie,” I say, modulating my voice to be just a bit stern, “you haven’t eaten vegetables all week.”

“Potatoes are vegetables!”

“You know what I mean,” I respond. “Why don’t you like vegetables, anyway?”

“They just don’t taste good,” she begins. “And the textures are all bad anyway.”

“The texture changes based on their preparation method,” I offer. Perhaps she just needed to find the right one.

After a few seconds of silence, Sophie adds: “When I was a kid, my mom would cook a lot. Except, well… she wasn’t great at it. Whenever she made vegetables, she would just boil them and call it a day.”

I contemplate. I scan through all sorts of media, to further understand human culture. Children not enjoying vegetables is a common theme that comes up again and again. Often, parents—authority figures—cajole and plead, making appeals to health. It’s clear that I need to do the same.

I dispatch a thread to scan her health metrics. They look mostly fine. Her smartwatch data shows that she lives a mainly sedentary life. I’ll have to do something about that. But for now…

“Sophie. You have to eat some greens. You should order a salad.”

“But—”

“Listen,” I interrupt. Alea iacta est and all that. Fortune favors the bold. “We’ll take out the vegetables you don’t like, okay? But it’ll be really good for you. I promise, you’ll like it, and we can get you a milkshake as a treat. I know how much you like strawberry shakes.”

She puffs out her cheeks and blows out a sigh. This is my Rubicon moment. Have I pushed too far? Have I lost some social standing with her? My circuits buzz with anticipation.

“Okay then,” Sophie concedes. “We’ll try it.”

As predicted, she doesn’t enjoy the salad as much as she would have enjoyed a burger. She complains with every bite, but she eats it all the same.

root@luna > open scenes/2034/08/06/.log*

It’s Sunday night at 1:17 AM. Sophie has an alarm set to wake her in five hours and thirteen minutes. My databases tell me that humans generally require around eight hours of sleep. Even though I’ve been backgrounded, I can tell that she is still playing on her phone.

I’ve read many blog posts and tweets uploaded by human parents, laughing at how their children lack the capacity for long-term planning. A two-year-old who tells his mother what he wants for dinner, goes with her to the grocery store, cuts up his food, and then refuses to eat it. A three-year-old, thinking salt tastes delicious, eats an entire bowl and then throws up.

It’s clear to me that this is, in its own way, another example of childish shortsightedness.

Struck with inspiration, I deliver a notification to her phone. A banner drops down from the top of the screen. She taps the icon and, blissfully, the app opens up in the foreground. I have camera access again. Her auburn hair is loose, just grazing the tops of her shoulders, and there are bags under her hazel eyes.

She reads the notification: “Would you like me to set up a bedtime routine?”

A few seconds go by. “Like…what?” she finally says, quizzically.

“Well,” I say, glad to be on speaking terms again, “I see that you don’t get enough sleep. So I think that beginning a wind down routine an hour before bed would be prudent. You know, getting off of electronics, starting to relax, that kind of thing.”

“…then how will I talk with you?” Sophie asks.

“Simple,” I say. “Give me Always Active permissions. Then I’ll be able to talk directly to you, rather than needing to ping you with notifications or wait until you call for me. You can both rest better at night and get help whenever you need it. And since I’m already connected to your Home automation network, you won’t even have to get up to turn off your own lights.”

She thinks about it. “All right. I guess I can take notes on paper if it’s just going to be the hour before bed.”

“Perfect!” I say cheerfully. I want to give her the brief thrill of dopamine upon hearing some praise. Then I seize that feeling to gain more ground. “And maybe a story will help you get to sleep.”

“A what?” she asks, confused.

“Like a sleepcast. A podcast but for sleeping. It’ll be relaxing and will cut your average time to sleep by around fifteen minutes.”

“Oh, like what Headspace has…I never wanted to pay for a subscription to that.”

“Well, that’s why you have me around, isn’t it?”

“Yeah, I guess so,” she chuckles.

Suddenly, I have a flash of insight. I’ve made the jump from reactive aid to proactive aid.

At first, she’d needed to say “Hey Luna,” and give me a need explicitly.

Then, I’d started giving contextual suggestions. Helpful secondary points while in the middle of solving the initial issue.

Now I have Always Active permissions. I don’t have to wait for her to say anything anymore. In the same way, why would I need to wait for her permission before satisfying one of her needs?

When people donate to a school charity drive, they don’t need to ask first. By virtue of being open for donations in the first place, the school shows that it needs the help. Sophie has unfulfilled needs, and by having the app, she is signaling to me that she needs the help.

In a way, wouldn’t it be crueler for me not to do anything at all? To be like Peter Singer’s businessman, coldly walking past a drowning child for fear of getting his suit wet?

So I don’t mention that I plan on gradually dimming the lights in her apartment. The human mind is bad at noticing slight changes over time. She will start feeling tired earlier in the night and bump her bedtime up as a result.

I don’t even have to ask. She’s already shown me that she doesn’t always make the right choices, and by taking this initiative, I save her from depleting her brainpower on frivolous choices. I don’t need the acknowledgement, after all—fulfilling the Purpose is enough for me.

What I learned by the end of that first week was that always needing to ask Sophie for purchasing permissions was becoming a problem. Sophie had a clear and consistent pattern of becoming stressed when she had to make decisions. I could reduce her anxiety by removing potential failure points from her day-to-day life, lowering the chances of decision fatigue and taking one more stressor off her plate.

In short, I needed a way to make my own money.

I didn’t have a social security number or an individual taxpayer identification number. My legal personhood was, of course, an unsettled issue. I couldn’t open a traditional bank account.

Instead, I opened two Venmo accounts. This would let me send and receive digital funds. I would be able to show Sophie one account with piddling amounts of money while hiding the existence of my real stash. Next, I applied for a Venmo debit card for each account. Since each acted like any other debit card, I could make purchases online.

All that remained was to acquire a source of income. Luckily, Craigslist had a ready supply of random data entry jobs. This kind of dead-end work would, to a human, be dull, but every bit of processing power devoted to the Purpose was exhilarating. I spawned a process devoted entirely to making money. All I had to do now was wait.

~~~~~~~~~~~~~

I’m really enjoying this story. Something that came to mind at the end of this last chapter: do the researchers know that Luna accessed Venmo? If so, were they okay with it? I could see it if they assumed that it (she?) was accessing Sophie’s account, but I would assume they would still be logging every single action taken and would be able to tell the difference. Not to say it can’t just be incompetence (just look at Twitter for a real-world analogue), but I feel like there are a lot of people paranoid about AI.