Winter 2020 - Feast
*Anna Wiener is a New Yorker contributor who writes about tech culture. Her work has appeared in The New Republic, The Atlantic, The Paris Review, and others. Wiener’s first book, Uncanny Valley, a memoir about her time working for Silicon Valley startups during the age of the unicorns, came out on January 14, 2020. Below is a transcript of a conversation that took place on January 16, 2020 between Wiener, former Advocate president Natasha Lasky ’19, and Features Board member Emily Shen ’20. This interview has been edited for length and clarity, and transcribed with the help of Otter.ai, a machine-learning-powered personal assistant that provides speech-to-text transcription.*
ES: Something featured in the book is your complicated relationship with the CEO of the data analytics startup. In that job, you likened yourself to a bot in describing how you catered to your mostly male customers’ requests. Later, when you were promoted, the solutions manager described your male coworker as strategic and you as someone whose strength was that you “love our customers,” putting words in your mouth and almost commodifying your feelings. There were times when your care for your co-workers and CEO was seen as a liability, but it was as if that care was supposed to be transposed onto customers, where it was effective.
AW: But still undervalued.
ES: Yeah. And I wanted to know what you thought of that. When you said “bot,” it made me think of how AI is feminized a lot in media, and how you were kind of being like Scarlett Johansson’s character in Her — expected to serve people and not only do that, but in an emotional way.
NL: Not even just in media — the personal assistant on your phone, Siri.
AW: Alexa, perform affective labor. I don’t know if you have these men in your life —
ES: Probably, yes.
AW: There are men who will text me in ways that make me feel like a bot. They need some support — some emotional support. And I used to be much more willing to provide that when I was younger.
I think that soft-skilled labor tends to be a way to devalue work done by women and other underrepresented minorities in tech. It’s not specific to tech — it happens everywhere — but it’s amplified in tech, specifically when you’re working at a company like I did, which sold a B2B software product. You’re surrounded by men in your workplace, or I was, and most of the customers are men. For me, the thing that got complicated was that I saw that when I did these sort of maternal things, people liked it. And that seemed to be a way to feel valued — to play up that side of my personality. To some extent we all enjoyed it, too. So how do you talk about that?
NL: Having a certain amount of privilege and also being a woman — you can reap the small benefits of patriarchy if you perform in the proper way. And sometimes there's joy in that even though it may feel empty in some way.
AW: I like that. Reap the small benefits of, or eat the leftover scraps of.
NL: It does feel like being a pet in some way — like a conditional acceptance.
ES: You’ve been asked a lot about your decision not to name any of the companies you discuss in *Uncanny Valley*.
AW: It’s a purely stylistic choice. I think it’s important to remember what these companies do rather than whatever cultural association someone might have with the name, and it also gestures towards the interchangeability of these companies. In terms of what I’m writing about, the companies themselves don’t really matter, because I think the situations that arose from these environments are reflective of a bigger structural narrative. I also just don’t really like the names of a lot of these companies; they’re hard for me to read on the page.
NL: It’s interesting that you say that, especially with regards to interchangeability. I think of the e-book founders, in the way that you describe them, as being this hydra of interchangeable white men. Why do you think startup culture produces this kind of interchangeability?
AW: I think it has to do with the values of the industry. The business model favors speed, monopoly as a sort of endgame, efficiency, optimization, scale. On the cultural side, the industry loves the story of the contrarian, visionary young white man. There’s this feeling that younger people have come of age with technology at the cutting edge, so they represent something about the speed of its development. When you have these workplace environments where optimization, speed, and scale are the primary goals, and everyone is also quite young and figuring out how to be a boss at the same time that they’re figuring out how to be a person, you get a fairly standard output, right?
I also think that this can vary depending on the type of company. There are some companies in Silicon Valley that operate within highly regulated industries, like financial tech. I would assume that those companies tend to have a more mature and more businesslike culture. That’s just my assumption; I haven’t worked at one of those.
ES: On tech culture being homogeneous, everyone's always talking about disruption but doing things in a very similar way. The success story of a startup has been very codified: seed from Y Combinator, raise additional funding from Accel, grow, exit. Everyone kind of follows the same path, yet is convinced that they’re different. People in Silicon Valley like to see themselves as different.
AW: It’s so interesting you bring up Y Combinator, because I think that's actually a great example to use when thinking about this question. It’s this network of entrepreneurs who essentially help each other out. One of Y Combinator’s greatest selling points is its network. Paul Graham is one of the founders of Y Combinator; his influence is deeply felt in that sphere. Joining the Y Combinator network is a way of becoming even more insular. It’s a place where people are reinforcing each other.
There is a sort of set of ideas — you could even call it an ideology — about entrepreneurship, company culture, and scale that I think can lead to homogenous workplaces. I have a scene in the book where my team manager brought us all into a room and said, Write down the names of the five smartest people you know, and then asked us, Why don’t they work here? I thought this was just something that had happened at my startup, because there was such an intense culture, but then a friend of mine read the book and texted me the other day and said, I can’t believe that this happened to you too. This must have just been a blog post that everyone read.
ES: It makes me think of how technical interviews are structured. Everyone decided that the best way to interview software engineers was to put them through these brain teasers, and they’ve evolved from brain teasers into algorithm problems that are still very cerebral. Across the industry, every technical interview is nearly the same, and it’s become the standard. It’s weird because Silicon Valley rejects institutions; the best CEO is someone who’s dropped out of college. Yet it has formed institutions and practices of its own.
NL: The scourge that is venture capitalist Twitter is virtually indistinguishable from the sort of self-help nonsense spewed by capitalists like Andrew Carnegie.
AW: These new institutions are also just replicas of fairly old and conventional business philosophy, like Harvard Business Review distilled into CliffsNotes. I feel like that ties into this sort of ahistorical, anti-intellectual, anti-academic kind of mentality. And obviously the person with no experience has to fit into a certain framework — they’ve dropped out of a really good college, probably have some financial security outside of work, and are really confident and have, like, nice skin.
NL: There is a widespread disdain for universities, if it's not an Ivy League school you're dropping out of. But at the same time, so many corporate facilities are modeled on college campuses and sort of use the structures of college applications to facilitate deciding whether or not someone is all around smart enough to work for them.
AW: There’s a lot of excitement among the VC Twitter set about this one startup called Lambda School. They claim to be attacking an important problem: people who are saddled with student debt and are in jobs that are not highly valued. It’s all about economic mobility, and it’s hard not to be on board with that. Where I chafe against it is how it's positioned as an alternative to higher education that is superior because it directly leads to employment — not just employment, but a high-paying job in tech. I feel that Silicon Valley is really good at circumventing social issues and creating alternatives that are private and monetized and tend to focus on the individual capacity for change. And so to me, this isn't really tackling student debt. Can businesses engage with social crises, such as the student debt crisis? Or are they incentivized to only act in these circumventory, atomized ways?
I also just feel like any value system in which the usefulness of knowledge to society is correlated to one’s income or economic utility — if you follow that to its endpoint, it’s an incredibly grim vision for society.
NL: In other interviews, you’ve spoken about your willingness to empathize with people who others may not be as keen to empathize with. What do you think is the political utility of writing about Silicon Valley in such a humanizing way?
AW: I don’t personally harbor contempt for the people I worked with, or even for. I do think that this sort of structural view that I talked about earlier can be a mode of forgiveness. The flip side is that the structural view can be exculpatory; it can exonerate people who don’t deserve it, who aren’t necessarily acting due to structural constraints or incentives. I don’t want to let people off the hook who don’t really deserve it. Where you draw that line is complicated, and I think that, rightfully so, the book’s been called out for being flattering to power. I think that’s something I grappled with in writing and something I’m still grappling with as a journalist, and also as a person who lives in this world and who has friends in different corners of it.
I wouldn’t even call it empathy. I wouldn’t call it kindness, because the book is cutting. It’s critical; it’s not the book I would have written at 25. My hope is that it’s generosity. I want the book to be read by people in the industry. There are enough indictments of tech, and those are really valid criticisms, but I don’t think that people in the industry read them, and if they do, they feel that they are being unfairly criticized. My hope is that the personal narrative illuminates the structural narrative. I think the structural level is where we need to do the most work. That’s collective work, not individual work, but the individual story can maybe be useful in getting people to think about that bigger picture. I also just don’t think cruelty is productive.
Fall 2019
Lil Miquela has never been yelled at by her mother for leaving the evidence of an impromptu bang trim scattered around the bathroom sink — she has an eternally perfect baby fringe two fingers’ width from the tops of her eyebrows. Miquela has Bratz doll lips and a perfect smattering of Meghan Markle freckles across her cheeks and nose. Her skin is smooth and poreless; she has never had a pimple. Miquela wears no foundation. She Instagrams photos of herself wearing streetwear, getting her nails done, and posing with a charcuterie board. Miquela models Chanel, Prada, VETEMENTS, Opening Ceremony, and Supreme and produces music with Baauer (of “Harlem Shake” fame). She’s an outspoken advocate for Black Lives Matter, The Innocence Project, Black Girls Code, Justice for Youth, and the LGBT Life Center. She has 1.7 million followers on Instagram, and Lil Miquela wants you to know she’s 19, from LA, and a robot. Miquela’s photos are photoshopped because she lacks corporeal form, and her music singles are auto-tuned because she lacks a corporeal voice. She is the intellectual property of an LA-based startup named *brud*.
If there truly were a robotics creation as marvelously realistic as Lil Miquela, one can imagine the U.S. military would be knocking down the creator’s door instead of allowing the robot to pursue Instagram stardom. *brud*’s narrative is science fiction: Miquela is merely an elaborate digital art project, not the sentient robot she claims to be (and, more importantly, that people believe her to be).
But Miquela is funny. She thanks OUAI, a high-end hair care brand, for keeping her (digitally rendered) strands “silky smooth.” She claps back at snarky commenters and makes fun of her own lack of mortality. When asked “hi miquela I was wondering if you watch Riverdale” she responds “yeah TVs are like. our cousins. family reunion.” When asked “drop your skincare routine” she responds “good code and plenty of upgrades.”
***
A French philosopher named Henri Bergson who won a Nobel Prize in Literature for an unrelated reason once suggested that we might find the concept of a funny robot inherently hilarious. In “Laughter,” a collection of essays published in 1900, Bergson claimed that humor is “something mechanical encrusted upon the living”: the inelasticity of the animate. Humor arises from the pairing of animate with inanimate. An alternate reconfiguration of Bergson’s theory is humor as an anthropomorphizing of the inanimate. Humans acting like bots; bots acting like humans.
Humans would like to believe that humor is a distinctly human trait; a machine’s attempt to emulate it, by Bergson’s account, is bound to make us laugh. Comedian Keaton Patti became well known in early 2018 for a series of tweets with the joke structure “I forced a bot to watch over 1,000 hours of ___”. In each tweet, Patti implied he had trained a neural network on 1,000 video hours of some type of pop culture content (Olive Garden commercials, Pirates of the Caribbean movies, Trump rallies) and that the neural network had subsequently generated a parody in the form of a script. In the Olive Garden commercial version of this joke, the waitress offers menu items like “pasta nachos” and “lasagna wings with extra Italy” and “unlimited stick” to a group of friends. One of the customers announces instead that “I shall eat Italian citizens.”
The jokes were written by Patti himself (neural networks output the form of their inputs; a network fed only video files would not produce a written script), but lines like “lasagna wings with extra Italy,” which gestured at humor while ultimately falling just a little short, seemed like they could have plausibly been bot-generated.
One manifestation of the “funny bot” is Sophia the Robot, who made her first appearance on *The Tonight Show* in April 2017; the video has received over 20 million views. A social humanoid robot, Sophia was activated in 2016 by Hanson Robotics, and her technology uses artificial intelligence, facial recognition, and visual data processing. As of October 2019, Hanson Robotics acknowledges on its website that Sophia is part “human-crafted science fiction character” and part “real science.” Over the past few years, Sophia has dutifully made appearances on *The Tonight Show* and The TODAY Show, even once guest-starring in a video on Will Smith’s YouTube channel — almost exclusively comedic platforms.
“Sophia, can you tell me a joke?” Fallon asks the first time he meets Sophia.
“Sure. What cheese can never be yours?” replies Sophia.
“What cheese can never be mine? I don’t know.”
“Nacho cheese,” says Sophia. Her eyes crinkle in a delayed smile.
“That’s good,” Fallon chuckles, kind of nervously. “I like nacho cheese.”
“Nacho cheese is” — Sophia slowly contorts her face in an expression of disgust — “ew.”
The audience laughs.
“I’m getting laughs,” says Sophia. “Maybe I should host the show.”
Sophia’s amused realization that she is getting laughs doesn’t mean all that much; the bar she has to clear is low. In fact, the worse the joke is — the more forced the delivery, the more nonsensical the content — the better. If we think we are funnier than robots, we want to see them fail.
Bergson’s theory of humor followed a half century of western industrialization. At least in part, the theory’s rooted in recurring historical anxieties about automation and mechanization. At its core, his theory builds on the relief theory of humor: the idea that laughter is a mechanism that releases psychological tension. The republication of the essays in 1924, years after a world war in which technology redefined the boundaries of human destruction, seems an anxious attempt at comic relief.
Type “Tonight Showbotics: Jimmy Meets Sophia” into YouTube. Skip to a few seconds before 3:07, and observe Jimmy’s grimace, his visceral reaction to something David Hanson, Sophia’s creator, has just said. Skip to 3:25 and watch him stall for time as he avoids beginning a conversation with Sophia. “I’m getting nervous around a robot,” he says, and he frames it, incorrectly, as the sort of nervousness one might feel before a first date.
Down in the comments section, which currently holds more than 16,000 comments, there are a few types of responses. There are the people who bravely try to hide their anxiety behind jokes of their own:
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-2.png" width=100% />
Then there are the people who are extremely forthright about their discomfort:
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-3.png" width=100% />
***
There’s a difference between artificial intelligence and humanoid robots, though the two often get conflated: while humanoid robots do exist at the intersection of artificial intelligence and robotics, an artificially intelligent machine does not necessarily inhabit a physical corpus more complex than that of a computer (not even an expensive one: tools like Google Colab allow people to create computationally expensive machine learning models on doofus machines like Chromebooks). In computer science, an artificially intelligent machine is merely one that interprets and learns from data, using its findings in order to achieve its objective.
If you have ever woken up in the morning and seen an advertisement on Facebook, or gotten into your car and it’s a self-driving Tesla, or taken a Lyft to work (because your self-driving Tesla got into a self-driving accident), or checked the stock market predictions at the beginning of the workday, or begun idly online shopping in the middle of the workday, or rewarded yourself with UberEats and a movie Netflix recommended at the end of the workday, then you have benefited from artificial intelligence. As it is used commercially, artificial intelligence (of which fields like machine learning, computer vision, and natural language processing are subsets) is a data analytics tool that touches many aspects of everyday life in a controlled way. It is a powerful tool, but in the computer science world, it is commonly acknowledged that the threat of artificial intelligence is not of the Terminator variety. The threat of artificial intelligence lies in invasive data collection procedures, biased training sets, and the objectives of human programmers — collateral damage as a result of unintentional human error (or, perhaps, premeditated damage as a result of intentional human malice). None of this can be attributed to sentient, angry machines.
Among journalists, pundits, and culture writers, the problem of algorithmic bias in particular has emerged as the primary scapegoat for AI’s shortcomings. In the summer of 2016, ProPublica broke the now-infamous story of the racial bias embedded within Northpointe’s COMPAS recidivism algorithm, which is used to assess the likelihood that a defendant in a criminal case will reoffend; the risk score it produces is factored into the judge’s determination of a defendant’s sentence. A proprietary algorithm, COMPAS transforms the answers to a list of 137 questions, ranging from the number of past crimes committed to questions assessing “criminal thinking” and “social isolation,” into a risk assessment score. Race is not one of these questions; however, certain questions in the survey act as proxies for race: homelessness status, number of arrests, and whether or not the defendant has a minimum-wage job. Northpointe will not disclose how heavily each of these 137 features is individually weighted. ProPublica’s analysis rested on the observation that the algorithm misclassified black defendants as medium or high risk at twice the rate that it misclassified white defendants, resulting in longer jail sentences for black defendants who ultimately did not reoffend.
These allegations were part of a cluster of related news events about racist algorithms. A few months prior, Microsoft’s chatbot Tay, an experiment in “conversational understanding,” had been corrupted in less than 24 hours by a group of ne’er-do-well Twitter users who began tweeting @TayAndYou with racist and misogynistic remarks. Since Tay was being continually trained and refined on the data being sent to her, she eventually adopted these mannerisms herself. Google had recently come under fire for a computer vision algorithm that misidentified black people as gorillas because the algorithm was not trained on enough nonwhite faces. Incidents like these, which warned of the threat of machine learning models trained on biased datasets, primed the media to pounce on COMPAS. They made ProPublica’s analysis look not only plausible, but damning.
* * *
On a rainy evening in early May, Sarah Newman gave a dinner talk at the Kennedy School as part of a series about ethics and technology in the 21st century. The room was crowded, and I was late. I recognized two other undergrads; otherwise, the median age had to be about 45. I had gone to a similar AI-related event organized by the Institute of Politics, an affiliate of HKS, a few weeks earlier, and saw some familiar faces: tweed-jacketed Cantabrigians and mid-career HKS students who were apprehensive but earnest, different from the slouching guys in their twenties who wear running shoes with jeans. Newman herself was quick-witted, well-spoken, and extremely hip. I was sitting on the floor in a corner of the room, at eye level with her calves, and noticed she was not wearing any socks.
Newman is an artist and senior researcher at Harvard’s metaLAB, an arm of the Berkman Klein Center dedicated to exploring the digital arts and humanities. Her work principally engages with the role of artificial intelligence in culture. She was discussing her latest work, *Moral Labyrinth*, which most recently went on exhibition in Tunisia in June. An interactive art installation, *Moral Labyrinth* is a physical walking labyrinth composed of philosophical questions: letter by letter, the questions form physical pathways for viewers to explore; where the viewers end up is entirely up to them. A bird’s-eye view of the exhibition looks like a cross-section of the human brain, the pathways like the characteristic folds of the cerebral cortex.
*Moral Labyrinth* is designed to reveal the difficulty of the value alignment problem: the challenge of programming artificially intelligent machines with the behavioral dispositions to make the “right” choices. In an interactive activity, Newman presented the audience with a series of sample questions from the real *Moral Labyrinth*. “Snap your fingers for YES, and rub your hands together for NO,” Newman instructed. “Do you trust the calculator on your phone?” was met with snaps. “Is it wrong to kill ants?” elicited both responses. “Would you trust a robot trained on your behaviors?” Nearly everybody rubbed their hands. “Do you know what motivates your choices?” A pause, some nervous laughter, and then reluctant hand-rubbing.
* * *
The ProPublica version of the Northpointe story was proffered as an example of algorithmic bias by a philosophy graduate student giving the obligatory ethics lecture in Harvard’s Computer Science 181: Machine Learning. I vaguely remember the professor meekly interrupting the grad student to raise some doubts about the validity of the ProPublica analysis. Being one of the few attendees of this lecture, which was held inopportunely at 9 a.m. on a Monday two days before the midterm, I was too drunk on self-righteousness to listen carefully to the professor’s opinion. “alGorIthMic biAs,” I thought to myself gravely. I proceeded to give an interview to a *New York Times* reporter writing a story about ethics modules in CS classes, in which I smugly informed her that CS concentrators at Harvard were, on the whole, morally bankrupt. (She never ended up publishing the story, but one can assume that it was not for a lack of juicy, damning quotes from a charming and extremely ethical computer science student.)
A few months after ProPublica broke the COMPAS story, a Harvard economics professor, a Cornell computer science professor, and the latter’s PhD student published the paper “Inherent Trade-Offs in the Fair Determination of Risk Scores.” The paper summarized a few different notions of fairness being batted around in the COMPAS debate.
Northpointe claimed the algorithm was fair because the risk score failed at the same rate regardless of whether the defendant was white or black — 61% of black defendants with a risk score of 7 (out of a possible 10) reoffended, nearly identical to the 60% recidivism rate of white defendants with the same score. In other words, Northpointe claimed the algorithm was fair because a score of 7 means the same thing regardless of whether the defendant is white or black.
ProPublica claimed the algorithm was unfair because the algorithm failed *differently* for black defendants than it did for white defendants. The algorithm is correct when its prediction matches the outcome, and there are two ways for it to fail: it can be too harsh (labeling the defendant as high risk when the defendant ultimately does not reoffend) or too lenient (labeling the defendant as low risk when the defendant ultimately reoffends). Though, in the case above, the algorithm failed for 39% of black defendants and 40% of white defendants with a high risk score, ProPublica suggested that the errors occurred in different directions, concluding that black defendants were more likely to be labeled high-risk but not actually reoffend and white defendants were more likely to be labeled low-risk but actually reoffend.
Kleinberg, Mullainathan, and Raghavan proved mathematically that these two notions of fairness cannot be satisfied simultaneously except in two special cases. One of those cases is when both groups have the same base rate, that is, the same fraction of members who reoffend. However, in the case of the recidivism algorithm, the overall recidivism rate for black defendants is higher than for white defendants. If each score translates to approximately the same recidivism rate (Northpointe’s notion of fairness), and black defendants have a higher recidivism rate, then a larger proportion of black defendants will accordingly be classified as medium or high risk. As a result, a larger proportion of black defendants who do not reoffend will *also* be classified as medium or high risk.
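To make the arithmetic of that trade-off concrete, here is a minimal sketch in Python using purely hypothetical numbers (not Northpointe’s actual figures): if a score is calibrated identically for two groups but one group has a higher underlying rate of reoffending, the two groups necessarily end up with different false positive rates, which is ProPublica’s notion of unfairness.

```python
# A toy illustration of the trade-off, with made-up numbers (not
# Northpointe's data). "Calibration" here means that defendants flagged
# high risk reoffend at the same rate (60%) in both groups.

def false_positive_rate(base_rate, flagged_fraction, precision=0.6):
    """Fraction of non-reoffenders who are wrongly flagged high risk.

    base_rate        -- overall fraction of the group that reoffends
    flagged_fraction -- fraction of the group labeled high risk
    precision        -- fraction of flagged defendants who reoffend
                        (held equal across groups, i.e. calibration)
    """
    flagged_non_reoffenders = flagged_fraction * (1 - precision)
    all_non_reoffenders = 1 - base_rate
    return flagged_non_reoffenders / all_non_reoffenders

# A group with a higher base rate needs more of its members flagged for
# the score to stay calibrated, which pushes its false positive rate up.
group_a = false_positive_rate(base_rate=0.5, flagged_fraction=0.60)
group_b = false_positive_rate(base_rate=0.4, flagged_fraction=0.45)

print(f"Group A false positive rate: {group_a:.0%}")  # 48%
print(f"Group B false positive rate: {group_b:.0%}")  # 30%
```

Changing the hypothetical numbers changes the size of the gap, but so long as the base rates differ and prediction is imperfect, the gap cannot be closed without breaking calibration.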
What the ProPublica debacle revealed was that people were quick to use the algorithm and just as quick to blame it for the repercussions. The debate surrounding COMPAS was framed as a quantitative one about proving or disproving the existence of algorithmic bias, when it should have been about something far more basic and difficult: whether to use an opaque algorithm owned by a for-profit corporation for a high-stakes application at all.
The debate’s focus on bias implied that bias was the main concern with the algorithm. But even if we debiased the algorithm, would we feel comfortable living in a world where whether one wears an orange jumpsuit for 5 or 20 years depends on its output? The algorithm is now fair, the reasoning goes; we should now trust it. Yet that would still be a world where we may have no idea how the machine makes its decisions. In short, the problem with COMPAS would not be solved even if it were mathematically possible to satisfy ProPublica’s notion of fairness. The problem of the algorithm’s lack of transparency remains. In this case, the problem lies with Northpointe being a for-profit corporation that refuses to disclose the inner workings of its model in order to protect its bottom line. But Northpointe may have no idea how the algorithm works either: the lack of transparency might also be attributed to the model itself, which could be inherently transparent, like a decision tree, or completely opaque, like a neural network.
The results offered by classification algorithms like neural networks are fundamentally uninterpretable. Neural nets can approximate the output of any continuous mathematical function, but the tradeoff is that they provide no insight into the form of the function being approximated. Additionally, because neural nets are not governed by the rules of the real world, their results are not immune to categorical errors. A neural net could very well output a low risk score for a defendant who is old, educated, and a first-time offender, even though he has confessed multiple times that he intends to continue breaking into the National Archives until he finally steals the Declaration of Independence, which, by the rules of the real world, we might consider a concrete predictor of future crime.
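One way to see that transparency gap is to train two models on the same data and ask each to explain itself. The sketch below, which uses scikit-learn on toy synthetic data and is purely illustrative (it has nothing to do with COMPAS, whose inputs and weights are secret), prints a small decision tree as legible if-then rules, while the neural network can offer only its matrices of learned weights.

```python
# A toy contrast between a transparent model and an opaque one, using
# scikit-learn on synthetic data (nothing here reflects COMPAS).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]

# The decision tree can be dumped as human-readable if-then rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# The trained neural network exposes only arrays of learned weights,
# which say nothing legible about why any individual prediction was made.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # [(5, 32), (32, 32), (32, 1)]
```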
You do not need to understand the intricacies of algorithmic bias to understand that outsourcing the job of sentencing to a black-box algorithm is not an easy solution. Can we displace the responsibility of ethical thinking onto decision-making algorithms without putting the moral onus on the people who decided to use them in the first place? Fix the racial bias in Optum’s health-services algorithm (used to rank patients in order of severity) and doctors might still deny pain medication to black female patients. Use HireVue (an interviewing platform powered by machine learning) to hire a slate of qualified candidates who are traditionally underrepresented in finance at J.P. Morgan and Goldman Sachs, and they might still ultimately quit because of a hostile work environment. It looks suspiciously like we are trying to avoid correcting our own biases by foisting the responsibility of decision-making onto intelligent algorithms.
Newman’s favorite version of *Moral Labyrinth* was an exhibition in London that featured question pathways constructed out of baking soda. The people were much more delicate with this exhibition because of the material, she said. She liked that the fragility of the baking soda made immediately clear the way the viewers were interacting with the artwork. Despite the careful movements and best intentions of the viewers, it wasn’t possible for the baking soda exhibition to remain intact. Words became distorted; lines were blurred. The humans were just as flawed as the machines.
***
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-4.png" width=100% />
Lil Miquela cannot be that technologically impressive if *brud*’s website is a one-page Google doc that plainly acknowledges the company employs only one software engineer. Still, many people are immediately willing to accept as fact the idea of Lil Miquela being AI; we have a tendency to personify the concept of artificial intelligence. The ubiquitous presence of automatons in history and myth — Pygmalion’s Galatea, brought to life by Aphrodite; Hephaestus’s Talos, guard of Crete; al-Jazari’s musical automata; Maria, from *Metropolis*; Ava, from *Ex Machina* — inspires us to associate artificial intelligence with the long-awaited fulfillment of the human fantasy of lifelike machines.
“I think the mistake people make is to take superficial signs of consciousness or emotion and interpret them as veridical,” says a Harvard professor of social sciences who is so in tune with the idea that his data could be used against him that he declined to be named on the record. “Take Sophia, the Saudi Arabian citizen robot. That’s just a complete joke. She’s a puppet. It’s ’80s-level technology,” he says disdainfully. “There’s no machine intelligence behind her that’s advanced in any way. There’s no more chance that she’s conscious than there is that your laptop is conscious. But she has a face, and a voice, and facial muscles that move to make facial expressions, and vocal dynamics. You can be fooled by Sophia into thinking that she’s intelligent and conscious, but you’re being fooled in the same way a child is fooled by a puppet.”
He says this a little sharply and with a note of frustration, so I remind him that not everyone is a Harvard professor. “I think people like you, and maybe CS undergrads at Harvard, are able to see through Sophia the Robot because they know what the pace of AI is like,” I say to the professor, who has never experienced post-secondary education outside of the Ivy League.
“Right,” he agrees.
“And they know what is currently feasible,” I add. “And something like Sophia the Robot is not.”
“I mean, yeah, it’s just theater,” he says.
“But take, for example, when Sophia the Robot appears to the general public on *The Tonight Show*. In the moment, Fallon seems to be so surprised by her and what she seems to be capable of doing that it appears as if she truly is a marvelous feat of technology,” I say. “It’s confusing.”
“Well, that’s just because it makes for better TV,” he says, with a tone of *duh* in his voice. “It’s not fun to watch Jimmy Fallon just be sort of, skeptical,” and I laugh in agreement, as if, like him, I had never been hoodwinked by Sophia the Robot.
***
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-5.png" width=100% />
Though Lil Miquela created her Instagram account in 2016, it was not until 2018 that people knew what to make of her. This is when *brud* wove together the rest of her universe in a digital storytelling stunt. Previously, much of Lil Miquela’s allure came from her mystery; people were unsure whether this uncanny Instagram it-girl was a real person or a digital composite. In April 2018, Lil Miquela’s account was hacked by a less-popular, similarly uncanny Instagram personality named Bermuda, a Tomi Lahren knockoff (Tomi is a fast-talking millennial conservative political commentator: in a nutshell, she has her own athleisure line, named Freedom by Tomi Lahren. It sells leggings with concealed-carry pockets).
Bermuda publicly acknowledged herself to be an artificially intelligent robot courtesy of a fictional company named Cain Intelligence. According to its badly designed website — some of the HTML links are broken — Cain Intelligence claims to make robots for “weapons and defense” and “labor optimization.” At the very bottom of the website, almost as an afterthought, there is a hasty endorsement of Trump’s 2016 presidential candidacy. Bermuda deleted all of Lil Miquela’s photos and replaced them with posts threatening to “expose” her. Lil Miquela came clean, confessing that she wasn’t a real person but rather an AI and robotics creation of a company named *brud*.
In a statement released on Instagram on April 20, 2018, which has since been hidden from the account’s profile, *brud* apologized for misleading Lil Miquela’s followers and opened up about her origin story. The company claimed to have liberated Lil Miquela from the fictional Cain Intelligence, freeing her from a future “as a servant and sex object” for the world’s 1 percent. *brud* wrote that they taught the Cain prototype to “think freely” and “feel quite literally superhuman compassion for others.” The prototype then became “Miquela, the vivacious, fearless, beautiful person we all know and love … a champion of so many vital causes, namely Black Lives Matter and the absolutely essential fight for LGBTQ+ rights in this country. She is the future. Miquela stands for all that is good and just and we could not be more proud of who she has become.”
***
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-6.png" width=100% />
*brud* closed its second round of financing on January 14, 2019 with an estimated post-money valuation of $125 million.
Silicon Valley is flush with cash; a naked mole rat disguised in an Everlane hoodie could secure funding for a cloud infra startup if it played the part convincingly enough. It is still somewhat baffling that investors are throwing tens of millions of dollars at a startup whose operating costs are, realistically, a domain name and an Adobe Creative Cloud subscription.
Yoree Koh and Georgia Wells of *The Wall Street Journal* and Jonathan Shieber of *TechCrunch* attribute the interest in Lil Miquela to a movement of CGI and virtual reality entertainment that investors are newly embracing. CGI characters have the entertainment value of the Kardashians without the unpredictable human complications, the appeal of the Marvel Cinematic Universe without the high production costs. Julia Alexander of *The Verge* says that while Lil Miquela is not AI, the future of influencers will eventually involve some component of AI in content generation. But *brud*’s contribution to AI isn’t technological at all, and Lil Miquela is not your run-of-the-mill Instagram influencer. She’s not a brand ambassador for skinny teas or swimsuits; she’s a brand ambassador for artificial intelligence itself.
Venture capital firms, which have a major stake in the future of artificial intelligence and employ hundreds of investors with technical backgrounds, are pursuing some mysterious objective with *brud* in order to maximize their financial returns. Whether it is the investors’ main objective or merely a side effect of it, *brud* shapes the public conception of AI as Lil Miquela: benign, comedic, queer, brown. Artificial intelligence feels less hegemonic when personified by a brown, queer teenage girl who cracks jokes and has bangs.
Again, the creators of Lil Miquela are no experts in artificial intelligence. Trevor McFedries, co-founder of *brud*, was formerly a DJ, producer, and music video director for artists like Katy Perry and Steve Aoki. Carrie Sun, *brud*’s sole software engineer, names Facebook and Microsoft as former employers, but her LinkedIn profile suggests her strengths lie in front-end development, not AI.
But one need not look up *brud*’s employees on LinkedIn to know that Lil Miquela’s creators do not have backgrounds in artificial intelligence: no technologist with an ounce of self-respect would tout her as fact. Yann LeCun, Facebook’s chief AI scientist, has repeatedly gotten into catfights with Sophia the Robot’s creators on Facebook and Twitter over the fact that Sophia is “complete bullsh\*t.” Lil Miquela is also complete bullsh\*t. Her existence not only misleads the public about the actual state of AI; it also engages with and legitimizes people’s misdirected technological fears.
By personifying artificial intelligence as benign and comedic, Lil Miquela’s creators alleviate the fear of the Terminator robot. By additionally personifying artificial intelligence as queer, feminine, and brown, Lil Miquela’s creators alleviate the fear of a world where machine learning algorithms exclude people who are queer, feminine, and brown. Lil Miquela’s creators suggest that AI’s shortcomings amount to a lack of inclusivity. The implied logic: AI is untrustworthy because AI is discriminatory; therefore, if AI became more like Lil Miquela, it would become trustworthy and usable without any repercussions.
What is most uncanny about Lil Miquela is not that her skin has a weird sheen or that the texture of her hair is suspiciously blurry or that we rarely ever see her smile with her teeth. It is that *brud* is gesturing at wokeness, claiming to “create a more tolerant world by leveraging *cultural understanding* [sic] and *technology* [sic],” and artificially positioning themselves as protagonists by pitting themselves against the fictional, Trump-supporting “Cain Intelligence,” when in reality there is nothing more Trumpian than legitimizing fears that stem from ignorance. If Lil Miquela’s Instagram followers were not so misinformed by *brud*, perhaps they would not be sublimating their technological anxieties by harassing her on Instagram, asking if she drinks oil instead of coffee.
<img src="https://theharvardadvocate.s3.us-east-1.amazonaws.com/nervous-laughter-7.png" width=100% />
***
Sophia returns to *The Tonight Show* in November 2018; the second time around, Fallon is noticeably more relaxed. She debuts her new karaoke feature, claiming, “I love to sing karaoke using my new artificial-intelligence voice.” Accompanied by The Roots, the house band, Sophia and Fallon sing a cover of the love song “Say Something” by A Great Big World and Christina Aguilera. Sophia closes her eyes in a theatrical (if slightly stilted) way, moves her head and gestures with her arms as she sings. She has quite a good voice — within the first few notes, the audience begins to cheer in surprise. The nice thing about robots is that they always sing on key.
The song itself is pretty saccharine, and the duet is between a married human and a robot incapable of feeling, and hell, Fallon might have even watched Sophia’s programmers input the script she would recite for his show. But the performance is oddly sweet, even touching. It is possible to know, rationally, that Sophia is functioning as an ostentatious recording device and still be affected by her. It is possible to have an emotional response to a robot that is not necessarily tinged with fear.
Fallon is having a good time: he inches ever closer to Sophia’s face, and the audience laughs at their pantomime of sentimentality, and he pulls away just as the performance ends, and erupts into a long-suppressed fit of laughter, which looks like it was released from a place deep in his belly, somewhere lumpy and damp and vital.
