M3GAN: Can a murderous doll teach us what it means to be human?


Please note: this review contains spoilers.

Research takes you to surprising places. For me, those places were toy stores in New Hampshire, where I asked shop owners if I could photograph their figurines and dolls, for science. My cognitive neuroscience lab and I morphed those doll photos with human photos and asked people how alive the hybrids looked. It sounds ridiculous, but this research helped us better understand how the human brain moves from recognizing that something has a face to realizing that the face is social and attached to a mind. This recognition is crucial because it is a gateway to higher-level processing like perspective-taking, emotional resonance, and empathy. Much like those morphs, technological advancements are increasingly blurring our definition of what it means to be alive and have a mind. The titular character of Gerard Johnstone’s M3GAN is a humanoid robot who tests that boundary. Not only does the robot imitate human form; she also imitates, and then far exceeds, human intelligence. Ultimately the film is a campy, comedic horror, but in a world where robots are increasingly realistic and algorithms are increasingly intelligent, M3GAN compellingly uses a murderous doll to explore what it means to be human.

In the film, “Model 3 Generative ANdroid” – M3GAN, for short – is a next-generation toy developed by the brilliant roboticist Gemma. Equipped with a four-foot-tall Barbie-esque body and advanced learning capabilities, M3GAN “imprints” on her primary user, learning about them and increasingly responding to their needs as they spend time together. When Gemma suddenly becomes the guardian of her orphaned niece Cady, neither is prepared for the transition. Enter M3GAN. Gemma outsources the monotonous supervision and emotional labor of caring for Cady by bringing home M3GAN, who easily steps in as the ultimate support system. M3GAN is a best friend, grief counselor, rule enforcer, and protective guardian for Cady. Of course, this all goes wrong as M3GAN and Cady become increasingly attached. On the human side, Gemma hides from her caretaking responsibilities while Cady avoids connecting with others and facing her parents’ death. On the non-human side, M3GAN’s protective instincts turn murderous; she eliminates anything she perceives as a threat to Cady. By the end, M3GAN seems to revel in her rage, directing gleeful and gratuitous violence at anyone in her path.


Still from M3GAN, Courtesy of Universal Pictures

It’s no surprise that M3GAN has captured audiences' imaginations and generated tremendous box office returns. This is due to some brilliant marketing, but also to our fascination with edge cases of what it means to have a mind. Humans see faces in clouds, write about animating monsters with lightning, and muse about philosophical zombies. This deep interest in other minds exists because, for humans, survival of the fittest often means survival of the “groupiest”; we are innately tuned to seek out and interpret others’ minds. Without conscious awareness, your brain rapidly cleaves the world into living things and objects, spotlighting the living things for additional processing. Unlike objects, which mostly just sit there, things with minds need to be quickly detected because they are capable of helping and hurting us. Since we don’t have direct access to others’ mental states, we must rely on what is telegraphed in subtle movements of their bodies and faces. We are highly sensitive to human form because it helps us detect mental states.

This sensitivity means that to create humanoid robots or CGI humans, getting the visual cues right is a tall order. In fact, when you get it wrong, perceivers seem to experience feelings of revulsion. The “uncanny valley,” or “bukimi no tani,” is a theory put forth by the roboticist Masahiro Mori in 1970. He proposed that as objects appear more and more human-like, our affinity for them increases, but only up to a point. If the object gets too close to appearing human, there is a sudden revulsion; we strongly dislike the object and it falls into the uncanny valley. Later research in the early aughts proposed several reasons why this might be the case: humanoid robots remind us of death; humans are sensitive to small perturbations in others’ form as a way to protect themselves from danger and disease; and people are uncomfortable with category boundary shifts. Perhaps most interestingly, Gray and Wegner found that feelings of unease are caused by mismatches between expectations that something is alive and expectations about that thing’s ability to experience the world. A human who lacks the means to sense and feel may be just as unsettling as an embodied, intelligent, and responsive robot.

Roboticists and animators have long struggled with the uncanny valley. In 1989, Pixar won an Oscar for their groundbreaking computer-animated short, TIN TOY, which tells the story of a realistic-looking human baby terrorizing a set of toys by chasing, shaking, and breaking them. The baby, in its attempted realism, is deeply eerie. His eyes seem dead, his flesh far too solid. On the other hand, the toys are a delight. You see the emotion in them because you don’t have a mental model for the way they are supposed to look or move or sound. Avoiding realistic human form seems to allow our brains to access advanced social cognitive skills like emotional resonance, while bypassing the perceptual scrutiny that may lead to revulsion. Based on this insight, Pixar did not produce a movie with human protagonists for another fifteen years. When they did, The Incredibles characters were highly stylized to purposefully avoid the uncanny valley. Other movies have not been so wise. THE POLAR EXPRESS and BEOWULF both used advanced motion capture methods hoping to bring their CGI characters to life. Despite the millions poured into production, these movies were panned for their inability to convey humanness. Characters were called “digital waxworks” and “creepily unlifelike beings” where “you see the cladding but not the soul.” More recently, and more hilariously, the terrifying human-feline hybrids in Tom Hooper’s CATS (based on the Broadway musical by Andrew Lloyd Webber) apparently caused Webber to get a therapy dog.

Rather than avoid the uncanny valley, M3GAN intentionally drags us into it. The filmmakers could have made her more cartoonish. Alternatively, they could have cast a human without CGI and asked us to believe that it was a very life-like doll – the approach Steven Spielberg took in casting Haley Joel Osment in A.I. Instead, M3GAN is just realistic enough to be unsettling, and this design decision pays dividends by creating an eerie vibe throughout the film. Her eyes are too big, her voice slightly tinny, her blinks and movements a bit too jerky. She purposefully lacks the smoothness of a real human so that we remember she is a machine. Mentally, however, M3GAN seems more human than most – her eyes more sparkly, her intelligence sharper, her ability to size people up more precise, her memories more perfect, her feelings of attachment and rage more intense.


Still from M3GAN, Courtesy of Universal Pictures

This combination of a decidedly nonhuman body with a hyper-human mind creates an important tension throughout the film. The emotional resonance of her comforting Cady is interrupted by awkward, uncomfortable-looking gestures. A chase scene through the woods is made extra bizarre by M3GAN galloping on all fours like a deranged combination of Gollum and a charging gorilla. Every time M3GAN is physically harmed, it is deeply uncomfortable. You cringe when she is chained up and prodded by a researcher, sat atop and slapped by the young sociopath in the woods, and, quite literally, torn apart at the end. On one level your brain knows she is a robot, but her just-close-enough human form paired with advanced mental capacities sends you down a psychological chute of experiencing her pain. It’s no surprise that Bruce, the robot who ultimately saves the day, has a form that is only slightly anthropomorphic. Even more comfortingly, he doesn’t have a mind of his own. Bruce is controlled by the actions of a human operator; he does not make his own decisions. He is not uncanny, he is a tool.

There is much to learn about the human mind from our reactions to the film. Artificial intelligence is increasingly sophisticated and increasingly available. Millions have shared their AI-generated selfies from Lensa, companies use algorithms to screen job candidates, and ChatGPT has sparked waves of excitement and panic over its ability to generate sophisticated answers to complex questions. While these tools are unquestionably fascinating in their ability to organize information and generate content, they can create some pretty terrible output: sexualized images, race and gender bias, disinformation. Like M3GAN, these models build their content from existing data. It is more than a little unsettling that M3GAN has access to all the knowledge in the world and ends up violent. In some sense, we perceive that she chooses to be violent, but her violence is actually a damning indictment of her training set.

We’re not so different; our brains behave much the same way. We have the world’s most sophisticated neural network between our ears. We take in information from the outside world, organize it based on similarity, make predictions, make choices, and act. Technology like M3GAN forces us to ask: what are our own training sets?

Despite the questions M3GAN raises about artificial intelligence, mind perception, and our reliance on technology, the movie has broad appeal because it doesn’t dwell on these questions and never takes itself too seriously. M3GAN’s rampages manage to be equal parts terrifying and hilarious. In the end, M3GAN is defeated, but a last-minute cliffhanger teases that she may have moved her mind before her body was destroyed. Given that a sequel has just been announced for 2025, it seems M3GAN may have more to teach us about what it means to be human.
