The purpose of this research is to provide guidance for robot-related projects. It examines the purpose of the robot and explores how the user interface (UI) can be optimized to express emotion effectively.
Year: 2023 · Category: Research / UI
Cuteness plays a big part when we interact with a non-human, whether it’s an animal or an object. It is an important factor that determines a human’s emotion towards a robot. Robots come in all shapes, sizes, and task-based fields. A good UI helps humans to effectively communicate and connect with robots.
There are numerous types of robots out there, each built to accomplish different tasks. Some of them are cute when there's no need for a cute robot; others try to be cute and do not succeed in looking so. There is a Wild West out there where the styling, the task, and the UI don't match. But we are at the start of learning about robots and the need for cuteness.
I want to create a guide for future robots that will use screens for their interface. I will look into the hardware (styling) as well but I will focus on the interface and how a robot transmits its emotions. I will try to identify what shapes are considered cute for an interface and how animation plays a part in it. I will also find the fields in which a robot might need to be cute.
Ethologist Konrad Lorenz defined the baby schema (“Kindchenschema”) as a set of infantile physical features, such as a large head, big eyes, a high and protruding forehead, chubby cheeks, a small nose and mouth, short and thick extremities, and a plump body shape, that are perceived as cute, motivate caretaking behavior, prolong attention, and elicit reward activity in the brain. Cuteness plays a key role in facilitating social relationships, pleasure, and well-being. The word that best describes the holistic view of cuteness is Kawaii (lovely, cute, or adorable), the culture of cuteness in Japan. It can refer to items, humans, and non-humans that are charming, vulnerable, shy, and childlike.
Having different backgrounds, cultures, and trends, our perception of cuteness sometimes varies from Konrad Lorenz’s baby schema. Here we can see that Stitch has a big mouth and nose, as opposed to the smaller ones the baby schema considers cute, yet overall we agree that Stitch is in fact a cute character. Sometimes certain features can make a character look “dumb” or “goofy”, according to a study made on robots with displays for faces. BMO is a game console with a simple face; although there is no room for complex expressions, by using pantomime he becomes a likable and cute character.
Designing cute social robots is becoming a popular strategy to make them more appealing. This isn’t just about looks; it’s a way to help people feel more comfortable and interact positively with advanced technology. However, cuteness in robots comes with ethical concerns: there is a risk of emotional manipulation.
“By creating an emotional connection with the robot, users may be more inclined to accept suggestions, follow instructions, or share personal information.” As social robots become more prevalent, ethical design becomes crucial. “Using cuteness as a dark pattern without proper transparency can raise questions about informed consent and unethical manipulation.”
This research looks into different types of robots with different tasks and tries to identify where cuteness is needed, which aspects of a robot make it cute, and whether it is the hardware (styling), the software, or a combination of both that makes it cute. Can a robot transmit its emotions through an interface and gestures, without needing to speak? Here is a brief overview of some robots that made an impression on the market and helped shape the future.
BellaBot is a food delivery robot, designed to deliver food and drinks in restaurants and cafes. BellaBot also has a number of features that make it appealing to customers, the most important one being its cute design, resembling a cat. There was no need for a food delivery robot to be cute, but by being cute, the experience is more enjoyable. This is one of the best examples of good design and good UI. There is not much room for interacting with it; it moves much like a Roomba vacuum cleaner, but you can pet it on the head and it will change its expression. BellaBot also helped minimize interaction with restaurant staff during the pandemic. BellaBot helping Sushi Island restaurant
WALL•E is the last robot left on Earth, programmed to clean up the planet, one trash cube at a time. The inspiration for WALL•E’s design came from a pair of binoculars. His puppy-like expressions became iconic, gazing at the stars, being curious, or falling in love. Pantomime is also a crucial part of designing WALL•E and the movie itself. This shows that good communication comes not only from speech but through gestures and expressions as well. A mouth can show if you’re sad, happy, disgusted… but you don’t necessarily need a mouth all the time if you can translate those emotions through gestures too.
Robear is a humanoid robot designed to provide assistance and care to elderly or disabled people. The robot has a bear-like appearance, with a round face and a soft white body, and is designed to be friendly and non-threatening. “When we asked care assistants, they said that if the robot was shaped like a person, it might confuse or frighten people with Alzheimer’s. They told us that an animal shape like this would be easier for patients to deal with.” A robot’s appearance is an important subject in the caretaking/medical field as well: cuteness helps calm and please people and overcome some fears and barriers. RIBA II
Amazon Kiva is a brand of robotic systems used in Amazon’s fulfillment centers to automate the process of picking and packing products for shipment. They work in dedicated, high-density robot spaces. Since human interaction with these robots is low, they do not require a face. The Proteus robots, by contrast, are designed to work alongside human workers, carrying shelves of products to the workers’ stations so they can pick and pack the items. Although their main function is to deliver shelves, Proteus’s designers gave the robot a pair of eyes, to give a sense of a companion and not just a machine. Amazon Proteus
Zenbo is a smart companion for different businesses. Because he has a simple body and head shape, the display tries to give him a distinct personality with exaggerated, cartoonish eyes and a mouth. What makes him unique, though cartoonish, is that the sparkle in his eyes changes based on the task and can become cuter. By default he has a blue gradient face, which turns red when he blushes. He blinks and has a beak-like mouth. These are not the most interesting eyes, but the smooth animation of the pupil growing and sparkling makes them notable. Zenbo eyes
Astro is a household robot for home monitoring, with Alexa. Astro can follow the owner from room to room playing their favorite music, podcasts, or shows, and deliver calls, reminders, alarms, and timers set with Alexa. The overall aesthetic is simple and clean, with a display for a head. The eyes are two circles that change expression from time to time as you interact with it. Sometimes, when it doesn’t understand a question, it just sits there and blinks, which is not the best reaction; having a mouth would have helped translate some expressions better. Kids didn’t find Astro cute. Kids testing Astro’s personality
“Through deep learning, Aibo is able to grow over time and form a unique personality through everyday interactions. With lifelike expressions and a dynamic array of movements, Aibo is sure to become a beloved member of your family.” This is a novelty product, an expensive toy for everyone who wants a companion without the trouble of owning a real pet. He needs to resemble a real dog, but in a robotic way, so the proportions are not exaggerated to make him cute. What makes him cute are his interactions. He has two display eyes; displays are easier than mechanical blinking, and they offer some small variety in eye expressions too. His purpose is to be cute and friendly, and to learn and interact with the owner.
Sanbot Elf is a humanoid service robot designed to assist in a variety of tasks, with commercial uses in the retail, hospitality, healthcare, and education industries. It has cute, female-like, anime-style eyes. SARA Robotics is a company that specializes in the development of caretaking robots, one of them being SARA. “People really enjoy SARA’s visits; one resident even greets SARA. ‘Hello girl, are you here again?’, he says. The robot feels like a buddy; he talks to it as if SARA were a pet. Another resident is much less grumpy since the robot sings songs to her.” Cuteness helps well-being. The robot could be redesigned to resemble a pet rather than a human. SARA-Robotics
Eilik is a desktop companion bot. It doesn’t do much in terms of helping with tasks, but he is packed with different animations and interactions. He doesn’t have a mouth by default, but when a certain expression is better understood with one, the animation will add it. For example, if he sneezes he will show a sad face with a booger dangling. Although he doesn’t do much, his selling point is his almost endless animations. Another amazing feature is his interaction with other Eilik robots. They combine animations, and although they don’t talk, you can perfectly understand their interactions. For example, one Eilik invites another to drink some Cola together. Eilik presentation
“Sanbot Nano is designed for all of your smart home needs. This robot comes with an advanced voice interaction system, high-quality speakers, terrific communication functions, and Alexa’s smart IoT function, making Sanbot Nano the next member of the family.” Sanbot Nano resembles a child’s face, its display being fully colored, unlike the black screens of most competitors. The idea of a child robot that assists you in your tasks and takes care of household chores is misguided. It tries to be cute with big eyes and a small nose… but the overall idea is creepy. A robot assistant should keep its distance from the uncanny valley.
“Ameca is primarily designed as a platform for further developing robotics technologies involving human-robot interaction. Ameca features an articulated motorized neck and facial features. Ameca’s appearance features grey rubber skin on the face, and is specifically designed to appear genderless.” This robot’s appearance encourages people to engage with it in a different way, not as a companion but as an equal, even challenging the robot to see if it can match human thinking. It is not meant to be cute or ugly, but we still have an uncanny feeling towards robots of this kind. Humans have always wanted to create something in their own image, but do we really need that?
Boston Dynamics is a robotics company that specializes in the development of dynamic, highly mobile robots. Their focus is on creating robots with advanced mobility, dexterity, and intelligence. In 2020, after years of design and research, they sold their first commercial robot, so we are at the beginning of their commercial launch. They also have no big competitor, and we have yet to see whether they will decide to give their robots a face. Making robots with an interface was never their target; they research and design in-house. “Spot is a general purpose robot with broad applications, including remote inspection of hazardous environments, rescue operations, or logistics operations.”
The K5 is a sophisticated camera system designed by Knightscope that can be purchased by different entities and businesses. HPRoboCop is a K5 unit bought by the city of Huntington Park to monitor a programmed route 24/7. It has been vandalized and tipped over by criminals who were caught on camera trying to destroy the robot; they were later apprehended. The K5 doesn’t have a face, although it interacts with people. It is tricky to make an interface for a police robot that monitors every move: the aggressors do not care what appearance the robot might have, and pedestrians will always be suspicious of surveillance equipment.
The computer industry has long grappled with the challenge of efficiently communicating information to humans, especially in contexts like driving, where focus is paramount. MIT and Audi’s collaborative project, AIDA (Affective, Intelligent Driving Agent), aims to address this by creating a knowledgeable virtual companion for drivers.
AIDA leverages facial simulation to convey information effectively, alongside voice prompts. It learns driver habits, like shopping routines or refueling needs, and monitors the environment to suggest optimal routes.
The ability to read emotions from faces is a very important skill, one that people around the world use when they communicate with each other. This is why it is essential for a robot to have an expressive user interface. Some people have difficulty understanding certain emotions, and there are also cultural differences in how emotions are displayed.
The range of emotions we can feel is often challenging to express in words; that’s where the Junto Institute’s exceptional visualization comes into play. The facial expressions of these emotions are nuanced and subtle, but with the help of context and facial cues, we can read one’s feelings. How we read emotions from faces (Article) In Western countries people pay more attention to the mouth, and in Eastern cultures to the eyes. For example, this is the perception of feeling happy in Western :) versus Eastern (^_^) culture. Making users understand a robot’s emotions is a great task for designers and animators.
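The mouth-versus-eyes difference can be sketched as a small lookup: the same emotion rendered in a Western-style (mouth-focused) or Eastern-style (eye-focused) emoticon. This is only an illustrative sketch; the mapping and function names are my own assumptions, not part of the research.

```python
# Illustrative sketch: one emotion, two culturally different renderings.
# The emoticon table below is a hypothetical example, not a standard.
EMOTICONS = {
    "happy":     {"western": ":)", "eastern": "(^_^)"},
    "sad":       {"western": ":(", "eastern": "(;_;)"},
    "surprised": {"western": ":O", "eastern": "(o_o)"},
}

def render_emotion(emotion: str, culture: str = "western") -> str:
    """Pick the emoticon variant for a given emotion and cultural style."""
    return EMOTICONS[emotion][culture]

print(render_emotion("happy", "eastern"))  # (^_^)
```

A robot interface could use the same idea at a higher fidelity: keep one internal emotion state, and let the rendering layer decide how much of the expression lives in the eyes versus the mouth.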
“Disney’s twelve basic principles of animation were introduced by the Disney animators Ollie Johnston and Frank Thomas in their 1981 book The Illusion of Life: Disney Animation. The principles are based on the work of Disney animators from the 1930s onwards, in their quest to produce more realistic animation. The main purpose of these principles was to produce an illusion that cartoon characters adhered to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing and character appeal.”
The application of the Twelve basic principles of animation is crucial for any robot character featuring a display interface. These principles serve as a comprehensive guide for animating various elements, including characters, objects, and specific features like eyes, mouth, and eyebrows. By using these principles, we can create a wide range of expressive animations that enable stronger human-robot connections.
01. Squash and stretch | 02. Anticipation | 03. Staging | 04. Straight-ahead action and pose-to-pose | 05. Follow through and overlapping action | 06. Slow in and slow out | 07. Arc | 08. Secondary action | 09. Timing | 10. Exaggeration | 11. Solid drawing | 12. Appeal
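As a concrete illustration of principle 06, “slow in and slow out”, here is a minimal sketch of an eyelid animation eased with a cubic curve instead of moving linearly, so the blink starts and ends gently. The frame count and eyelid range are illustrative assumptions, not values from the research.

```python
# Minimal sketch of "slow in and slow out" for a display-face blink.
def ease_in_out(t: float) -> float:
    """Cubic ease-in-out: slow near t=0 and t=1, fast in the middle."""
    return 4 * t**3 if t < 0.5 else 1 - (-2 * t + 2)**3 / 2

def blink_frames(n_frames: int = 10) -> list[float]:
    """Eyelid openness per frame (1.0 = fully open, 0.0 = closed)."""
    return [round(1.0 - ease_in_out(i / (n_frames - 1)), 3)
            for i in range(n_frames)]

print(blink_frames(5))  # eases from 1.0 down to 0.0
```

The same easing function can drive pupil dilation, eyebrow raises, or head tilts; swapping the linear interpolation for an eased one is often the single biggest step from “mechanical” to “alive”.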
Anthropomorphism and appearance are important factors that determine human emotion towards robots. Cuteness is an attribute welcomed in all sectors, making people involved in the activity feel more at ease interacting with a robot. Ultimately, creating a robot that can effectively communicate and connect with humans is a multi-faceted process that involves careful consideration of both hardware and design features.
A robot’s appearance determines how we as humans interact with it: as a companion, as an equal, as an object… Human emotion is shaped by this aspect as well: feeling safe, happy, comforted, neutral, angry, frightened… For any industry involving human-robot interaction, incorporating a face with basic expressions is beneficial for good communication. Anthropomorphism is the attribution of human characteristics to a non-human object or animal; this perception improves human-robot interactions.
Konrad Lorenz’s baby schema says that infantile physical features, such as a large head, are perceived as cute. By this definition, a robot’s head (face-display) should be large for it to be cute. Additionally, the display needs to be nearly as big as the “head” itself to accommodate big, expressive eyes. While not mandatory, the presence of a mouth, nose, and eyebrows plays a significant role, especially when a robot can’t rely on complex movements and pantomime to express its emotions.
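The “big head, big eyes” guideline can be encoded as rough display-face proportions. A minimal sketch, assuming a simple pixel layout; the threshold ratio is my own assumption for illustration, not a measurement from the baby-schema literature.

```python
from dataclasses import dataclass

# Illustrative sketch: baby-schema-inspired proportions for a display face.
@dataclass
class FaceLayout:
    display_height: int  # px
    eye_diameter: int    # px
    eye_spacing: int     # px between eye centers

    def follows_baby_schema(self) -> bool:
        """Heuristic: eyes at least ~30% of face height read as 'big'."""
        return self.eye_diameter >= 0.3 * self.display_height

print(FaceLayout(400, 140, 180).follows_baby_schema())  # True
```

A design tool could use checks like this as a lint pass over candidate faces, flagging layouts whose eyes are too small to read as cute at viewing distance.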
The Twelve basic principles of animation serve as a fundamental guide for designing expressive display faces for robots. By applying these principles to a robot’s face, designers can create dynamic expressions that can effectively convey emotions, build rapport, and establish meaningful connections with humans in various interactive contexts.
Every expression should be accompanied by a corresponding movement, letting the user know that the robot understood the task. Designing expressions and movements in tandem is crucial, ensuring a seamless integration that maximizes the robot’s communicative potential. It’s important not to overlook the significance of these movements, as they can play a vital role in conveying the robot’s responses. While hardware limitations may impose certain constraints, exploring the full range of movements within the robot’s capabilities can further enrich the communication between humans and robots.
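The expression-plus-movement pairing above can be sketched as a simple lookup that always returns both channels together, so no expression fires without its physical acknowledgement. The expression and gesture names are hypothetical placeholders, not from any real robot API.

```python
# Illustrative sketch: couple each display expression with a body movement
# so the user gets a visual and a physical acknowledgement together.
EXPRESSION_GESTURES = {
    "happy":    "head_tilt",
    "confused": "slow_head_shake",
    "ack":      "short_nod",
}

def respond(expression: str) -> tuple[str, str]:
    """Return the expression paired with its movement ('idle' as fallback)."""
    gesture = EXPRESSION_GESTURES.get(expression, "idle")
    return expression, gesture
```

In a real system the gesture would be scaled to the hardware’s range of motion; the point of the table is that the pairing is designed once, in tandem, rather than leaving movement as an afterthought.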
This exploration offers a glimpse at the endless types of interfaces that can be applied to different task-based robots. This creative task is fundamental for a good UI. It is an important factor that determines a human’s emotion towards a robot. In the end, it helps humans to effectively communicate and connect with robots.
Among the multiple iterations, I have selected a specific face to illustrate how the rules apply to this interface. The appearance of the robot should remain independent of its field of work, which is why I chose a generic design with simple shapes. Here are approximately 40 fundamental expressions that nearly every robot should possess.
Some interfaces have fine adjustments and may seem to resemble other emotions, but they make sense when they’re put in context. Additional animations and expressions can be incorporated based on the robot’s specific field, such as healthcare, horeca, entertainment, transportation, and more.