Annie Schalnat

Applying Communication Theories to Humanoid Robots

[Image: human vs. machine brain]

It is 2023, and robotic baristas are beginning to fill the role of the friendly face serving consumers their morning coffee. Algorithms and artificial intelligence recommend new products, playlists, and services to us; we have come to rely on these powerful computers that fit in our pockets. As humans continue to invent ways to make life more convenient, technological assistants are becoming routine. Many popular AI interfaces, such as Siri and Alexa, as well as physical robots like Zora, are designed with anthropomorphic cues to mimic human interactions. Whether intentional, for the sake of the user’s comfort, or unintentional, anthropomorphism greatly influences the way researchers and engineers design robots. By centering human-like qualities such as bipedalism, complex language use, and human social cues as “best,” developers overlook the opportunity to build a new kind of relationship with robotic and virtual assistants.

Humanoid social robots, or HSRs, are man-made machines that “learn” from a given data set, appear in physical and virtual forms, and are designed to interact with and mimic humans to varying degrees. By definition, describing a robot as humanoid in its appearance cues implies that it is designed to be perceived similarly to people. Humans are capable of generating hundreds of thousands of nonverbal behaviors and expressions, a range beyond technology’s current capabilities… but the future is coming, and what then? Today’s robots can demonstrate form anthropomorphism through sensory cues such as voice and physical appearance, but even the most advanced remain distinguishable from their human counterparts because they lack behavioral anthropomorphism. As humanity continues to integrate technology into nearly every aspect of our lives, research is needed to inform fabrication and marketing decisions.

Key Questions

  1. Are existing theories about interpersonal, human-human communication applicable to studying human-machine communication?

  2. Are there differences in communicative scripts between human-human and human-robot interactions?

  3. Should the goal of robotic designers be to mimic humans?

In human interpersonal connections, social exchange theory makes intuitive sense. It rests on the fundamental assumptions that humans require resources for survival, that we attain those resources through exchange with other humans, and that we inherently seek to maximize benefits or rewards while minimizing costs. Resources can be tangible, such as food, water, and shelter, or abstract, such as status or self-image. In this context, humanoid social robots fall into the tangible goods category, restricting humans’ ability to form relationships comparable to human-human interactions. As a product, the robot is fundamentally designed to be of service to humans, to be owned rather than to own resources that could be exchanged socially as the theory posits. Because HSRs are created to be servile, a permanent inequity exists; they are inherently designed to maximize benefits for humans, leading the autonomous human actor to make unilateral decisions based on one self-serving principle: maximize human benefits and never mind the robot.

Forming interpersonal relationships is analogous to peeling an onion, as relationships evolve over time through the process of mutual self-disclosure. As humans bond and comfort increases, the layers of the “onion,” or the individual’s life experiences, are shared. These layers are characterized by breadth, the range of information shared, and depth, which involves the core beliefs, ideals, and values the individual relates to their self-concept. As the frequency of communicative interactions between humans increases, so do the breadth and depth of conversational topics, deepening human-human relationships.

In contrast, HSRs are limited in their capacity to distinguish between users, and the breadth and depth of their experience is confined to their training data set. While there are convincing arguments that humans can form meaningful relationships with robots, such as the one described in “The Jessica Simulation” by journalist Jason Fagone, predominant interpersonal communication theories about relationship formation and development are not applicable to human-robot interactions. As interpersonal research continues to seek explanations for human mannerisms, a distinction must be made between human-human communication and human-machine communication. Furthermore, research must expand on interpersonal communication models, adjusting for the assumption that both interactants are human. To truly understand human-machine communicative scripts, longitudinal research is needed to assess relational development and account for differing levels of technological experience.

As humans continue to integrate artificial intelligence into so many aspects of our daily lives, it is of critical importance that we understand human-robot interactions. From a theoretical perspective, communication constitutes reality; without communication, there would be no organizations or societal structures. Ultimately, existing interpersonal theories stand as a kind of Turing test, a basis of comparison for how human-like an HSR can be, but models must be adapted and created to conceptualize human-robot interactions.

Citations:

Schalnat, A. E. (2023). Applying Interpersonal Theories to Humanoid Robots. Arizona State University.

Fagone, J. (2021). The Jessica simulation: Love and loss in the age of A.I. San Francisco Chronicle. https://www.sfchronicle.com/projects/2021/jessica-simulation-artificial-intelligence/