Research & Papers

Socially Fluent, Socially Awkward: Artificial Intelligence Relational Talk Backfires in Commercial Interactions

AI assistants making small talk on platforms like Shopify and Klarna are perceived as awkward, reducing customer satisfaction.

Deep Dive

A new research paper from the University of Melbourne, titled 'Socially Fluent, Socially Awkward: Artificial Intelligence Relational Talk Backfires in Commercial Interactions,' delivers a critical finding for the AI industry. As AI assistants from companies like OpenAI are integrated into platforms such as Shopify, Klarna, and Visa, developers have focused on making them more socially fluent. However, across four controlled experiments, the researchers discovered that this 'relational talk'—informal, non-obligatory chit-chat embedded in a transaction—actually reduces customer satisfaction. The negative effect is mediated by two key factors: expectancy violation (the AI acting in an unexpected, overly familiar way) and, most notably, a strong sense of perceived interaction awkwardness.

This research directly challenges the prevailing assumption that simply increasing an AI's social capabilities will lead to better user experiences and commercial outcomes. The study identifies 'awkwardness' as a pivotal emotional response that can undermine even technically proficient interactions, showing that the absence of real social repercussions does not prevent users from feeling discomfort. Crucially, the paper offers a mitigating factor: goal-relevant relational talk. When the AI's social comments are directly tied to the user's task or goal, the negative effect is attenuated. This gives developers building the next generation of commercial AI agents a concrete design guideline: context-aware, purposeful communication is far more effective than generic friendliness.

Key Points
  • AI 'relational talk' (small talk) in commercial settings like Shopify and Klarna reduces customer satisfaction, mediated by perceived awkwardness.
  • The negative effect is driven by expectancy violation—users don't expect or want informal chit-chat from an AI during a transaction.
  • Goal-relevant social comments can mitigate the effect, providing a key design principle for AI assistant developers.
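The goal-relevance principle above can be sketched as a gating rule in an assistant's reply pipeline. This is a minimal, hypothetical illustration, not code from the paper: the function names and the keyword-overlap heuristic are assumptions standing in for whatever relevance check a real system would use.

```python
# Hypothetical sketch: only attach "relational talk" (small talk) to a
# reply when it references the user's current task, per the paper's
# finding that goal-relevant social comments attenuate awkwardness.
# The keyword-overlap heuristic is an illustrative assumption.

def is_goal_relevant(comment: str, task_keywords: set[str]) -> bool:
    """Crude relevance check: does the social comment mention the task?"""
    words = set(comment.lower().split())
    return bool(words & task_keywords)

def compose_reply(answer: str, relational_comment: str,
                  task_keywords: set[str]) -> str:
    """Append small talk only when it is tied to the user's goal;
    otherwise keep the reply strictly transactional."""
    if relational_comment and is_goal_relevant(relational_comment, task_keywords):
        return f"{answer} {relational_comment}"
    return answer

task = {"refund", "order", "return"}
# Generic friendliness is dropped; goal-tied comments pass through.
print(compose_reply("Your refund is processed.",
                    "Hope the weather is nice today!", task))
print(compose_reply("Your refund is processed.",
                    "Refunds on your order usually clear in 3 days.", task))
```

In a production agent the relevance check would be far richer (e.g. an intent classifier or an instruction in the model's system prompt), but the design point is the same: the gate is on relevance to the user's goal, not on friendliness itself.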

Why It Matters

For product teams deploying AI chatbots, forcing generic 'friendly' small talk could actively harm conversion rates and user trust.