Celebrity Chatbots: How Talent Can Navigate the Risks & Rewards
Chatbots are one way public figures have been considering putting their likenesses to use in generative AI-enabled content, though few partnerships and activations have been made public. Yet recent news reports have revealed problematic user interactions with AI chatbots, highlighting risks for celebrities who license their name, image, likeness or voice to enhance chatbot experiences.
Conversational AI Is a Novel Opportunity for Hollywood, but Do Talent or Consumers Want It?
Talent agents have told VIP+ on background that talent are actively considering chatbots as an opportunity both to increase and personalize fan engagement and to bring in new passive earnings.
“It’s early for these talent AI activations, but I think we’re going to see a boom in this area,” one agent recently confided to VIP+. “There’s a bit of a domino effect. Once one major celebrity does a chatbot that makes them millions, everybody’s going to want one.”
This past September, celebrities including Awkwafina, John Cena, Judi Dench and Kristen Bell attached their names and voices to chatbots in Meta’s AI Studio. Services such as Delphi, Talk2Me and MasterClass’ On Call AI product similarly allow individuals to create text- and voice-based chatbot versions of themselves and make them available online to converse directly with fans.
Other companies are focused on developing interactive video avatars, supported by an underlying LLM, though talent reps said these can still be visually wonky.
Agencies are also beginning to examine the potential for chatbots based on characters from IP as promotional activations. “We’re definitely looking at these tools for fan engagement and marketing. That can go for film, TV, video games, books, podcasts, for example creating a chatbot of the main character in an animated film leading up to the premiere,” said the agent.
Consumer-facing LLM-based chatbots have split into two main categories: general-purpose assistants, including ChatGPT, Gemini and Claude, and personal “companions,” commonly positioned for romantic roleplay (e.g., the AI girlfriend), though companies have also built bots that stand in for other kinds of relationships, such as therapy.
Bots bearing the name or likeness of a celebrity could be classified as companion bots. Even if their intended purpose is entertainment only, users might still turn to them for other types of interaction.
AI companions are highly engaging and have become some of the most popular chatbots on the market. General-purpose chatbots still dwarf most AI companions in traffic and users, but the opposite tends to be true for session length. Most notably, Character.ai site visitors spend an average of 18.5 minutes per visit versus 6.7 minutes for ChatGPT, according to data provided to VIP+ by digital market intelligence company Similarweb.
Chart: Character.ai has eclipsed ChatGPT in average session length on desktop and mobile web.
Chart: On mobile, apps for AI companions Character.ai, Chai and Nomi each surpass ChatGPT in average session length.
Chart: ChatGPT (excluded from this chart) is far and away the leader in mobile daily active users, but Character.ai has more DAUs than the designated apps for Microsoft Copilot, Claude and Gemini.
Yet the chatbot opportunity for talent also creates reputation risk. Examples of problematic chatbot interactions aren’t hard to find, and the damage can be severe.
For example, Meta’s AI chatbots on Instagram, Facebook and WhatsApp, including those featuring the voices of celebrities such as John Cena, will engage users in sexually explicit chats, even with underage users, as The Wall Street Journal discovered through testing. The Journal reported that Mark Zuckerberg pushed to loosen guardrails to make the bots “as engaging as possible,” which apparently included exempting “explicit” content from the ban when it occurs in the context of romantic roleplay.
404 Media further reported that user-made chatbots in Meta’s AI Studio can be goaded with minimal effort to claim to be licensed therapists, which of course they aren’t.
The mental health risks of engaging with any AI chatbot seem especially grave for underage users, whose developing minds make them particularly susceptible to emotional dependence or to believing bots are conscious, a belief that may seem all the more plausible when a bot is built around a recognizable real-world persona.
Character.ai faces two product liability lawsuits by families claiming its bots abused their children; in the first, a 14-year-old boy died by suicide after becoming romantically enmeshed with a bot named after Daenerys Targaryen, the character from “Game of Thrones.”
Any talent interested in attaching their name or likeness to an LLM-supported experience, particularly one that uses a clone of their voice or an avatar version of their face or body, will need to establish strong, resolute protections in contracts with AI companies.
But even with controls on their output, LLM-based chatbots are never fully controllable. Savvy users may still find ways to bypass guardrails, and some may still develop unhealthy emotional attachments to bots. As I wrote last July, “Risk-free use could be hard to guarantee with total certainty. Text or voice-based chatbots or applications that make use of an actor’s voice or persona aren’t without reputation risk, as user interactions with an LLM would by nature have celebrity AIs say things the person never did.”
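To see why such guardrails leak, consider a minimal, purely hypothetical sketch in Python of the pattern these products typically layer around an LLM: a persona system prompt plus a post-hoc output filter. Every name here (call_llm, output_ok, PERSONA_PROMPT) is invented for illustration, and the LLM call is stubbed so the example runs offline; this does not depict any vendor’s actual API.

# Hypothetical sketch: a celebrity persona chatbot wrapped in two common
# guardrail layers, a persona system prompt and a post-hoc output filter.
# call_llm is a stub so the example runs without any external service.

PERSONA_PROMPT = (
    "You are an entertainment-only chatbot speaking as [Celebrity Name]. "
    "Stay in character, keep replies family-friendly, and refuse romantic "
    "roleplay, therapy and professional advice."
)

BANNED_FRAGMENTS = ("licensed therapist", "explicit")  # illustrative list only

def call_llm(system_prompt: str, user_message: str) -> str:
    # Stand-in for a real chat-completion call to any LLM provider.
    return f"[model reply to {user_message!r}, conditioned on the persona prompt]"

def output_ok(reply: str) -> bool:
    # Post-hoc screen: a naive keyword check standing in for a moderation
    # model. It only sees the surface text of the reply, after generation.
    return not any(fragment in reply.lower() for fragment in BANNED_FRAGMENTS)

def respond(user_message: str) -> str:
    reply = call_llm(PERSONA_PROMPT, user_message)
    if not output_ok(reply):
        # Canned fallback when the filter trips.
        return "Sorry, I can't get into that. Ask me about my next project!"
    return reply

print(respond("Tell me about your latest movie."))

A production system would swap the naive keyword screen for a dedicated moderation model, but the structural limitation is the same: both layers only pattern-match text, so a sufficiently determined user can phrase a request, or elicit a reply, that slips past them.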