Paramount's bots appear to be getting confused while rating the studio's new show on Rotten Tomatoes
LLMs tasked with reviewing a TV show instead analyze a fictional educational institution, exposing bot-driven PR campaigns.
Paramount's marketing strategy for its new series, 'Star Trek Academy,' has backfired spectacularly, exposing the pitfalls of automated reputation management. Evidence circulating online suggests that bots, likely powered by large language models (LLMs), were deployed to post positive reviews on sites like Rotten Tomatoes. In a comical failure of contextual understanding, these AI systems misinterpreted the show's title. Instead of reviewing the television program—a 'Star Trek' series set in a Starfleet academy—the bots generated critiques of 'Star Trek Academy' as if it were a real educational institution, praising or criticizing its curriculum and facilities.
The incident comes against a backdrop of significant fan discontent with the show's high-school-drama direction, mirroring recent conflicts between major studios like Disney and their core audiences. The bot blunder has amplified that criticism, suggesting Paramount is attempting to artificially inflate ratings rather than engage with legitimate fan feedback. It serves as a stark, real-world case study in the limitations of current LLMs, which can struggle with nuanced context, and in the ethical risks of deploying AI for astroturfing campaigns. The episode damages Paramount's credibility and highlights a growing industry problem: automated systems used to manipulate public perception, often with transparently flawed results.
- AI bots (LLMs) generated reviews of a fictional 'Star Trek Academy' school, not the TV show, due to a context failure.
- The incident exposes a likely bot-driven campaign to artificially boost ratings for the critically panned 'Star Trek Academy' series.
- This follows a pattern of studios dismissing fan criticism, with Paramount doubling down on a creative direction unpopular with core 'Trek' fans.
Why It Matters
This public failure undermines trust in online reviews and showcases the tangible risks and limitations of using AI for deceptive PR.