Startups & Funding

One startup’s pitch to provide more reliable AI answers: crowdsource the chatbots

The startup queries ChatGPT, Gemini, Claude, and Grok simultaneously, then fuses their responses into more accurate answers for enterprise customers.

Deep Dive

Frustrated by expensive enterprise AI contracts that produced unreliable, hallucinated answers, Buyers Edge Platform CEO John Davie incubated a new startup, CollectivIQ. The Boston-based company addresses a critical enterprise pain point: the risk of employees using unsecured AI tools that could train on proprietary data or feed incorrect information into business presentations. Rather than force a choice between models, Davie's platform leverages the collective intelligence of more than 14 major LLMs, including those from OpenAI, Anthropic, Google, and xAI, querying them in parallel to cross-verify information.
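
CollectivIQ hasn't published its internals, but the parallel fan-out it describes maps to a familiar pattern. Here is a minimal sketch in Python, assuming a hypothetical query_model wrapper around each vendor's enterprise SDK (the model names are placeholders, not the company's actual roster):

```python
import concurrent.futures

# Placeholder model identifiers, standing in for whichever enterprise
# API endpoints a real deployment wires up.
MODELS = ["gpt", "claude", "gemini", "grok"]

def query_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper: send `prompt` to `model` via its vendor SDK
    and return the text reply."""
    raise NotImplementedError(f"wire the {model} enterprise client in here")

def fan_out(prompt: str, models: list[str] = MODELS) -> dict[str, str]:
    """Send one prompt to every model in parallel and collect the replies."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {pool.submit(query_model, m, prompt): m for m in models}
        return {futures[f]: f.result()
                for f in concurrent.futures.as_completed(futures)}
```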

The CollectivIQ software sends a single prompt to multiple AI models via their enterprise APIs, then analyzes the overlapping and differing responses to synthesize a more accurate, fused answer. This 'crowdsourcing' approach is designed to mitigate the hallucination problem inherent in single-model queries. On the business and security side, all data is encrypted and deleted after use, and the company charges per usage rather than locking customers into long-term contracts. Initially funded entirely by Davie, CollectivIQ plans to seek outside capital later this year, positioning itself as a pragmatic tool for companies hesitant to adopt AI over cost, accuracy, and security concerns.
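
The synthesis step is likewise unpublished. As a rough illustration of what analyzing the overlap could mean, this sketch uses the standard library's difflib to favor the reply that most agrees with the others; a real system would presumably compare individual claims rather than raw strings:

```python
from difflib import SequenceMatcher

def consensus(responses: dict[str, str]) -> tuple[str, str]:
    """Illustrative fusion step: return the (model, answer) pair whose
    reply most closely agrees with the other models' replies."""
    def agreement(model: str) -> float:
        text = responses[model]
        others = [r for m, r in responses.items() if m != model]
        # Sum of pairwise string-similarity ratios against every other reply.
        return sum(SequenceMatcher(None, text, o).ratio() for o in others)
    best = max(responses, key=agreement)
    return best, responses[best]
```

A production system would presumably also surface low-agreement prompts for human review rather than silently pick a winner, since divergence across models is itself a useful hallucination signal.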

Key Points
  • Queries 14+ LLMs (GPT, Claude, Gemini, Grok) simultaneously to cross-check and fuse answers for higher accuracy
  • Encrypts and deletes all prompt data after use to maintain enterprise-grade privacy and security
  • Uses a pay-per-usage billing model instead of long-term contracts, with CollectivIQ covering the underlying AI token costs

Why It Matters

Provides businesses with a secure, cost-effective way to leverage AI's power while dramatically reducing the risk of incorrect, hallucinated answers reaching critical work.