AI Safety

The Landscape of Generative AI in Information Systems: A Synthesis of Secondary Reviews and Research Agendas

A synthesis of 28 papers finds GenAI adoption is constrained by a critical socio-technical misalignment.

Deep Dive

A comprehensive new study from a 17-author international team synthesizes the current state of knowledge on Generative AI (GenAI) in organizational contexts. By systematically reviewing 28 secondary studies and research agendas published since 2023, the paper, "The Landscape of Generative AI in Information Systems," identifies a core tension. While GenAI tools like GPT-4 and Claude 3 offer transformative potential for productivity, their adoption is constrained by a triad of challenges: technical unreliability (hallucinations, performance drift), societal-ethical risks (bias, misuse, skill erosion), and a systemic governance vacuum around privacy, accountability, and intellectual property.

Interpreted through a socio-technical lens, these findings reveal a critical misalignment. The fast-evolving technical capabilities of GenAI are outpacing the slower-adapting social structures—organizational procedures, regulatory frameworks, and societal values—needed to govern them effectively. This gap positions Information Systems (IS) research as crucial for achieving "joint optimization." The authors propose a reoriented research agenda that shifts IS scholarship from passively analyzing AI's impacts to actively shaping its responsible integration. This involves focusing on hybrid human-AI ensembles, developing validation methods for probabilistic systems, and creating adaptive governance models that can keep pace with technological change.

Key Points
  • Synthesizes 28 secondary studies to map GenAI's organizational challenges, identifying technical, ethical, and governance hurdles.
  • Reveals a core socio-technical misalignment: AI tech evolves faster than the social systems (rules, skills, ethics) needed to govern it.
  • Proposes a new IS research agenda focused on actively co-shaping AI with human systems, not just analyzing its impacts.

Why It Matters

For professionals implementing AI, this highlights that the biggest barrier isn't the technology itself, but building the human and governance systems around it.