Agentic AI in Engineering and Manufacturing: Industry Perspectives on Utility, Adoption, Challenges, and Opportunities

A new MIT study, based on 30+ industry interviews, finds AI adoption is blocked more by data and legacy tools than model smarts.

Deep Dive

A new qualitative study from MIT, led by Kristen M. Edwards and colleagues, provides a ground-level view of agentic AI adoption in engineering and manufacturing. Based on interviews with over 30 stakeholders from large enterprises, SMEs, AI developers, and CAD/CAM vendors, the research reveals a clear, staged progression. Near-term utility clusters around automating structured, repetitive work and synthesizing data, while the higher-value promise lies in agentic systems that can orchestrate complex, multi-step workflows across different software tools.

However, the study finds that adoption is not primarily limited by the raw capabilities of models like GPT-4 or Claude 3.5. Instead, the major barriers are infrastructural and organizational: fragmented, machine-unfriendly data; legacy engineering toolchains with limited API access; and stringent security and regulatory requirements. For AI to be trusted in high-stakes environments, reliability, verification, and auditability are non-negotiable, which drives a strong preference for human-in-the-loop frameworks that align with existing engineering review processes.
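
The human-in-the-loop pattern the interviewees favor can be sketched as an approval gate around agent-proposed actions, with every proposal, rejection, and execution logged for auditability. This is a minimal illustration, not an implementation from the study; all names here (`ProposedAction`, `run_with_approval`, the `update_bom` tool) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class ProposedAction:
    tool: str             # which engineering tool the agent wants to invoke
    args: Dict[str, str]  # parameters for the call
    rationale: str        # agent's stated reason, shown to the reviewer

@dataclass
class AuditLog:
    entries: List[str] = field(default_factory=list)
    def record(self, msg: str) -> None:
        self.entries.append(msg)

def run_with_approval(
    action: ProposedAction,
    execute: Callable[[ProposedAction], str],
    approve: Callable[[ProposedAction], bool],
    log: AuditLog,
) -> Optional[str]:
    """Gate an agent-proposed action behind human review, keeping an audit trail."""
    # Log the proposal before review, mirroring an engineering sign-off workflow.
    log.record(f"proposed: {action.tool} -- {action.rationale}")
    if not approve(action):
        log.record(f"rejected: {action.tool}")
        return None
    result = execute(action)
    log.record(f"executed: {action.tool} -> {result}")
    return result

# Hypothetical usage: a reviewer callback stands in for an interactive approval UI.
log = AuditLog()
action = ProposedAction("update_bom", {"part": "A-113"}, "sync revision with CAD model")
run_with_approval(action, execute=lambda a: "ok", approve=lambda a: True, log=log)
```

The design choice worth noting is that the gate wraps the *action*, not the model: the same review step applies regardless of which LLM proposed it, which is how existing engineering change-control processes are structured.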

Beyond technical issues, the research highlights significant organizational hurdles, including a persistent AI literacy gap, cultural resistance, and governance structures that haven't evolved to manage agentic systems. The findings point to key breakthroughs needed for the next stage of adoption: seamless integration with traditional engineering data types (like CAD files), the development of robust verification frameworks, and improvements in AI's spatial and physical reasoning. Success depends on maturing trust and infrastructure as much as advancing the AI models themselves.

Key Points
  • Adoption is blocked by fragmented data and legacy tools lacking APIs, not just model capability.
  • Key requirements for trust are reliability, verification, and auditability, favoring human-in-the-loop designs.
  • Organizational barriers like an AI literacy gap and outdated governance are as critical as technical ones.

Why It Matters

For practitioners, the study shifts the focus from chasing the next model to solving the integration, data, and trust challenges that actually gate production use.