1.14.5a2
Critical fixes prevent lost responses and token miscounts in multi-agent workflows.
crewAI, the popular multi-agent orchestration framework, has pushed its v1.14.5a2 pre-release with a focused set of bug fixes aimed at production reliability. Key patches include restoring task outputs in finally blocks (preventing silent data loss), properly counting thoughts_token_count in completion tokens, and preserving task outputs across async batch flushes. The release also prevents result_as_answer from returning hook-block messages or errors as final answers, ensuring agents only surface intended responses.
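The finally-block fix addresses a common failure class: a result is computed, but an exception in a later hook or cleanup step causes it to be silently dropped. A minimal sketch of the pattern, with hypothetical names (not crewAI's actual internals):

```python
class Task:
    """Illustrative task wrapper; names are assumptions, not crewAI's API."""

    def __init__(self):
        self.output = None

    def execute(self, run_fn, post_hook=None):
        result = None
        try:
            result = run_fn()
            if post_hook:
                post_hook(result)  # may raise
            return result
        finally:
            # Record the result in the finally block so a failing hook
            # cannot silently discard the agent's response.
            if result is not None:
                self.output = result
```

Even if `post_hook` raises, `task.output` still holds the computed result, so callers can recover it instead of losing the response.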
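The token-counting fix matters because some providers report "thinking"/reasoning tokens in a separate field; omitting that field undercounts completion usage and therefore cost. A hedged sketch of the idea, with field names assumed for illustration:

```python
def completion_tokens(usage: dict) -> int:
    """Sum visible completion tokens and separately reported 'thoughts'
    tokens. Field names are illustrative, not a specific provider's API."""
    return (
        usage.get("completion_tokens", 0)
        + usage.get("thoughts_token_count", 0)  # reasoning tokens, if reported
    )
```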
Additionally, the update prevents shared LLM stop words from mutating across agents (critical for parallel runs), improves BaseModel input handling in convert_to_model, and wraps output conversion in async paths with acall for consistency. Documentation was also updated to cover new environment variables. Contributors include NIK-TIGER-BILL, greysonlalonde, lorenzejay, minasami-pr, theCyberTech, and wishhyt. This release focuses on stability rather than new features, making it a recommended upgrade for teams running crewAI in production.
- Fixes task output restoration in finally blocks to prevent silent failures
- Includes thoughts_token_count in completion token tracking
- Prevents LLM stop words from mutating across agents during parallel execution
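The stop-word bug in the last bullet is a classic shared-mutable-state hazard: when agents alias one stop list, one agent's append leaks into every other agent's config. A minimal sketch (hypothetical config class, not crewAI's actual code) showing the bug and the per-agent copy that fixes it:

```python
class LLMConfig:
    """Illustrative shared LLM configuration."""

    def __init__(self, stop):
        self.stop = stop

shared = LLMConfig(stop=["\nObservation:"])

# Buggy: agent A aliases the shared list, so its append mutates
# shared.stop and leaks into every agent using this config.
agent_a_stop = shared.stop
agent_a_stop.append("\nFinal Answer:")

# Fixed: agent B takes an independent copy before customizing it,
# leaving the shared config untouched.
agent_b_stop = list(shared.stop)
agent_b_stop.append("\nAgentB:")
```

In parallel runs this isolation is what keeps one agent's stop tokens from truncating another agent's generations mid-response.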
Why It Matters
Improves reliability of autonomous AI agents working together in production pipelines.