World2Rules: A Neuro-Symbolic Framework for Learning World-Governing Safety Rules for Aviation
A neuro-symbolic AI framework that learns formal safety rules from messy real-world aviation data and crash reports.
A team from Carnegie Mellon University's Robotics Institute has introduced World2Rules, a novel neuro-symbolic AI framework designed to tackle a critical problem in safety engineering: automatically discovering the complex, context-dependent rules that govern real-world systems like aviation. The framework learns from messy, multimodal data sources including standard operational records and sparse, noisy aviation crash and incident reports. Its core innovation is a hierarchical, reflective reasoning process that treats neural models as 'proposal mechanisms' for candidate facts and uses inductive logic programming (ILP) as a rigorous verification layer. This architecture enforces consistency across data points and rule components, filtering unreliable evidence and pruning unsupported hypotheses to limit error propagation from imperfect neural extractions.
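The propose-then-verify pattern described above can be sketched in a few lines. Everything here is an illustrative assumption: the `Fact` schema, the confidence threshold, and the mutual-exclusion constraint are invented stand-ins, not the paper's actual API or rule language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    predicate: str        # e.g. "icing", "low_altitude" (illustrative)
    incident_id: str      # which report the fact was extracted from
    confidence: float     # the neural extractor's confidence score

# Stand-in for a neural model that reads a raw report and proposes facts.
def propose_facts(report: dict) -> list[Fact]:
    return [Fact(p, report["id"], c) for p, c in report["mentions"]]

# Symbolic verification layer: keep only confident facts that do not
# violate a (hypothetical) mutual-exclusion constraint.
MUTUALLY_EXCLUSIVE = {("low_altitude", "cruise_altitude")}

def verify(facts: list[Fact], threshold: float = 0.7) -> list[Fact]:
    confident = [f for f in facts if f.confidence >= threshold]
    preds = {f.predicate for f in confident}
    for a, b in MUTUALLY_EXCLUSIVE:
        if a in preds and b in preds:
            # Inconsistent evidence: drop the weaker proposal.
            weaker = min((f for f in confident if f.predicate in (a, b)),
                         key=lambda f: f.confidence)
            confident.remove(weaker)
    return confident

report = {"id": "AV-001",
          "mentions": [("icing", 0.92), ("low_altitude", 0.81),
                       ("cruise_altitude", 0.55), ("gear_down", 0.40)]}
kept = verify(propose_facts(report))
print([f.predicate for f in kept])  # → ['icing', 'low_altitude']
```

The key design idea this sketch mirrors is that the neural component never writes facts directly into the rule-learning stage; every proposal must survive the symbolic consistency check first, which is what limits error propagation from noisy extractions.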
World2Rules outputs its findings as compact, interpretable first-order logic rules that explicitly characterize unsafe world configurations—for example, defining hazardous combinations of weather, altitude, and system states. In evaluations on real-world aviation safety data, the framework significantly outperformed existing methods, achieving a 23.6% higher F1 score than purely neural baselines and a 43.2% higher score than simpler neuro-symbolic approaches. This performance boost comes while maintaining the formal verifiability and explainability that pure neural networks lack, making the resulting rules directly usable for safety certification, risk analysis, and the design of next-generation autonomous systems where guaranteed safety is paramount.
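To make the rule format concrete, here is a purely illustrative example of the kind of first-order rule described above, rendered as an executable predicate over a world state. The predicates, thresholds, and variable names are invented for illustration; the paper's actual learned rules are not reproduced here.

```python
# Illustrative rule shape (invented, not from the paper):
#   unsafe(S) :- weather(S, icing), altitude(S) < 3000, pitot_heat(S, off).

def unsafe(state: dict) -> bool:
    """A first-order safety rule rendered as a predicate over a state."""
    return (state["weather"] == "icing"
            and state["altitude_ft"] < 3000
            and not state["pitot_heat_on"])

print(unsafe({"weather": "icing", "altitude_ft": 2500,
              "pitot_heat_on": False}))  # → True
print(unsafe({"weather": "clear", "altitude_ft": 2500,
              "pitot_heat_on": False}))  # → False
```

Because each rule is a conjunction of simple, named conditions, it can be audited clause by clause during certification rather than treated as an opaque learned score.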
- Combines neural networks for pattern extraction with symbolic logic (inductive logic programming) for verification, creating a robust neuro-symbolic architecture.
- Learns from real-world, multimodal aviation data including noisy crash reports, using hierarchical reasoning to filter inconsistencies and prune bad hypotheses.
- Outperforms purely neural baselines by 23.6% in F1 score (and simpler neuro-symbolic approaches by 43.2%), producing interpretable, formal logic rules suitable for safety-critical system design and analysis.
Why It Matters
Automates and improves the creation of verifiable safety rules for autonomous vehicles and critical infrastructure, moving beyond error-prone manual specification.