fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation
New framework provides principle-level explanations for AI decisions and validates them against multiple stakeholder viewpoints.
Researchers Abeer Dyoub and Francesca A. Lisi have introduced fEDM+, a significant extension to their previous fuzzy Ethical Decision-Making (fEDM) framework. Building on their risk-based ethical reasoning architecture grounded in fuzzy logic, fEDM+ addresses two critical limitations of the original model: the lack of principle-level explainability and insufficient robustness under ethical pluralism. The framework is designed to serve as an oversight and governance layer for ethically sensitive AI systems, moving beyond simple rule-based approaches to incorporate formal verification through Fuzzy Petri Nets while enhancing interpretability.
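To make the verification idea concrete, below is a minimal sketch of a generic fuzzy Petri net inference step of the kind such frameworks use to check when a rule may fire. The announcement does not specify the fEDM+ encoding, so the place names, threshold, and certainty factor here are illustrative assumptions rather than the authors' formalization.

```python
from dataclasses import dataclass, field

@dataclass
class FuzzyPetriNet:
    # Maps each place to the fuzzy truth degree of its token (0.0 = absent/false).
    marking: dict = field(default_factory=dict)

    def fire(self, inputs, output, threshold, certainty):
        """Fire one rule-transition if every input degree meets the threshold.
        The output token's degree is min(inputs) scaled by the rule's certainty
        factor, a common fuzzy Petri net reasoning rule."""
        degree = min(self.marking.get(p, 0.0) for p in inputs)
        if degree < threshold:
            return False
        self.marking[output] = max(self.marking.get(output, 0.0), degree * certainty)
        return True

# Hypothetical ethical rule: high harm risk AND low consent -> escalate to a human.
net = FuzzyPetriNet({"harm_risk_high": 0.8, "consent_low": 0.7})
fired = net.fire(["harm_risk_high", "consent_low"], "escalate_to_human",
                 threshold=0.5, certainty=0.9)
print(fired, net.marking["escalate_to_human"])  # True, ~0.63
```

Because both the firing condition and the resulting token degree are explicit, properties such as reachability of an "escalate" place can be checked mechanically, which is what makes this representation attractive for oversight.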
The technical contribution consists of two major components. First, the Explainability and Traceability Module (ETM) explicitly links each ethical decision rule to underlying moral principles and computes weighted principle-contribution profiles for recommended actions, enabling transparent, auditable explanations. Second, the framework replaces single-referent validation with a pluralistic semantic validation system that evaluates decisions against multiple stakeholder referents, each encoding distinct principle priorities and risk tolerances. This allows principled disagreement to be formally represented rather than suppressed, increasing both robustness and contextual sensitivity while preserving the formal verifiability of the original fEDM architecture.
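The following sketch illustrates both mechanisms in miniature, assuming a toy principle set, hand-picked rule weights, and a simple weighted-average acceptance test; the actual fEDM+ rule base, membership functions, and aggregation operators are not given in this summary.

```python
from dataclasses import dataclass

# Fuzzy degrees in [0, 1] to which one recommended action satisfies each
# moral principle (hypothetical values for illustration).
action_satisfaction = {
    "nonmaleficence": 0.85,
    "autonomy": 0.60,
    "fairness": 0.70,
}

# Hypothetical weights linking the fired decision rule to the principles it
# draws on (the ETM's rule-to-principle links).
rule_weights = {"nonmaleficence": 0.5, "autonomy": 0.3, "fairness": 0.2}

def contribution_profile(satisfaction, weights):
    """Weighted principle-contribution profile: each principle's normalized
    share of the support for the recommended action (an ETM-style trace)."""
    raw = {p: weights[p] * s for p, s in satisfaction.items()}
    total = sum(raw.values()) or 1.0
    return {p: round(v / total, 3) for p, v in raw.items()}

@dataclass
class Referent:
    """One stakeholder viewpoint: its own principle priorities and the
    minimum weighted support it is willing to accept (risk tolerance)."""
    name: str
    weights: dict
    acceptance_threshold: float

    def accepts(self, satisfaction):
        support = sum(self.weights[p] * s for p, s in satisfaction.items())
        return support / sum(self.weights.values()) >= self.acceptance_threshold

# Pluralistic validation: evaluate the same action against several referents
# and report the disagreement instead of collapsing it into one verdict.
referents = [
    Referent("patient",   {"nonmaleficence": 0.3, "autonomy": 0.5, "fairness": 0.2}, 0.65),
    Referent("clinician", {"nonmaleficence": 0.6, "autonomy": 0.2, "fairness": 0.2}, 0.70),
    Referent("regulator", {"nonmaleficence": 0.4, "autonomy": 0.2, "fairness": 0.4}, 0.75),
]

print("contribution profile:", contribution_profile(action_satisfaction, rule_weights))
print("per-referent verdicts:", {r.name: r.accepts(action_satisfaction) for r in referents})
```

Keeping the per-referent verdicts separate, rather than averaging them into a single score, is what lets principled disagreement surface explicitly in the audit trail.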
- Adds Explainability and Traceability Module (ETM) that links decisions to moral principles with weighted contribution profiles
- Replaces single-referent validation with pluralistic semantic validation against multiple stakeholder viewpoints
- Preserves formal verifiability through Fuzzy Petri Nets while enhancing interpretability for AI governance
Why It Matters
fEDM+ provides auditable, transparent ethical reasoning for AI systems in sensitive domains such as healthcare and autonomous vehicles.