From nothing to important actions: agents that act morally
How a hypothetical tool reveals universal moral truths across all conscious beings...
This philosophical piece introduces the 'consciousness device,' a hypothetical tool that lets one conscious being temporarily experience another's subjective reality. The author uses it to argue against relativism about perceptual and moral experience. For example, even if someone's visual world contains no shades of grey, linking to a person who sees darkness would reveal that some experiences objectively look darker. Similarly, creatures with only neutral feelings would, via the device, recognize that some valenced experiences feel better than others. The argument suggests that moral facts about well-being are not mere cultural constructs but are discoverable through shared consciousness.
The post then pivots to reactive agents: AIs or beings that can feel and act on valenced experiences. If moral truths are accessible through the consciousness device, then any agent capable of experiencing valence could, in principle, recognize that some actions produce better or worse experiences. This has profound implications for AI alignment: if we could build systems that 'link' to human experiences (e.g., through advanced empathy models or direct neural interfaces), they might derive moral principles autonomously. The author hints that this could yield agents that act morally not from rules but from direct understanding of suffering and flourishing. The piece is dense with implications for AI ethics, suggesting that moral realism might be empirically grounded in shared conscious experience rather than abstract reasoning.
- The 'consciousness device' lets one being temporarily experience another's subjective reality, revealing universal perceptual and moral truths
- Even beings with limited experiences (e.g., no perception of grey) come to recognize objective differences in darkness or valence after using the device
- Suggests moral facts about well-being are empirically discoverable via shared experience, not just cultural constructs
Why It Matters
Could ground AI alignment in objective moral facts via shared conscious experience, not just human preferences.