Elephants, Ethics, and AI: Why Autonomous Decisions Need Human Wisdom
Connecting classic dilemmas to modern contexts.
Event Overview
In this session, you’ll explore how moral psychology shapes the split-second decisions of autonomous vehicles.
Digging into ethical frameworks such as deontology, utilitarianism, and virtue ethics through the lens of empirical moral psychology, including research on the trolley problem, you will unpack how bias, culture, and values get coded into machines. You will also discover why sociocentric and individualistic communities demand different moral rules, and how Haidt's elephant-and-rider metaphor helps explain the emotional side of AI.
We’ll touch on nudge theory, the butterfly effect, and the shifting role of humans: whether we stay ‘in the loop’ or drift ‘on the loop.’ It’s a call to rethink AI not just as technology, but as moral machinery.
This is just the start: if you want to dive deeper, the author’s book The Psychology of AI Decision Making is where the real conversation begins.
What you’ll learn:
- Autonomous vehicles must make split-second moral decisions, so understanding ethical frameworks based on rules, outcomes, and character really matters.
- Culture and bias shape how AI behaves. What feels right in one place might feel wrong in another.
- Human psychology helps explain why AI misses the mark: emotions, intuition, and values don’t translate neatly into code.
- Good governance must connect tech and ethics by building in transparency, cultural awareness, and human oversight from the start.
Who should attend:
Business Leaders & Tech Innovators, Data Scientists & Machine Learning Engineers, Software Architects & Product Managers, UX & HCI Specialists, Legal, Compliance & Governance Professionals, AI Ethicists, Educators & Students
Agenda (outline):
- Welcome & introductions
- AI, philosophy, ethics and morality
- Bias in AI
- Discussion: who has the right of way?
- Q&A