Studying the effect of uncertainty on Machine Ethics. As autonomous systems take more responsibility in the real world, their decisions must reflect our values while handling moral and outcome uncertainty.
Find me on GitHub, Google Scholar, the University of Manchester and LinkedIn.
Towards Responsibly Non-Compliant Machines (2026)
Marija Slavkovik, Marie Farrell, Louise Dennis, Michael Fisher, Simon Kolker, Emily C. Collins
[Paper Upcoming]
Uncertain Machine Ethics Planning (2025)
Simon Kolker, Louise Dennis, Ramon Fraga Pereira, and Mengwei Xu
In the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025)
Applying Ethical Decision Making in Space Missions (2024)
Simon Kolker, Louise Dennis, Ramon Fraga Pereira, and Mengwei Xu
In the New Ideas and Emerging Results Track at SMC-IT/SCC 2024
Selecting Ethical Actions by Retrospection on Hypothetical Outcomes (2023)
Simon Kolker, Louise Dennis, Ramon Fraga Pereira, and Mengwei Xu
In the International Workshop on Computational Machine Ethics (CME) 2023
Machine Ethical Decisions with Hypothetical Retrospection (2023)
Simon Kolker, Louise Dennis, Ramon Fraga Pereira, and Mengwei Xu
Philippa Foot's famous trolley problem illustrates how our moral intuitions can conflict: a runaway trolley will kill five people unless you divert it onto a side track, where it will kill one. By remaining a bystander, you have not directly caused harm, though you may have allowed it to occur. Subtle variations of the problem lead many people to different judgments.
As we develop machines whose actions have moral consequences, we must consider the kind of morality they should follow.
What if the scenario isn't so simple? What if the outcomes are uncertain, or the decisions are sequential?