Ethics of risk in the context of AI
Some argue that the use of AI creates a responsibility gap: one never has full control over AI, while direct control is a necessary condition for responsibility. Many scholars have criticized this line of reasoning: direct control is desirable but not necessary for responsibility. Moreover, people can be held responsible even when they lack direct control over a system, because they are (or should be) aware of the risk they take when using AI. However, the notion of risk in the context of AI is underdeveloped. The talk will therefore explore that notion and ask whether risk assessment changes in the context of AI.
LinkedIn profile
CV & Research Summary
Lode Lauwaert is a professor of technology at KU Leuven (Belgium), where he also holds the Chair in Ethics and AI. His current research examines AI from the perspectives of both environmental ethics and cross-cultural ethics.
Research summary
Philosophy of Technology; Ethics; Philosophical Anthropology