Prof. Lode Lauwaert, PhD.

Ethics of risk in the context of AI

Some argue that the use of AI creates a responsibility gap, because one never has full control over AI, while direct control is a necessary condition for responsibility. This line of reasoning has been criticized by many scholars: direct control is desirable but not necessary for responsibility. Moreover, people can be held responsible even if they lack direct control over the system, because they are (or should be) aware of the risk they take when using AI. However, the notion of risk in the context of AI remains underdeveloped. The talk will therefore explore that notion and ask whether risk assessment changes in the context of AI.
