CECIIS 2023 keynote speakers announced

18 May, 2023
We are very happy to announce that this year's keynote speakers at CECIIS 2023 are Damir Bogdan, CEO of uptownBasel Infinity from Switzerland, and Prof. Lode Lauwaert, PhD, from Belgium.
Damir Bogdan

Damir Bogdan, CEO of uptownBasel Infinity

Damir Bogdan, CEO of uptownBasel Infinity, heads QuantumBasel, a Center of Excellence for quantum computing and AI. uptownBasel is an innovation campus in Basel, Switzerland, with global outreach and partnerships. Damir is also CEO of Actvide AG, a consulting company specialized in transformation in the digital age, leading C-level executives into the new future. He is active in Switzerland as well as in Silicon Valley. Part of his work is organizing strategy workshops and global immersions to the US for executives. He is engaged at Plug and Play, Silicon Valley's largest innovation platform.
Damir is a board member of multiple organizations in the healthcare, high-tech, and industry sectors. He is Senior Advisor at the IWI, University of St. Gallen, and a member of several Swiss startup committees and juries. He is a sought-after keynote speaker and panellist on digital transformation, cultural change, disruption, and startups.
Education: Executive MBA, SUNY New York; Swiss Federal Diploma for IT Management; Leadership Certificate, London Business School; certificates for "Disruptive Strategies" at Harvard and "AI Implications on Business Models" at MIT; alumnus of Singularity University in Silicon Valley.



The theme of his keynote will be announced here soon.


Lode Lauwaert

Prof. Lode Lauwaert, PhD

Lode Lauwaert is professor of technology at KU Leuven (Belgium), where he also holds the Chair in Ethics and AI. His current research examines AI from the perspectives of both environmental ethics and cross-cultural ethics. More info about Prof. Lauwaert can be found here.



Ethics of risk in the context of AI

Some argue that the use of AI creates a responsibility gap: one never has full control over AI, while direct control is a necessary condition for responsibility. Many scholars have criticized this line of reasoning, arguing that direct control is desirable but not necessary for responsibility. Moreover, people can be held responsible even when they lack direct control over the system, because they are (or should be) aware of the risk they take when using AI. However, the notion of risk in the context of AI is underdeveloped. The talk will therefore explore that notion and ask whether risk assessment changes in the context of AI.