About this Workshop
The US has been leading rapid AI advances without comprehensive AI legislation. As AI becomes more powerful, perceived AI risks also rise rapidly; some are real, with documented incidents, while others are imagined risks that might materialize in the future. Legislation is the first line of defense as a structural tool for managing AI risks, and it also sets the framework for organizational AI risk management. With the EU AI Act taking effect on August 1, 2024 but still working through implementation challenges, the UK having published its policy paper "AI regulation: a pro-innovation approach", and California's AI legislation in its final stages in the US, the debate about AI legislation continues to intensify worldwide.
AI is an extension of the human brain: it helps humans make decisions, and it also makes some decisions independently. AI decisions interact with humans and affect our lives just as human decisions do. Through the lens of decisions, we propose to examine AI risks in the context of AI decisions and the AI decision-making process, governing AI decisions and actions much as we govern human decisions and actions: through relevant existing ethics, laws, and regulations, plus a minimal set of regulations tailored to AI-specific risks.
From a decision perspective, we compare the advantages and disadvantages of the EU, UK, and US legislative approaches, provide a new way to analyze current legislation, and pave the way for future legislation.
The decision-based approach can also complement the risk-based EU AI Act in designing and implementing the effective and efficient risk management strategies required of AI developers, deployers, and users, better addressing risks to the safety, security, health, and fundamental rights of the public.