Rights and risks in data protection and AI legislation

What is a rights-based or a risk-based regulatory model? Is rights-based regulation about to be replaced by risk-based regulation and, if so, what are the legal and societal consequences? These questions form the core of the research project, in which the EU's General Data Protection Regulation (GDPR) and the proposed regulation on artificial intelligence (AI) will be analyzed as examples of different ways of legislating on elusive, far-reaching contemporary and future challenges.

Through a critical close reading of the two pieces of legislation and related case law, the study will map ongoing trends at the EU level, where fundamental rights have gained an increasingly important role in legislation in recent decades, but where a shift in the discourse can now be discerned towards controlling risks rather than protecting rights, not least in the AI context.

A further task within the study is to investigate which national parallels exist to rights- and risk-based regulation (the Swedish Public Access to Information and Secrecy Act, for example, can be said to embody both approaches), and to discuss the challenges facing the Swedish legal system in meeting the EU legislation now being developed. Central to the study is the interplay between the protection of fundamental rights and mechanisms for governance and oversight, and how these legal structures affect individuals and other actors at the societal level.