AI Risk Report Summer 2024: turbulent rise of AI calls for vigilance by everyone
Artificial intelligence (AI) is developing at a rapid pace, but as a technology it is still in its infancy. Hence, there is a lot of experimentation, ranging from a ‘rat race’ in generative AI among big tech companies to the application of AI-based behaviour recognition systems in supermarkets and gyms in the Netherlands. However, adequate risk management of AI systems is not keeping up with this rapid development. For the Netherlands, this means not only that careful deployment of AI systems has to take priority, but also that society must be prepared for more AI-related incidents. This necessitates vigilance for AI risks among Dutch citizens, corporate leaders and legislators.
This appraisal constitutes the current AI risk assessment for the Netherlands (summer 2024) as conducted by the Dutch Data Protection Authority (Dutch DPA) in its latest AI & Algorithmic Risks Report Netherlands (ARR). As the coordinating oversight body on algorithms and AI, the Dutch DPA analyses AI-related risks and advises society, businesses, government and politics on steps to be taken.
Low trust
In the Netherlands, trust in AI is lower than in other countries. This is cause for concern, because people are likely to encounter the risks associated with AI systems in their daily lives, ranging from the misuse of generative AI for cyberattacks and deepfakes to new forms of privacy violations and the potential for discrimination and arbitrariness.
Aleid Wolfsen, chairman of the Dutch DPA: ‘These are turbulent times, which is understandable given the rise of an emerging systemic technology. While this offers a multitude of opportunities, for example in medical treatments and inclusive services, there are also well-known risks for people in the Netherlands. It is a silver lining that serious efforts are currently being made to establish AI regulations. This is in addition to existing requirements in areas like data protection, consumer protection and cybersecurity. There is also increasing awareness that responsible AI is labour-intensive and demands a lot from organisations. Nevertheless, we issue a warning that as long as organisations are uncertain about the extent to which they understand the risks of AI, they should be cautious in deploying AI systems.’
An example of a risk management measure that helps organisations to stay in control of the effects of AI systems is so-called random sampling. Many AI systems are used to profile and select people, such as in fraud investigations. For such applications, random sampling helps to detect and mitigate discrimination.
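The idea behind random sampling as a control measure can be illustrated with a minimal sketch: compare the group composition of the cases a selection system flags against a random sample drawn from the whole population, and treat a large gap as a signal of possible bias. The function names, data shapes and thresholds below are illustrative assumptions, not taken from the report.

```python
import random

def audit_selection(flagged, population, sample_size, group_of, seed=0):
    """Compare group rates in system-flagged cases against a random sample.

    flagged     -- cases selected by the AI system (illustrative)
    population  -- all cases the system could have selected
    group_of    -- function mapping a case to a group label
    Returns two dicts: group rates among flagged cases, and among the sample.
    A large gap for any group warrants closer investigation for bias.
    """
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    sample = rng.sample(population, sample_size)

    def group_rates(cases):
        counts = {}
        for case in cases:
            g = group_of(case)
            counts[g] = counts.get(g, 0) + 1
        total = len(cases)
        return {g: n / total for g, n in counts.items()}

    return group_rates(flagged), group_rates(sample)
```

In practice such a comparison would be one input to a broader review; the point is that the random sample gives an unbiased baseline against which the system's selections can be checked.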
Information provision under pressure
In this edition of the report, the Dutch DPA analysed, among other things, the risks of AI for online information provision, such as news. Through the AI systems behind social media and search engines, those platforms have a high degree of control over which information people see and, as a consequence, how they perceive reality. In addition, the rise of generative AI introduces new risks of misinformation and disinformation. Because AI applications can generate text, images, video and audio that cannot be distinguished from the real thing, people can no longer be certain that what they see or hear is accurate.
For this reason, it is important that people understand how recommendation systems work, and that there is an option to disable or adjust these recommender systems. Moreover, people should be able to assess the accuracy of information. This can be made possible, for example, by showing the sources of answers in AI search engines, by using AI to verify whether images and videos were generated by AI, and by watermarking AI-generated information.
Democratic control of AI
Based on a survey conducted among Dutch municipalities and other local democratic bodies, the Dutch DPA analysed the extent to which local councils and controlling bodies can exercise control over AI systems deployed by the government. The survey results show that Dutch municipalities currently have only a limited overview of their AI systems. In addition, council members doubt whether they have sufficient knowledge of AI, and it is apparent that only a few local audit offices occasionally investigate AI systems. The Dutch DPA notes that empowering municipal council members and other elected representatives is necessary in order to strengthen democratic control of the use of AI. This can be done, for example, by making knowledge accessible and creating audit obligations in the Netherlands.
National AI strategy
Finally, the Dutch DPA calls on the Dutch government to continue to prioritise the registration of algorithms by public authorities in a central national database. In addition, consideration should be given to extending the registration requirement for algorithms to semi-public organisations. This is particularly important in sectors such as education, healthcare, social housing and public transport, where AI is deployed in situations in which people are vulnerable or dependent.
In order to have clear benchmarks and standards for trustworthy AI, it is essential to minimise the extent to which frameworks are non-binding, ambiguous or non-measurable. It should also be prevented that frameworks lag behind or conflict with the current state of scientific insight. The Dutch DPA sees the entry into office of a new government in the Netherlands as a timely opportunity to reassess the Dutch AI strategy.