Stronger Regulations for Algorithms and AI
Chair of the AP, Aleid Wolfsen, emphasized that algorithms have repeatedly caused major problems. As an example, he pointed to the childcare benefits scandal, in which automated systems wrongly flagged citizens as fraud suspects. According to Wolfsen, even five years on, the lessons of that scandal are clear, yet there has been no meaningful follow-up. He attributes this largely to the absence of strict regulations on the use of algorithms and artificial intelligence, as well as a lack of proper enforcement.
To monitor the risks associated with artificial intelligence, the authority has introduced a monitoring tool known as the "Barometer." The Barometer consists of nine components that measure possible outcomes of the use of artificial intelligence. Six months ago, it showed two warning signs: the registration of algorithms and AI systems was insufficient, and there was no overview of incidents involving algorithms and artificial intelligence.
Increasing Concern Over Governance and Public Safety
Six months later, both of these issues remain unresolved, and the picture has only grown more concerning. Two additional warning signs have emerged since the last reading of the Barometer: according to the AP, there is still a lack of governance over artificial intelligence and algorithms, as well as insufficient capacity to supervise their use.
Additionally, the authority has expressed grave concerns about the protections in place for children, many of whom use AI not only for schoolwork but also as a conversational partner. The authority worries that children may develop a dependency on AI without fully understanding the risks involved.
Increasing Risks Associated with the Misuse of AI
The privacy watchdog has identified a number of risks associated with the rapid advancement of artificial intelligence, including the uncontrolled spread of deepfakes, AI-driven fraud, and the psychological harm that chatbots can inflict. The AP also states that security safeguards cannot keep pace with the ever-evolving technology.
At the same time, the AP believes many organizations may be evading their responsibilities. In particular, many organizations that are required to register their AI systems are instead labelling them as algorithms, which are subject to lighter regulation.
Head of AI Oversight Joost van der Burgth explained that high-risk AI systems already exist in fields such as healthcare and crime detection. Beginning next year, organizations must meet a number of technical requirements, including documentation, risk assessment, and bias prevention, before they may deploy high-risk AI systems. According to van der Burgth, some organizations may be unwilling to register, either because they want to avoid attention or because they believe the requirement does not apply to them.