Netherlands formulates vision to address risks of GenAI

There are numerous risks and challenges associated with generative AI – which is why the Dutch government has formulated a vision document for its responsible and safe use in the Netherlands

The Netherlands has the ambition to be a leader in Europe in applying and regulating safe and equitable generative AI (GenAI). To achieve this, state secretary for digitalisation Alexandra van Huffelen, minister of economic affairs and climate Micky Adriaansens, and minister of education, culture and science Robbert Dijkgraaf have sent a government-wide vision on generative AI to the House of Representatives (Tweede Kamer), setting out recommendations for the cabinet.

GenAI, according to Van Huffelen, Adriaansens and Dijkgraaf, must serve to increase human welfare, prosperity, sustainability, justice and security. They say it is essential that everyone can participate in the digital age, everyone can trust the digital world and everyone has control over their digital lives.

That this is not yet a given is evident from research conducted by the Rathenau Research Institute at the request of the Ministry of the Interior and Kingdom Relations.

GenAI complicates known risks

“It is a real possibility that current and announced policies, including the upcoming European AI [artificial intelligence] regulation, are insufficient to counter the risks of GenAI systems,” the Rathenau researchers argue.

“Politicians and policy-makers need to work now to counter the risks of GenAI, but it will take time before policies will be effective” 
Linda Kool, Rathenau Research Institute

According to the researchers, the rise of generative AI amplifies known risks of digitisation, such as discrimination and insecurity. However, the technology also poses new risks to intellectual property and human development. 

“The rise of generative AI systems reinforces and complicates known risks of digitisation. The production of disinformation becomes easier, and data protection becomes more complicated. And while it was already difficult to obtain transparency about the operation of AI systems, with the more complex generative AI systems, it is virtually impossible,” the Rathenau report states. 

The influence of some large technology companies was already strong in social domains such as healthcare and education, but is expanding further into science. GenAI pressures democratic processes such as access to knowledge, news coverage and public debate, making it even more challenging to gain democratic control over technology.

“Politicians and policy-makers need to work now to counter the risks of GenAI, but it will take time before policies will be effective,” research coordinator Linda Kool argues. “Until then, every citizen and organisation faces the question of whether they can use the technology responsibly at this time.” 

Worldwide inequality

“Responsibly” is the crucial word in this phrase, because many people and organisations are already making full use of GenAI, especially since the introduction of ChatGPT almost a year and a half ago.

According to software supplier SAP, without adequate guidance and ethical standards, the technology could lead to great inequality worldwide. During this year’s World Economic Forum, the company called for coordinated action to make the benefits of AI accessible to all.

“Companies like SAP, non-governmental organisations and intergovernmental organisations must take a leadership role in ensuring that no one is left behind in AI advances,” Julia White, chief marketing and solutions officer at SAP, told the forum. “SAP is committed to realising this vision through strategic initiatives and collaborations to advance inclusive and ethical AI applications worldwide.”

Employee concerns

Research firm McKinsey estimates that generative AI could provide up to $4.4tn in economic benefits annually and enable the automation of some 70% of business activities. Accenture research shows that many employees are concerned about this. Remarkably, management teams mostly do not share those concerns.

“The gap in perception between employees and executives about the impact of AI is remarkable,” says Rob Knigge, managing director of Accenture in the Netherlands.

“On the one hand, we see that employees are aware of both the opportunities and challenges AI brings. On the other hand, leaders need more understanding and skills to guide this transformation. It is essential that leaders recognise the emotional and professional impact of GenAI on their employees and actively develop strategies to create a positive environment in which everyone can thrive.”

Opportunities and risks of GenAI

The Dutch government recognises the dangers and risks. “The realisation of public values and fundamental rights such as transparency, privacy, autonomy and non-discrimination may come under pressure from irresponsible development and the use or misuse of GenAI,” reads the vision document by Van Huffelen, Adriaansens and Dijkgraaf.

The wide availability of the technology, and the scale and pace at which it is currently developing, require a future-proof vision.

Their vision was presented to the House of Representatives in January 2024. It recognises the many opportunities and possibilities of GenAI for Dutch society. “For example, it can contribute to making medical diagnoses adequately and efficiently, improving care and contributing to medical research,” the vision states.

“Economists predict productivity growth, not only for large companies, but also for SMEs, as applications make tasks such as performing financial analysis and legal processes available in a cost-efficient manner. In the cultural sector, generative AI supports the creative process, including when creating marketing content or generating descriptions of works of art.”

The drafters of the vision document see that generative AI can lead to productivity growth and that new roles and responsibilities within organisations and the economy will emerge due to the technology’s deployment.

Yet Van Huffelen, Adriaansens and Dijkgraaf also warn of the negative impact of generative AI on public values such as non-discrimination, privacy and transparency. “As the Netherlands, we largely depend on language models from non-European countries. The challenge is, therefore, to create a market in which GenAI applications are offered that comply with all Dutch and European values and laws,” they say.

Principles and lines of action

The Dutch government has been working on AI policy for some time. This is prioritised because AI is a key technology, and related public values must be safeguarded. Moreover, the drafters of the vision document have expressed their ambition to realise a robust AI ecosystem in the Netherlands and the European Union (EU) in which plenty of innovation can take place with responsible GenAI.

“We do this by creating preconditions for its development and use while maintaining our digital open strategic autonomy. This is why the cabinet is focusing on four principles: Generative AI in the Netherlands is developed and applied safely (1); is developed and applied in a just manner (2); serves human welfare and autonomy (3); and contributes to sustainability and our prosperity (4).”

Action is needed to deploy GenAI in the Netherlands responsibly. The vision document describes six lines of action:

  1. Cooperation.
  2. Closely following all developments.
  3. Designing and applying laws and regulations.
  4. Increasing knowledge and expertise.
  5. Innovating with generative AI.
  6. Solid and clear supervision and enforcement.

“For the entire Dutch society to benefit from generative AI, it is important for the Netherlands, as part of the EU, to be at the wheel itself,” states the vision document. The cabinet, therefore, emphasises the importance of continuing to monitor and analyse the developments and consequences of GenAI.
