This article was published on November 6, 2023

Netherlands building own version of ChatGPT amid quest for safer AI

Governments are trying to wrest control of generative AI from Big Tech


The Netherlands is building its own large language model (LLM) that seeks to provide a “transparent, fair, and verifiable” alternative to AI chatbots like the immensely popular ChatGPT. 

It seems that everyone and their dog is developing their own AI chatbot these days, from Google’s Bard and Microsoft’s Bing Chat to the recently announced Grok, a new ChatGPT rival released by Elon Musk’s xAI company this week. 

But as Silicon Valley pursues AI development behind closed doors, authorities are left in the dark as to whether these LLMs adhere to any sort of ethical standards. The EU has already warned AI companies that stricter legislation is coming.

In contrast, the new Dutch LLM, dubbed GPT-NL, will be an open model, allowing everyone to see how the underlying software works and how the AI comes to certain conclusions, said its creators. The AI is being developed by research organisation TNO, the Netherlands Forensic Institute, and IT cooperative SURF.

“With the introduction of GPT-NL, our country will soon have its own language model and ecosystem, developed according to Dutch values and guidelines,” said TNO. Financing for GPT-NL comes in the form of a €13.5mn grant from the Ministry of Economic Affairs and Climate Policy — a mere fraction of the billions used to create and run the chatbots of Silicon Valley.

“We want to have a much fairer and more responsible model,” said Selmar Smit, founder of GPT-NL. “The source data and the algorithm will become completely public.” The model is aimed at academic institutions, researchers, and governments as well as companies and general users.   

Over the next year, the partners will focus on developing and training the LLM, after which it will be made available for use and testing. GPT-NL will be hooked up to the country’s national supercomputer, Snellius, which provides the processing power needed to train and run the model.

Perhaps quite appropriately, the launch of GPT-NL last week coincided with the world’s first AI Safety Summit. The star-studded event, which took place at Bletchley Park in the UK, considered ways to mitigate AI risks through internationally coordinated action. Just days before the summit, UK Prime Minister Rishi Sunak announced the launch of an AI chatbot to help the public pay taxes and access pensions. However, unlike GPT-NL, the technology behind the experimental service is managed by OpenAI — symbolic of the UK’s more laissez-faire approach to dealing with Big Tech, as compared to the EU.

Elsewhere, the United Arab Emirates has launched a large language model aimed at the world’s 400 million-plus Arabic speakers, while Japanese researchers are building their own version of ChatGPT because they say AI systems trained on foreign languages cannot grasp the intricacies of Japanese language and culture. In the US, the CIA is even making its own chatbot. 

Clearly, then, governments and institutions across the world appear to be realising that when it comes to AI models, one-size-fits-all probably isn’t a good thing.
