Denmark is at the forefront of bringing artificial intelligence under structured regulation, and it recently launched a new AI supercomputer.
EU law on AI
In August this year, the EU’s Artificial Intelligence Act, the world’s first major legislation aimed at regulating artificial intelligence technologies, entered into force. The law takes a risk-based approach, classifying AI applications into different risk levels (a brief illustrative sketch follows the list):
- Unacceptable risk: AI systems deemed to pose a significant threat to security or fundamental rights are prohibited.
- High risk: Applications in critical industries such as healthcare, finance, and utilities must meet rigorous requirements, including transparency, data governance, and human oversight.
- Limited and minimal risk: These categories carry fewer obligations, focusing mainly on transparency and user information.
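To make the tiering concrete, here is a minimal sketch of how an organisation might model the tiers and their headline obligations internally. It is purely illustrative: the tier labels, obligation lists and identifiers (`RiskTier`, `obligations_for`) are simplifications invented for this example, not the Act’s legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described above (illustrative labels, not legal text)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Headline obligations per tier, as summarised in this article; the Act itself
# defines them in far more detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: ["transparency", "data governance", "human oversight"],
    RiskTier.LIMITED: ["transparency", "user information"],
    RiskTier.MINIMAL: ["no specific obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    # e.g. a clinical decision-support tool would typically be treated as high risk
    print(obligations_for(RiskTier.HIGH))
```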
On Wednesday, Denmark presented a national framework to help organisations comply with the EU AI Act. Led by IT consultancy Netcompany and endorsed by companies such as Microsoft, the framework offers guidance for both the public and private sectors on the responsible implementation of AI. It emphasizes secure data management, risk mitigation, and adherence to best practices, supporting a cohesive approach to integrating AI across industries.
From weight loss to supercomputers
Denmark’s first AI supercomputer, Gefion (named after a Norse goddess), became operational in October this year.
Built on NVIDIA’s DGX SuperPOD platform and powered by 1,528 NVIDIA H100 Tensor Core GPUs, the supercomputer cost approximately 700 million Danish kroner, with primary funding from the Novo Nordisk Foundation, which committed 600 million Danish kroner. The non-profit foundation holds a majority stake in Novo Nordisk A/S, a leading pharmaceutical company known for developing the highly successful diabetes drug Ozempic and the weight-loss drug Wegovy.
Additional funding of 100 million Danish kroner comes from the Danish Export and Investment Fund (EIFO), which holds a 15% stake in the supercomputer’s operating entity, the Danish Center for Artificial Intelligence Innovation (DCAI).
The system is hosted at DCAI in Copenhagen, in a sustainable data center powered entirely by renewable energy.
Gefion is intended to support pilot projects in Danish academia and industry. Select organisations, including the Danish Meteorological Institute and the University of Copenhagen, are already leveraging Gefion for cutting-edge research. The supercomputer will allow the meteorological institute to reduce weather forecasting times from hours to minutes, while university researchers aim to simulate quantum circuits approaching “quantum supremacy”.
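As a rough illustration of why such work needs a machine of Gefion’s scale, the sketch below estimates the memory required for brute-force statevector simulation of a quantum circuit, which grows exponentially with qubit count. The statevector approach, the qubit counts and the function name are assumptions chosen for illustration, not a description of the Copenhagen researchers’ actual methods.

```python
# Back-of-the-envelope sizing for brute-force statevector simulation of a quantum
# circuit: the state of n qubits requires 2**n complex amplitudes in memory.
# These figures are illustrative assumptions, not the researchers' actual method.

BYTES_PER_AMPLITUDE = 16  # complex128: two 8-byte floats per amplitude


def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to hold a full statevector for n_qubits qubits."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE


if __name__ == "__main__":
    for n in (30, 40, 45, 50):
        gib = statevector_bytes(n) / 2 ** 30
        print(f"{n} qubits -> {gib:,.0f} GiB")
    # 30 qubits fit on a single GPU (16 GiB); 40 qubits already need many nodes
    # (16,384 GiB); around 50 qubits the requirement (~16 million GiB) exceeds
    # even a supercomputer, which is roughly where "quantum supremacy" claims live.
```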
Jensen Huang, CEO of NVIDIA, hailed Gefion as an “intelligence factory” during its launch, highlighting its role in promoting a sovereign AI infrastructure for Denmark. According to Mads Krogsgaard Thomsen, CEO of the Novo Nordisk Foundation, Gefion will not only strengthen Danish research and industry, but will also position the country as a leader in developing AI-based solutions to global challenges.
Sovereign AI
Gefion exemplifies Denmark’s investments in “sovereign artificial intelligence”.
Rather than relying on AI systems developed abroad, a country pursuing sovereign AI builds systems that reflect its own values, language and needs, giving it greater control over data privacy, security and alignment with local policies. This approach is tied to concerns about digital sovereignty, as nations seek to maintain control over critical infrastructure that supports everything from healthcare to security, while promoting innovation within their borders.
Amnesty International’s concerns
In related news, Amnesty International yesterday expressed concern over Denmark’s use of artificial intelligence in its welfare system.
The organization reports that Udbetaling Danmark, the agency responsible for social benefits, uses artificial intelligence algorithms to detect fraud, which could inadvertently lead to discrimination against marginalized groups. For example, the “Model Abroad” algorithm flags individuals based in part on their nationality, potentially violating the right to non-discrimination.
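One common way to examine a concern like this is a simple disparate-impact check that compares how often a model flags people in different groups. The sketch below is entirely hypothetical: the data, group labels and function are invented for illustration and do not represent Udbetaling Danmark’s system or the “Model Abroad” algorithm.

```python
from collections import defaultdict


def flag_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, was_flagged) pairs. Returns the flag rate per group."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}


if __name__ == "__main__":
    # Invented data: group_a is flagged three times as often as group_b.
    sample = ([("group_a", True)] * 30 + [("group_a", False)] * 70
              + [("group_b", True)] * 10 + [("group_b", False)] * 90)
    rates = flag_rates(sample)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"ratio={ratio:.2f}")  # a low ratio suggests disparate impact
```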
Amnesty International calls for greater transparency and oversight to ensure that applications of artificial intelligence do not perpetuate bias.