
ETH and EPFL present their own AI model

Samuel Buchmann
11.7.2025
Translation: machine translated

A new open-source language model from Switzerland supports over 1,000 languages and has 70 billion parameters, putting it in the same size class as Meta's Llama 3. It is intended to offer an alternative to proprietary LLMs.

ETH Zurich and EPFL have developed a fully open Large Language Model (LLM), due for release in late summer 2025. According to the media release, the model supports over 1,000 languages. It was trained on the «Alps» supercomputer at the Swiss National Supercomputing Centre (CSCS).

In contrast to proprietary LLMs, such as those from OpenAI or Anthropic, the Swiss model is built on transparency: source code, model weights and training data are all made available. This is rare in the industry. Although LLMs from Meta and DeepSeek are «open weight», they are not fully «open source», meaning the algorithms and training data remain under lock and key.


The ETH model will be released in two versions, with eight and 70 billion parameters. The latter is comparable to Meta's Llama 3, while OpenAI's GPT-4 is estimated to have around 1,800 billion parameters and Anthropic's Claude 4 Opus around 300 billion. The number of parameters is not the only measure of an LLM's performance, but it gives an indication. Proprietary models currently achieve the highest benchmark scores; open-source models, however, offer advantages in traceability, customisability and data control.

Data protection taken into account during training

Swiss data protection law, Swiss copyright law and the transparency obligations of the EU AI Act were taken into account in developing the LLM. According to a recent study by the project leads, there is practically no loss of performance on everyday tasks if web-crawling opt-outs are respected during data collection, meaning the training simply ignores certain web content.

The Swiss AI Initiative has access to the supercomputer «Alps» at CSCS.
Source: Swiss National Supercomputing Centre

The model will be published under the Apache 2.0 licence, which should make it accessible for both scientific and industrial applications. It is a result of the Swiss AI Initiative, launched by EPFL and ETH Zurich in December 2023. With over 800 researchers involved and access to more than 20 million GPU hours per year on the CSCS supercomputer, it is the world's largest open-science and open-source project on AI base models.

Header image: ETH Zurich


My fingerprint often changes so drastically that my MacBook doesn't recognise it anymore. The reason? If I'm not clinging to a monitor or camera, I'm probably clinging to a rockface by the tips of my fingers.

