OpenAI releases GPT-4: Image input, better fact hit rate and more


Martin Jud
15.3.2023
Translation: machine translated

GPT-4 has been released and is now available to Plus subscribers of the ChatGPT chatbot. Among other things, the new version handles longer contexts and accepts image input.

The fourth generation of OpenAI's GPT system can understand and process images. Text can now be combined with visual input - GPT has become a multimodal model. This is in contrast to GPT-3, which accepts text only. However, this innovation has not yet fully arrived in ChatGPT: you can already use the chatbot with the latest model via a paid Plus subscription, but the upload function for images is not yet integrated. According to OpenAI, it will be added at a later date.
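Once image upload is available, a multimodal request could combine text and an image reference in a single message. The following Python sketch is an illustration only - the field names follow OpenAI's chat-style message format, but the model name and image URL are assumptions, not confirmed details from the article:

```python
def build_multimodal_message(text, image_url):
    """Combine a text prompt with an image reference in one user message.

    Field names are assumptions modelled on OpenAI's chat message format;
    they are illustrative, not an official specification.
    """
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# Hypothetical request payload pairing a question with a picture
payload = {
    "model": "gpt-4",  # illustrative model name
    "messages": [
        build_multimodal_message(
            "What is shown in this picture?",
            "https://example.com/photo.jpg",  # placeholder image URL
        )
    ],
}
```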

In addition, GPT-4 is said to deliver noticeably more creative output and to understand longer contexts than its predecessor: input can now be up to 25,000 words long. According to OpenAI, GPT-4 also generates significantly less unwanted content, and its factual accuracy has improved by 40 per cent compared with GPT-3.5. Comparative examples from internal tests backing this up can be seen on the OpenAI website. In terms of conversational ability, GPT-4 is said to differ only slightly from the previous version.
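Strictly speaking, GPT models measure input in tokens rather than words, but as a rough illustration of the new 25,000-word limit, a simple pre-check on the client side might look like this (the function name and the word-based count are illustrative assumptions, not part of any official API):

```python
def within_word_limit(text, limit=25_000):
    """Rough client-side check against the reported 25,000-word input limit.

    Splits on whitespace as a crude word count; real models count tokens,
    so this is only an approximation for illustration.
    """
    return len(text.split()) <= limit


# A short prompt fits comfortably...
print(within_word_limit("Summarise this article for me."))
# ...while a very long document would need to be split up first.
print(within_word_limit("word " * 30_000))
```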

Incidentally, Microsoft has already been relying on an extended GPT model for its Bing Copilot. The company has now confirmed what had long been rumoured: that extended model is in fact GPT-4, which has only now been officially unveiled. You can find more information about GPT-4 and further examples on the GPT-4 product page.

Cover image: shutterstock


I find my muse in everything. When I don’t, I draw inspiration from daydreaming. After all, if you dream, you don’t sleep through life.

