OpenAI releases GPT-4: image input, better fact hit rate and more
GPT-4 has been released and can now be used by Plus customers of the ChatGPT chatbot. Among other things, the new version understands larger contexts and image inputs.
The fourth generation of OpenAI's AI system GPT can understand and work with images. Text can be combined with visual input, making GPT a multimodal model - in contrast to GPT-3, which could only be fed text. However, this innovation is not yet fully implemented in ChatGPT: the latest version is already available with a paid Plus subscription, but the image upload function is not yet integrated - according to OpenAI, it will be added at a later date.

Besides this, GPT-4 is said to deliver more creative output and handle longer contexts than its predecessor; input may be up to 25,000 words long. Furthermore, according to OpenAI, GPT-4 generates significantly less unwanted content, and its hit rate for facts has increased by 40 percent compared to GPT-3.5. Comparative examples from internal tests backing this up can be seen on the OpenAI website. In terms of conversational ability, GPT-4 is said to differ only slightly from its predecessor.
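To illustrate what "text combined with visual input" looks like for developers, the sketch below assembles a chat-style request payload that pairs a text prompt with an image reference, following the shape of OpenAI's chat completions message format. The model name and field layout here are illustrative assumptions rather than a confirmed GPT-4 interface, and no request is actually sent.

```python
import json

def build_multimodal_request(prompt: str, image_url: str, model: str = "gpt-4") -> str:
    """Assemble a chat-completions-style JSON payload that mixes text and
    an image reference in a single user message. The field names follow
    OpenAI's chat message format; the model name is an illustrative
    assumption, and nothing is transmitted over the network."""
    payload = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    # A multimodal message carries a list of content parts:
                    # here, one text part and one image reference.
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }
    return json.dumps(payload)

if __name__ == "__main__":
    body = build_multimodal_request(
        "What is shown in this picture?",
        "https://example.com/photo.png",
    )
    print(body)
```

The point of the sketch is simply that image input does not require a separate endpoint in this style of API: the image travels as one more content part inside an ordinary chat message.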
By the way, Microsoft has been relying on an enhanced version of ChatGPT 3.5 for its Bing copilot, and the company has now confirmed what has long been rumoured: this enhanced version is in fact GPT-4, which has only now been officially unveiled. More information on GPT-4 and examples can be found on the GPT-4 product page.