OpenAI Launches the Latest Generation of Its AI Language Model: GPT-4. The new model makes up fewer answers and can work with both text and images.
OpenAI, the company behind ChatGPT, has announced a new version of its language model. GPT-4 is, according to the makers, more creative and “human” than previous versions and can solve complex problems more accurately. In addition, the AI can process input from both images and text, but it only generates text.
The makers point out that GPT-4 still has many of the problems that previous versions had: the AI can still make up information, and it can generate threatening or hurtful language. Those last two issues surfaced in recent months, when the general public had a chance to test ChatGPT. Moreover, in ordinary conversations the difference between GPT-3.5 and GPT-4 is quite subtle.
Most striking is the new image-processing capability. GPT-4 can recognize images and, for example, recommend meals based on a photo of what’s in your fridge.
The new model can be tested by people with a subscription to ChatGPT Plus, the paid service OpenAI offers for about twenty dollars a month. If you want to see the model in action without a subscription, you can also try the Bing chatbot.
The company announced that Microsoft’s search engine already runs on a modified version of the GPT-4 model. The Bing chatbot creates summaries and writes answers to questions based on search results. There is a waiting list for what is officially still a beta test.
In addition to Microsoft, OpenAI partners with a few other companies, including the financial startup Stripe, which uses GPT-4 to scan business websites. The language-learning app Duolingo incorporates the AI into its courses, and companies such as Morgan Stanley and Khan Academy have already tested GPT-4 for automatic summaries and training.