Yesterday morning OpenAI unveiled the new generation of GPT, the AI that powers both ChatGPT and Bing AI. Here are three new features in GPT-4:
GPT-4 can see
The new generation of GPT has gained "eyes": GPT-4 can describe, categorize and analyze input images. In one of the examples OpenAI showed, GPT-4 was fed an image of a bunch of helium balloons tied to a weight on the ground and asked, "What happens if the strings are cut?" It responded, "The balloons would fly away."
The image analysis in GPT-4 can be used for more than answering simple questions. In the app Be My Eyes, the AI is used to help the visually impaired by describing what the phone's camera sees; in some of the examples shown, it read out a restaurant menu and gave directions inside a gym.
GPT-4 even managed to transform a simple sketch of what an admittedly simple website would look like into HTML code that produced the desired site.
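For readers curious what image input looks like in practice, here is a minimal sketch of how the balloon question could be sent through OpenAI's Python SDK. The model name and image URL are placeholder assumptions for illustration, not something OpenAI showed in the presentation.

```python
# Minimal sketch (illustrative, not from OpenAI's demo) of sending an image
# plus a question to a vision-capable GPT-4 model via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable GPT-4 model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What happens if the strings are cut?"},
                # placeholder image URL
                {"type": "image_url", "image_url": {"url": "https://example.com/balloons.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```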
GPT-4 can write more
The new generation of GPT can also handle much more text. The regular ChatGPT handles inputs of around 3,000 words and can generate roughly as much. GPT-4 will be able to handle up to 25,000 words, both in what it takes as input and in what it generates.
The AI has also become more creative, and should now be better at collaborating with users and learning their way of writing to produce, for example, song lyrics and movie scripts. As an international colleague put it, "Say a little prayer for those who moderate self-publishing on Kindle."
GPT-4 is smarter
The question of whether GPT-4 is actually smart, or