OpenAI GPT-4 – Features, Release, And Details

It’s been pretty much impossible to avoid the discourse around ChatGPT and the GPT (Generative Pre-trained Transformer) models underneath it, but if you have somehow managed to miss everything that’s been said about this controversial AI, here’s the gist. ChatGPT is an AI language model that shows some pretty scarily intelligent behavior; it can create recipes from ingredients, for instance, or write poetry or songs to specifications provided by users.

Recently, ChatGPT creator OpenAI released GPT-4, the new multimodal deep learning model that will power ChatGPT going forward. If you’re wondering, the distinction is this: GPT-4 is the actual model that powers the software, while ChatGPT is the frontend that lets users talk to it and interact with it. GPT-4 is said to be significantly more powerful and capable than its predecessor, so let’s take a look at some of the features it has, as well as its release date and more.

When is GPT-4 being released?

GPT-4 is actually already out there right now, according to TechCrunch. The site says that Bing Chat, Stripe, and Duolingo have all been using GPT-4 for various purposes; Bing Chat uses GPT-4 to parse user requests, Stripe uses the model to deliver business summaries to customer support personnel, and a new Duolingo subscription tier uses GPT-4 to help users learn new languages. If you’ve used any of these services, you’ve already used GPT-4, and it’ll start appearing in other places before long.

If you want to use this tech yourself, you can pay OpenAI for ChatGPT Plus, its paid subscription tier. There is a usage cap, though, so make sure to take a look at ChatGPT Plus’ full rules to understand what you’re getting yourself into first.
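Developers can also reach GPT-4 directly through the OpenAI API (access was initially gated behind a waitlist). As a rough sketch rather than a definitive recipe, assuming you have an API key and the official openai Python package installed, a minimal GPT-4 request looks something like this:

```python
# Minimal sketch: one chat completion against GPT-4 via the OpenAI API.
# Assumes the official `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Suggest a recipe using eggs, spinach, and feta."},
    ],
)
print(response.choices[0].message.content)
```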

What can GPT-4 do that GPT-3.5 couldn’t?

The last model before GPT-4 was GPT-3.5, and as you can probably imagine, GPT-4 is significantly more capable and powerful than GPT-3.5 was. Here are some of the things that GPT-3.5 couldn’t do that GPT-4 can.

  • Generating responses from both image and text inputs. GPT-3.5 could only respond to text typed in by users, but GPT-4 can respond to image prompts and extrapolate information from them as well, which is pretty darn impressive if you ask us. An example given by OpenAI involves GPT-4 explaining what makes an image of a VGA connector hooked up to a smartphone funny, so if you want modern memes explained to you, it looks like GPT-4 will be the way to go! (For what a multimodal prompt looks like in code, see the first sketch after this list.)
  • Passing human exams. According to OpenAI, GPT-4 managed to pass a simulated bar exam (that’s the exam taken by prospective lawyers) with a score in the top 10% of test takers, while GPT-3.5 scored around the bottom 10%. That’s a huge leap in quality. The model also scored in roughly the 89th to 93rd percentile on the SAT evidence-based reading and writing and math sections.
  • Language parsing. OpenAI says that it tested GPT-4’s language capabilities using the MMLU benchmark, a suite of 14,000 multiple-choice questions spanning 57 subjects. In 24 of the 26 languages tested, including Italian, Afrikaans, and Indonesian, GPT-4 outperformed GPT-3.5’s English-language result, meaning that GPT-4 isn’t just a more capable model for English, but for other languages as well.
  • Embodying personalities and roles. One particularly interesting example given by OpenAI regarding GPT-4’s capabilities involves assigning it the role of a “Socratic tutor,” in which the AI never gives clear, straight answers but instead tries to guide students to the right answer themselves through questioning. Even when the student says they just want an answer, the AI refuses to give them one and sticks to its Socratic role. Impressive, no? (See the second sketch after this list for how such a role is assigned.)
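Regarding image inputs: OpenAI’s API represents a multimodal prompt as a list of content parts mixing text and image URLs. Here’s a minimal sketch, assuming your account has access to a vision-capable GPT-4 model; the model name and image URL below are illustrative placeholders, not OpenAI’s official example:

```python
# Sketch of a text-plus-image prompt using the OpenAI API's content-part
# format. The model name is an assumption; substitute whichever
# vision-capable GPT-4 model your account can access.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative; requires vision access
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What makes this image funny?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/vga-phone.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```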
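As for the Socratic tutor, that behavior comes from a system message that assigns the model its role before the conversation starts. Here’s a minimal sketch of the pattern; the prompt wording is our own paraphrase, not OpenAI’s published prompt:

```python
# Sketch of role assignment via a system message. The prompt text is an
# illustrative paraphrase, not OpenAI's exact Socratic-tutor prompt.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic tutor. Never give the student a direct "
                "answer, even if they demand one; guide them toward it with "
                "questions instead."
            ),
        },
        {"role": "user", "content": "Just tell me the answer to 3x + 7 = 22."},
    ],
)
print(response.choices[0].message.content)  # expect a guiding question back
```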

What are some of the limitations of GPT-4?

As with any other AI language model, GPT-4 has its limitations, and OpenAI is, appropriately enough given its name, fairly open about them. Here are some of the issues that OpenAI still needs to overcome with GPT-4.

  • Hallucinations. If you haven’t heard of “hallucinations” in an AI context, they’re confident-sounding statements that simply aren’t grounded in fact. The answer often sounds totally convincing but is, in fact, junk. OpenAI acknowledges that GPT-4 still suffers from this problem, and although hallucinations have been significantly reduced compared to previous models, they’re still possible.
  • Lack of up-to-the-minute information. GPT-4’s training data largely cuts off in September 2021, so asking it about anything that happened after that date is likely to return an incomplete or false answer. There are some small exceptions, but for the most part, if you want to ask the AI about Taylor Swift’s “Midnights” album, for instance, you may not get the answer you want.
  • Lack of learning. GPT-4 doesn’t learn from its experience, according to OpenAI, so even if a user corrects it, there’s a chance that GPT-4 won’t take the new information on board, and any correction it does accept only lasts for the current conversation anyway. This means that any answers or information received from GPT-4 should probably be taken with a pinch of salt and double-checked.
  • Simple errors and false statements. Like human beings, GPT-4 is prone to making false statements with confidence, as well as making simple mistakes. Also like humans, it will often double down when confronted about those mistakes.

Make sure that you read OpenAI’s full blog post about GPT-4; it contains many fascinating insights, graphs, and other data, whether you’re a budding AI enthusiast or simply an interested party!
