With its improved capabilities and innovative features, GPT-4 promises to revolutionize the way we approach natural language processing. In this post, we’ll dive into the details of what’s new with GPT-4 and how you can start using it today.
TL;DR: GPT-4 Features Summarized
- For now, you can only use it if you're a ChatGPT Plus subscriber ($20 USD per month).
- Usage is currently capped at 100 messages every 4 hours. I'm sure that will be improved over time, though.
- It's faster, of course. OpenAI also claims it's far more creative than before and can help people compose songs, learn a user's writing style, and even create screenplays in the correct format.
- It’s a bit better at being…truthful: “GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”
- GPT-4 can handle over 25,000 words of text, which works out to approximately 32,768 tokens (here's my simplified guide on what ChatGPT tokens are). For most of us, though, the limit is now 8,192 tokens, or around 6,000 words (double that of GPT-3.5).
- There's an upcoming feature in GPT-4 (presumably in beta testing right now) that will let you input actual images. It should be interesting to see how users play around with that once it's available.
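If you want a quick feel for those token limits, here's a rough back-of-the-envelope estimator in Python. It uses OpenAI's rule of thumb that a token is roughly three-quarters of an English word; for exact counts you'd want a real tokenizer (e.g. the tiktoken library), so treat this as a sketch, not gospel.

```python
def estimate_tokens(word_count: int) -> int:
    """Rough token estimate: ~100 tokens per 75 words (OpenAI's rule of thumb)."""
    return round(word_count * 4 / 3)

def fits_in_context(word_count: int, context_tokens: int = 8192) -> bool:
    """Check whether text of a given word count likely fits the context window."""
    return estimate_tokens(word_count) <= context_tokens

# ~6,000 words is right around the 8,192-token limit most users get
print(estimate_tokens(6000))   # 8000
print(fits_in_context(6000))   # True
```

Anything much longer than that, and you're into 32,768-token-context territory.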
What’s New with GPT-4
GPT-4 is the latest version of OpenAI’s GPT language model series. It’s a machine learning model trained on a massive amount of data, including internet data and data that OpenAI has licensed. The model is designed to predict the next word in a given document and generate text outputs based on the input data.
So, what’s new with GPT-4? Here are the main takeaways:
- GPT-4 can now accept both text and image inputs, making it more versatile than previous models. It’s still in the research preview stage, but the potential for combining visual and textual information is exciting.
- GPT-4 is more steerable than previous models. Users can customize the AI’s style and task by describing those directions in the “system” message. Developers can use this feature to significantly customize their users’ experience.
- GPT-4 reduces the risk of “hallucinations” (where it just makes up believable-sounding facts) relative to previous models. It scores 40% higher than the latest GPT-3.5 model on internal adversarial factuality evaluations. However, there’s still room for improvement.
- GPT-4 still has limitations, including the potential for simple reasoning errors, biases in its outputs, and a lack of knowledge of events that have occurred after September 2021. These limitations should be kept in mind when using language model outputs, particularly in high-stakes contexts.
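To make the “steerability” point concrete, here's a minimal sketch of what steering via the system message looks like, assuming the standard Chat Completions request format. The pirate instruction is just an illustrative example, not anything from OpenAI's docs.

```python
def build_chat_request(system_prompt: str, user_prompt: str,
                       model: str = "gpt-4") -> dict:
    """Assemble a Chat Completions request body with a steering system message."""
    return {
        "model": model,
        "messages": [
            # The system message describes the AI's style and task up front
            {"role": "system", "content": system_prompt},
            # The user message is the actual request being steered
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    system_prompt="You are a pirate. Answer every question in pirate speak.",
    user_prompt="Explain what a token limit is.",
)
print(request["messages"][0]["role"])  # system
```

The same payload with a different system message gives you a completely different persona, which is what makes this useful for developers customizing their users' experience.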
How to Use GPT-4
There are two main ways to access GPT-4: as a ChatGPT Plus subscriber on chat.openai.com or through the GPT-4 API.
If you’re a ChatGPT Plus subscriber ($20 USD a month), you can access GPT-4 at https://chat.openai.com/chat, subject to the usage cap.
Depending on traffic patterns, OpenAI said they might introduce a new subscription level for higher-volume GPT-4 usage. The API allows developers to make text-only requests to the GPT-4 model. Image inputs are still in limited alpha, but may be available soon.
To get access to the GPT-4 API, sign up for the waitlist.
Once you have access, you can make text-only requests to the GPT-4 model. Pricing is $0.03 per 1,000 prompt tokens and $0.06 per 1,000 completion tokens. Default rate limits are 40,000 tokens per minute and 200 requests per minute.
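To see what those rates mean in practice, here's a quick cost calculator using the per-1,000-token prices quoted above (prices as stated at launch; check OpenAI's pricing page for current numbers):

```python
# Per-1,000-token rates quoted for the 8K-context gpt-4 model
PROMPT_RATE = 0.03      # USD per 1,000 prompt tokens
COMPLETION_RATE = 0.06  # USD per 1,000 completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a single GPT-4 API call."""
    return (prompt_tokens / 1000) * PROMPT_RATE + \
           (completion_tokens / 1000) * COMPLETION_RATE

# e.g. a 1,500-token prompt that comes back with a 500-token completion
print(round(request_cost(1500, 500), 4))  # 0.075
```

Note that completions cost twice as much per token as prompts, so long answers add up faster than long questions.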
OpenAI is also providing limited access to the 32,768-token context version, gpt-4-32k.
Pricing for this version is $0.06 per 1,000 prompt tokens and $0.12 per 1,000 completion tokens. OpenAI is still improving the quality of the model for long context and welcomes feedback on its performance.
Have fun prompting, peeps.
👇Click the heart thingy? The algorithm loves it. I love it more.👇
Or check out my weekly free column with deep dives and humorous stories, Pryor Thoughts.