Today at OpenAI DevDay, the company’s first ever developer conference, OpenAI introduced a preview of GPT-4 Turbo, a refined version of its flagship AI model, GPT-4.
“GPT-4 Turbo addresses many of the things you’ve asked for. We have six major upgrades to this model,” Sam Altman said at the event.
Claimed to be both more capable and more cost-efficient, GPT-4 Turbo arrives in two versions: one dedicated to text analysis and another that can comprehend both text and images. Both are available in preview through an API, and OpenAI plans to make them generally available in the coming weeks.
It’s priced at $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens. The pricing for image-processing with GPT-4 Turbo will vary according to the image size. The company optimised its performance to offer GPT-4 Turbo at significantly reduced costs: 3x cheaper for input tokens and 2x cheaper for output tokens compared to GPT-4.
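Based on the per-1,000-token prices quoted above, the savings are easy to sketch. The example request size below is illustrative, not from the announcement:

```python
# Back-of-the-envelope cost comparison using the article's quoted prices:
# GPT-4 Turbo: $0.01 per 1K input tokens, $0.03 per 1K output tokens.
# GPT-4:       $0.03 per 1K input tokens, $0.06 per 1K output tokens.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Dollar cost of one request, with prices given per 1,000 tokens."""
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# Hypothetical request: 2,000 input tokens, 1,000 output tokens.
turbo_cost = request_cost(2000, 1000, 0.01, 0.03)  # $0.05
gpt4_cost = request_cost(2000, 1000, 0.03, 0.06)   # $0.12

print(f"GPT-4 Turbo: ${turbo_cost:.2f}, GPT-4: ${gpt4_cost:.2f}")
```

For this mix of tokens, the same request costs less than half as much on GPT-4 Turbo, consistent with the 3x/2x input/output reductions cited above.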
Features in GPT-4 Turbo
“We are just as annoyed as all of you, and probably more, that GPT-4’s knowledge about the world ended in 2021. We will try to never let it get that outdated again,” Altman said. The updated model’s knowledge now extends to April 2023, and Altman said the company will continue to keep it up to date over time.
GPT-4 Turbo also supports a 128,000-token context window, larger than that of any commercially available model, which aims to provide better-informed responses and keep the model from straying off-topic. Additionally, the model supports a new “JSON mode” for valid JSON responses, offering increased utility in web applications and other settings that consume structured output.
Altman said, “We’ve heard loud and clear that developers need more control over the models’ responses and outputs. For this, we have a new feature called JSON mode, which ensures that the model will respond with valid JSON. It’ll make calling APIs much easier.”
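In practice, JSON mode means a developer can ask for machine-readable output and parse the reply directly. The sketch below shows roughly what such a request payload might look like against OpenAI’s Chat Completions API; the exact parameter names and the model identifier are assumptions based on the announcement, and the API reply is stubbed with a placeholder string:

```python
import json

# Hypothetical request payload for the announced JSON mode; parameter names
# and model name are assumptions, not confirmed details from the event.
payload = {
    "model": "gpt-4-turbo-preview",
    "response_format": {"type": "json_object"},  # ask for valid JSON back
    "messages": [
        {"role": "system", "content": "Reply only in JSON."},
        {"role": "user", "content": "List two primary colors."},
    ],
}

# Stand-in for the API's reply; with JSON mode on, the model's output is
# guaranteed to be valid JSON, so it should always parse cleanly.
simulated_reply = '{"colors": ["red", "blue"]}'
parsed = json.loads(simulated_reply)
print(parsed["colors"])  # ['red', 'blue']
```

The point of the guarantee is that the `json.loads` step never fails on malformed output, which removes a whole class of retry-and-repair logic from applications that feed model output into other APIs.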
Fine-Tuning Program and Pricing Updates
OpenAI concurrently announced an experimental access program for fine-tuning GPT-4, which will involve more oversight and guidance than GPT-3.5 fine-tuning due to its technical intricacies. While doubling the tokens-per-minute rate limit for paying GPT-4 customers, the company said pricing will remain at $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for models with varying context window sizes.
Furthermore, OpenAI announced updated knowledge bases and longer context windows for both GPT-4 and GPT-3.5. The company also pledged to provide legal indemnity through the Copyright Shield program, offering support and covering costs in the face of potential legal claims around copyright infringement for enterprise users.
Taken together, the upgrades across OpenAI’s flagship models and the Copyright Shield commitment reflect a dual push: advancing AI capabilities for developers while shielding enterprise users from legal risk, in line with similar customer-protection initiatives from industry peers.