Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:
- New GPT-4 Turbo model that is more capable and cheaper, and supports a 128K context window
- New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
- New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
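As a rough sketch of how the announcements above surface to developers, the snippet below builds (but does not send) the JSON request bodies for a GPT-4 Turbo chat completion and an Assistants API assistant, using only the Python standard library. The payload field names follow the public API reference; the model identifier `gpt-4-1106-preview` is the DevDay-era GPT-4 Turbo name and is an assumption that may have changed since.

```python
# Sketch: request payloads for the newly announced endpoints, stdlib only.
# Shapes follow the public API reference; "gpt-4-1106-preview" is the
# DevDay-era GPT-4 Turbo identifier and is an assumption.
import json

API_BASE = "https://api.openai.com/v1"
MODEL = "gpt-4-1106-preview"  # assumption: GPT-4 Turbo, 128K-token context


def chat_request(prompt: str) -> dict:
    """Body for POST /v1/chat/completions against GPT-4 Turbo."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def assistant_request(name: str, instructions: str) -> dict:
    """Body for POST /v1/assistants: an assistant with goals and tools."""
    return {
        "model": MODEL,
        "name": name,
        "instructions": instructions,
        "tools": [{"type": "code_interpreter"}],
    }


body = chat_request("Summarize the DevDay announcements.")
print(json.dumps(body, indent=2))
```

Sending either body as an authenticated POST to the corresponding `API_BASE` path (or using the official `openai` SDK) exercises the same endpoints.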
We'll begin rolling out new features to OpenAI customers starting at 1pm PT today.