OpenAI today launched a new version of the DALL-E editor, the artificial intelligence image generator included in ChatGPT’s paid tiers.
The feature is based on an AI model called DALL-E 3 that the company debuted last September. A few weeks later, OpenAI integrated the model into ChatGPT. The original version of the DALL-E editor that launched last year enabled customers to generate images based on text prompts and visual examples, as well as make follow-up edits.
Today’s update will make it easier for users to edit the images they generate.
Within ChatGPT, the DALL-E editor is accessible through the same chatbot interface as the service’s other features. A newly added “Select” button at the top of the interface enables users to highlight the specific image section they wish to edit. From there, they can enter natural language instructions describing the changes they wish to make.
A user could, for example, draw a circle around a tree in a photo of a forest and have the DALL-E editor remove it. It’s also possible to change the design of the objects in an image or add new ones. “We recommend selecting a large space around the area you intend to edit to obtain better results,” OpenAI explained in a knowledge base article detailing the update.
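The internals of the ChatGPT editor are not public, but OpenAI’s Images API exposes a conceptually similar masked-edit workflow through its `images.edit` endpoint, where a PNG mask’s transparent pixels mark the region to regenerate, much like the area a user highlights with the Select tool. The sketch below only assembles the request parameters rather than calling the API; the file names and the helper function are hypothetical.

```python
def build_masked_edit_request(image_path: str, mask_path: str, prompt: str) -> dict:
    """Assemble parameters for a masked image edit via OpenAI's Images API.

    The mask's transparent pixels mark the region to regenerate --
    conceptually the area a user would highlight with the Select tool.
    This is a hypothetical sketch; it builds the parameters but does not
    send a request.
    """
    return {
        "model": "dall-e-2",   # the Images API model that supports edits
        "image": image_path,   # original image (square PNG)
        "mask": mask_path,     # PNG whose transparent area marks the edit region
        "prompt": prompt,      # natural-language description of the desired change
        "n": 1,                # number of edited variants to generate
        "size": "1024x1024",
    }

# Example mirroring the article's scenario: removing a tree from a forest photo.
params = build_masked_edit_request(
    "forest.png", "tree_mask.png",
    "Remove the tree and fill in the forest background",
)
```

In an actual call, these parameters would be passed to the API client with the image and mask opened as file objects.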
The company’s engineers have also added several usability features as part of the update. In the DALL-E editor, new Undo and Redo buttons make it possible to quickly step backward and forward through selections made with the Select tool. Customers can also adjust the aspect ratio of generated images, as well as access drawing style suggestions.
The DALL-E editor is available in ChatGPT Plus, the paid consumer edition of the chatbot, as well as in the two more advanced product tiers that OpenAI offers for organizations. The feature is accessible in both the web and mobile versions of ChatGPT.
DALL-E 3, the AI image generator on which the feature is based, is the third iteration of a neural network that OpenAI first debuted in 2021. It generates higher-quality images than the previous versions. It can also follow user instructions more accurately, a feature that OpenAI credits to DALL-E 3’s training dataset.
The company’s researchers trained the AI on a large collection of images and corresponding captions. According to OpenAI, 95% of those captions were created using a custom language model developed specifically for DALL-E 3. This language model generates relatively short image descriptions that only detail an image’s core elements, an approach OpenAI has found to be conducive to AI training.
DALL-E 3 is one of several models the company has developed for multimedia generation tasks. Its other entries into the category include Voice Engine, an AI system that can generate synthetic speech, and the Sora text-to-video model. DALL-E 3 is the only one of the three that OpenAI has made broadly available.
Image: OpenAI