ChatGPT update enables its AI to “see, hear, and speak,” according to OpenAI

Ars Technica 2023-09-25

Summary:

An illustration of a cybernetic eyeball. (credit: Getty Images)

On Monday, OpenAI announced a significant update to ChatGPT that enables its GPT-3.5 and GPT-4 AI models to analyze images and react to them as part of a text conversation. Also, the ChatGPT mobile app will add speech synthesis options that, when paired with its existing speech recognition features, will enable fully verbal conversations with the AI assistant, OpenAI says.

OpenAI is planning to roll out these features in ChatGPT to Plus and Enterprise subscribers "over the next two weeks." It also notes that speech synthesis is coming only to the iOS and Android apps, while image recognition will be available on both the web interface and the mobile apps.

OpenAI says the new image recognition feature in ChatGPT lets users upload one or more images for conversation, using either the GPT-3.5 or GPT-4 models. In its promotional blog post, the company claims the feature can be used for a variety of everyday applications: from figuring out what's for dinner by taking pictures of the fridge and pantry, to troubleshooting why your grill won’t start. It also says that users can use their device's touch screen to circle parts of the image that they would like ChatGPT to concentrate on.


Link:

https://arstechnica.com/?p=1970737

From feeds:

Everything Online Malign Influence Newsletter » Newsletter
Cyberlaw » Ars Technica
Music and Digital Media » Ars Technica

Tags:

policy-digital, newsletter, credible, whisper, vision, text, tech, synthesis, speech, recognition, openai, multimodal, models, microsoft, machine, learning, large, language, it, ios, eyes, ethics, computer, chatgpt, chat, biz, bing, android, ai

Authors:

Benj Edwards

Date tagged:

09/25/2023, 15:37

Date published:

09/25/2023, 14:38