Last week, OpenAI published the GPT-4o “scorecard,” a report that details “key areas of risk” for the company’s latest large language model, and how it hopes to mitigate them.
In one terrifying instance, OpenAI found that the model’s Advanced Voice Mode — which allows users to speak with ChatGPT — unexpectedly imitated users’ voices without their permission, Ars Technica reports.
“Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode,” OpenAI wrote in its documentation. “During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.”