
OpenAI's new AI model promises to be “more truthful and less toxic”


OpenAI has made a new version of its GPT-3 AI language model available that promises to be better at following user intentions while also producing results that are more truthful and less toxic.

The OpenAI API is powered by GPT-3 language models, which can be used to perform natural language tasks using carefully engineered text prompts. However, the models can also produce outputs that are untruthful, toxic, or reflect harmful sentiments.
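In practice, this means a caller sends a text prompt and receives a model-generated completion. As a rough illustration only (not taken from the article), a request through the openai Python client of that era might look like the sketch below; the model name, parameters, and environment variable are assumptions.

```python
# Minimal sketch of prompting a GPT-3 completion endpoint via the pre-1.0
# "openai" Python client. Model name and parameters are illustrative
# assumptions, not details from the article.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set by the caller

response = openai.Completion.create(
    model="text-davinci-001",   # hypothetical engine choice
    prompt="Summarize in one sentence:\n\nGPT-3 is a large language model trained on Internet text.",
    max_tokens=60,
    temperature=0.2,            # lower temperature gives more deterministic output
)
print(response["choices"][0]["text"].strip())
```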

The organisation's AI models have been criticised in the past for a range of shortcomings, including racist outputs and bias against specific genders and religions. OpenAI once deemed GPT-2, the predecessor model trained on eight million web pages it had scanned to learn about language, too dangerous to make public because it could be used to create convincing fake news stories.


The organisation said this is partly because GPT-3 is trained to predict the next word on a large dataset of Internet text, rather than to safely perform the language task the user actually wants.
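To make that objective concrete, the sketch below shows the standard next-token prediction loss a GPT-style model is pre-trained with: a cross-entropy between the model's prediction at each position and the token that actually follows. The tensors are toy placeholders, not anything from OpenAI's training setup.

```python
# Toy illustration of the next-word (next-token) prediction objective used to
# pre-train GPT-style models: maximize the likelihood of each token given the
# tokens that precede it.
import torch
import torch.nn.functional as F

vocab_size = 50257
batch, seq_len = 2, 16

logits = torch.randn(batch, seq_len, vocab_size)          # model outputs (placeholder)
tokens = torch.randint(0, vocab_size, (batch, seq_len))   # input token ids (placeholder)

# Predict token t+1 from positions up to t: shift logits and targets by one.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())
```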

To make its models safer and more aligned with users, OpenAI used a technique known as reinforcement learning from human feedback (RLHF), in which human helpers known as labelers guide the model's learning.

“On prompts submitted by our customers to the API, our labelers provide demonstrations of the desired model behavior, and rank several outputs from our models. We then use this data to fine-tune GPT-3,” said the company.
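For readers curious how labeler rankings become a training signal, the sketch below shows the usual RLHF ingredient built from them: a reward model trained with a pairwise loss so that it scores the labeler-preferred output higher than the rejected one. This is a generic illustration of the technique, not OpenAI's code; the function name and toy values are made up.

```python
# Hedged sketch of the reward-model step in RLHF: labeler rankings of model
# outputs are turned into pairwise comparisons, and the reward model is trained
# to assign a higher score to the preferred output. Generic illustration only.
import torch
import torch.nn.functional as F

def reward_model_pairwise_loss(reward_chosen: torch.Tensor,
                               reward_rejected: torch.Tensor) -> torch.Tensor:
    """Loss is small when the labeler-preferred output gets the higher reward.

    reward_chosen / reward_rejected: scalar rewards assigned by the reward model
    to the higher- and lower-ranked outputs for the same prompt.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage with placeholder reward scores for a batch of ranked pairs.
chosen = torch.tensor([1.3, 0.7, 2.1])
rejected = torch.tensor([0.2, 0.9, 1.0])
print(reward_model_pairwise_loss(chosen, rejected))
```

In the InstructGPT work this reward model then provides the optimisation signal for a reinforcement-learning step (PPO) that fine-tunes GPT-3 toward outputs labelers prefer, alongside supervised fine-tuning on the labelers' demonstrations.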