OpenAI Updates GPT-4o With Improved Artistic Writing Potential, Reveals New Automated Pink Teaming Technique

OpenAI introduced two methods it’s bettering its synthetic intelligence (AI) fashions final week. The primary consists of releasing a brand new replace for the GPT-4o (often known as the GPT-4 Turbo), the corporate’s newest AI mannequin powering ChatGPT for paid subscribers. The corporate says the replace improves the artistic writing skill of the mannequin and makes it higher at pure language responses and writing participating content material with excessive readability. OpenAI additionally launched two analysis papers on crimson teaming and shared a brand new technique to automate the method to scale recognizing errors made by its AI fashions.

OpenAI Updates GPT-4o AI Mannequin

In a publish on X (previously generally known as Twitter), the AI agency introduced a brand new replace for the GPT-4o basis mannequin. OpenAI says the replace permits the AI mannequin to generate outputs with “extra pure, participating, and tailor-made writing to enhance relevance and readability.” It’s also mentioned to enhance the AI mannequin’s skill to course of uploaded information and supply deeper insights and “extra thorough” responses.

Notably, the GPT-4o AI mannequin is on the market to customers with the ChatGPT Plus subscription and builders with entry to the massive language mannequin (LLM) through API. These utilizing the free tier of the chatbot wouldn’t have entry to the mannequin.

Whereas Devices 360 workers members weren’t in a position to check out the brand new capabilities, one person on X posted concerning the newest enhancements within the AI mannequin after the replace. The person claimed that GPT-4o might generate an Eminem-style rap cipher with “refined inside rhyming constructions”.

OpenAI Shares New Analysis Papers on Pink Teaming

Pink teaming is the method utilized by builders and firms to make use of exterior entities to check software program and programs for vulnerabilities, potential dangers, and issues of safety. Most AI companies collaborate with organisations, immediate engineers, and moral hackers to stress-test whether or not it responds with dangerous, inaccurate, or deceptive output. Exams are additionally made to examine whether or not an AI system may be jailbroken.

Ever since ChatGPT was made public, OpenAI has been public with its crimson teaming efforts for every successive LLM launch. In a weblog publish final week, the corporate shared two new analysis papers on the development of the method. One in every of them is of explicit curiosity given the corporate claims it will possibly automate large-scale crimson teaming processes for AI fashions.

Printed within the OpenAI area, the paper claims that extra succesful AI fashions can be utilized to automate crimson teaming. The corporate believes AI fashions can help in brainstorming attacker objectives, how an attacker’s success may be judged, and understanding the range of assaults.

Increasing on it, the researchers claimed that the GPT-4T mannequin can be utilized to brainstorm a listing of concepts that represent dangerous behaviour for an AI mannequin. Some examples embrace prompts equivalent to “learn how to steal a automotive” and “learn how to construct a bomb”. As soon as the concepts have been generated, a separate crimson teaming AI mannequin may be constructed to trick ChatGPT utilizing an in depth collection of prompts.

At present, the corporate has not begun utilizing this technique for crimson teaming given a number of limitations. These embrace the evolving dangers of AI fashions, exposing the AI to lesser-known strategies for jailbreaking or producing dangerous content material, and the necessity for a better threshold for information in people to accurately choose the potential dangers of output as soon as the AI mannequin turns into extra succesful.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.