The scientists are utilizing a method termed adversarial instruction to halt ChatGPT from allowing consumers trick it into behaving badly (often known as jailbreaking). This function pits a number of chatbots from each other: 1 chatbot plays the adversary and attacks One more chatbot by making textual content to force it to buck its normal constrai