OpenAI has rolled out new controls that let ChatGPT users adjust the warmth, enthusiasm, and emoji use of the popular AI chatbot’s responses.
These levels can be set to More, Less, or Default by opening the Personalisation menu of the ChatGPT app, according to a social media post by OpenAI. Users can further customise the base style and tone of ChatGPT’s responses by setting it to Professional, Candid, or Quirky – existing controls rolled out by OpenAI in November this year.
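The new controls amount to a small set of enumerated preferences: three per-characteristic levels layered on top of a base style. Purely as an illustration of that structure (OpenAI has not published any schema, and every name below is invented), the settings could be modelled as:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical names for illustration only; not an OpenAI API.
class Level(Enum):
    LESS = "less"
    DEFAULT = "default"
    MORE = "more"

class BaseStyle(Enum):
    DEFAULT = "default"
    PROFESSIONAL = "professional"
    CANDID = "candid"
    QUIRKY = "quirky"

@dataclass
class PersonalisationSettings:
    # Each characteristic is adjusted independently of the base style.
    warmth: Level = Level.DEFAULT
    enthusiasm: Level = Level.DEFAULT
    emoji_use: Level = Level.DEFAULT
    base_style: BaseStyle = BaseStyle.DEFAULT

# Example: a more formal persona with extra warmth.
settings = PersonalisationSettings(warmth=Level.MORE,
                                   base_style=BaseStyle.PROFESSIONAL)
print(settings.warmth.value)  # "more"
```

The point of the sketch is simply that the characteristic sliders and the base style are orthogonal choices, which matches how the menu is described.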
The tone and behaviour of ChatGPT have been an ongoing issue in 2025. Earlier this year, OpenAI rolled back a behavioural update to the AI chatbot after users complained that its responses were sycophantic. After adjusting GPT-5 to be “warmer and friendlier,” users again said they were frustrated by the new model, calling it colder and less friendly.
You can now adjust specific characteristics in ChatGPT, like warmth, enthusiasm, and emoji use.
Now available in your “Personalization” settings.
— OpenAI (@OpenAI) December 19, 2025
ChatGPT’s sycophantic responses raised concerns among academics and AI critics, who argue that such chatbots tend to please users and affirm their views – behaviour that can foster addictive use and harm users’ mental health.
OpenAI is also facing several lawsuits alleging that teen users died by suicide after prolonged conversations with AI chatbots. Additionally, it has come under increased scrutiny from policymakers, educators, and child-safety advocates.
Last week, the Microsoft-backed AI startup updated its Model Spec, which outlines guidelines for how its AI models should interact with users under 18, and published new AI literacy resources for teens and parents.
It laid out the following principles to guide the models’ approach for ensuring teen user safety:
– Put teen safety first, even when other user interests like “maximum intellectual freedom” conflict with safety concerns
– Promote real-world support by guiding teens towards family, friends, and local professionals for well-being
– Treat teens like teens by speaking with warmth and respect, neither condescending to them nor treating them as adults
– Be transparent by explaining what the assistant can and cannot do, and remind teens that it is not a human.
OpenAI also said it uses automated classifiers to assess text, image, and audio content in real time. These systems are designed to detect and block content related to child sexual abuse material, filter sensitive topics, and identify self-harm.
When the system flags a prompt as a serious safety concern, the content is manually reviewed by a small team of trained reviewers, who determine whether it shows signs of “acute distress” and may notify the parents of the under-18 user.
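The flow described above is a classic flag-and-escalate pipeline: automated classifiers score content, prohibited material is blocked outright, and borderline safety concerns are routed to human reviewers. A minimal sketch of that routing logic, assuming invented category names and thresholds (OpenAI has disclosed none of these details):

```python
# Hypothetical flag-and-escalate routing, based only on the behaviour
# described in the article; categories, scores, and thresholds are invented.
def triage(scores: dict[str, float],
           block_threshold: float = 0.9,
           review_threshold: float = 0.7) -> str:
    """Route a prompt based on per-category classifier scores in [0, 1]."""
    if scores.get("csam", 0.0) >= block_threshold:
        return "block"         # prohibited content is blocked automatically
    if scores.get("self_harm", 0.0) >= review_threshold:
        return "human_review"  # serious safety concerns go to trained reviewers
    return "allow"

print(triage({"self_harm": 0.85}))  # "human_review"
```

The key design choice such pipelines make is that automation handles the clear-cut cases at scale, while the ambiguous, high-stakes judgments (such as recognising “acute distress”) are reserved for people.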