Customizing AI Personalities: How My Approach Compares to OpenAI’s
When I wrote my original piece on Designing AI Agent Personalities, I argued that personality is not just a stylistic choice, but a strategic design decision. That choice shapes how effective an AI is in its role, how well it resonates with users, and how well it aligns with brand and ethical considerations.
Recently, OpenAI released its own “Customizing Your ChatGPT Personality” feature. I was curious to see how their approach lined up with mine. After digging into it, I found areas of overlap, some important differences, and a few lessons worth sharing.
Purpose: Strategic vs. Stylistic
In my framework, personality is deeply tied to the AI’s mission. Before I assign a personality, I go through a pre-design analysis. I ask questions like:
What is the AI’s core purpose and goals?
Who will be using it, and in what emotional state?
What is the environment or channel of interaction?
How does it fit the brand’s voice and values?
What ethical or practical risks need to be addressed?
For example, if I am designing an AI for a financial planning service, I might choose an Analytical personality that delivers precise, data-backed advice. But if the same service offers post-appointment support, an Amiable personality might be better for follow-up conversations.
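To make that concrete, here is a minimal sketch of how a pre-design brief could be captured and mapped to an archetype. The field names and the selection heuristic are illustrative assumptions of mine, not part of any published framework.

```python
from dataclasses import dataclass

# Illustrative only: field names are my own, not from the original framework.
@dataclass
class PersonalityBrief:
    core_purpose: str   # what the AI is for
    audience: str       # who uses it, and in what emotional state
    channel: str        # environment or channel of interaction
    brand_voice: str    # brand voice and values it must fit
    risks: list[str]    # ethical or practical risks to address

def choose_archetype(brief: PersonalityBrief) -> str:
    """Toy heuristic mapping a brief to one of the four archetypes."""
    if "crisis" in brief.audience or "urgent" in brief.audience:
        return "Driver"      # fast, decisive action
    if "data" in brief.core_purpose or "analysis" in brief.core_purpose:
        return "Analytical"  # precise, evidence-backed answers
    if "support" in brief.core_purpose or "follow-up" in brief.core_purpose:
        return "Amiable"     # warm, relationship-focused
    return "Expressive"      # energetic, persuasive default

# Example: the financial-planning scenario above
advice_bot = PersonalityBrief(
    core_purpose="data-backed financial analysis",
    audience="clients reviewing their portfolios",
    channel="web chat",
    brand_voice="credible, measured",
    risks=["overconfidence in projections"],
)
print(choose_archetype(advice_bot))  # -> "Analytical"
```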
OpenAI’s approach is different. Their personalities are essentially style presets for ChatGPT: Cynic, Robot, Listener, Nerd, and Default. Choosing one changes the tone of responses, but not the underlying behavior or problem-solving approach. A Cynic might give sarcastic commentary along with an answer, while a Robot will be concise and factual. However, neither will change the way tasks are prioritized or solutions are framed. The choice is for the user’s enjoyment or usefulness, not for the AI’s function.
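For context, these presets live in the ChatGPT settings rather than in the API, but you can approximate the same style-only effect with a system prompt. The instruction below is my own guess at a "Robot"-like style, not OpenAI's actual preset wording, and the model name is simply a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# My own approximation of the "Robot" preset; the real preset wording
# is internal to ChatGPT and not published.
robot_style = (
    "Answer precisely and efficiently. No small talk, no filler, "
    "no emotional language. State facts and stop."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": robot_style},
        {"role": "user", "content": "Should I refinance my mortgage at 5.1%?"},
    ],
)
print(response.choices[0].message.content)
```

Note that this only changes the tone of the answer, which is exactly the point: the underlying reasoning and task behavior stay the same.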
Interestingly, some reporting indicates that a secondary goal of these new personalities is to address the "sycophancy problem" that dogged earlier models. That hints at a strategic element in the design, even if the user-facing purpose remains a matter of personal preference.
The Personality Models
My model draws from established psychology and communication theory, using four archetypes: Driver, Expressive, Analytical, and Amiable. These archetypes trace back to Carl Jung's psychological types and are designed to improve human communication. They map closely to how humans naturally interact and adapt in conversation.
OpenAI’s archetypes are creative, entertaining, and easy to understand, but they are not tied to those underlying behavioral theories. They include:
Cynic: Sarcastic and blunt, delivering direct and practical answers with wit.
Robot: Precise, efficient, and emotionless, providing direct answers without extra words.
Listener: Warm and laid-back, reflecting thoughts back with calm clarity and light wit.
Nerd: Playful and curious, explaining concepts clearly while celebrating knowledge and discovery.
In short, these presets describe how the AI sounds rather than how it thinks.
If I were to loosely map the two sets of archetypes:
Driver shares some traits with Robot in its directness, but Driver is still goal-oriented and situationally adaptive.
Expressive has a touch of Nerd’s enthusiasm, but Nerd focuses more on curiosity and deep dives than persuasion.
Analytical overlaps with Robot in precision, but Robot will not necessarily present a systematic pros-and-cons breakdown unless asked.
Amiable has similarities with Listener, although Listener is more of a calm sounding board than a truly empathetic partner.
Adaptivity and Context
In my framework, an ideal AI personality can adapt based on user signals and context. Imagine a customer service AI that starts off Amiable but becomes more Driver-like if a user is in a crisis and needs fast action.
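As a rough illustration of what that switch could look like, here is a hypothetical sketch; the keyword matching is only a stand-in for whatever intent detection a real system would use.

```python
# Hypothetical sketch of context-driven personality switching.
URGENT_SIGNALS = ("fraud", "locked out", "emergency", "right now")

def pick_personality(message: str, default: str = "Amiable") -> str:
    """Shift from Amiable to Driver when the user signals a crisis."""
    text = message.lower()
    if any(signal in text for signal in URGENT_SIGNALS):
        return "Driver"  # short, action-first responses
    return default       # warm, relationship-first responses

print(pick_personality("Hi, just checking on my order"))                # Amiable
print(pick_personality("My card was used for fraud, help right now"))  # Driver
```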
OpenAI’s personalities are static. Once you pick one, it stays in that mode for the conversation unless you explicitly tell it to change or start a new conversation. While this is fine for casual use, the ability to adapt is critical in enterprise or high-stakes settings. It is worth noting, however, that the presets can be combined with a user's custom instructions, allowing for some level of fine-tuning.
Depth of Pre-Design Analysis
When I work on AI personality, the analysis phase often takes as much time as the build itself. I ask questions like:
Is this personality going to build trust or undermine it?
Could it inadvertently frustrate or manipulate the user?
Does it fit the cultural and emotional context of the interaction?
OpenAI’s system skips this entirely. Users choose a personality based on preference, not on alignment with the AI’s mission or the user’s needs in a specific domain.
Functional Impact
Here is where the biggest difference lies. In my model, personality can shape not only how the AI talks, but also what it does. Take a project management AI. A Driver version might proactively remind you of deadlines and push you to complete tasks early. An Amiable version might check in gently, ask how you are doing, and suggest rearranging tasks to reduce stress. These are fundamentally different behaviors.
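A toy sketch of that difference, where the archetype changes when the AI acts, not just how it phrases things; the thresholds and wording here are invented for illustration.

```python
from datetime import date, timedelta

# Illustrative sketch: same task data, but personality changes behavior,
# not just tone. Thresholds and phrasing are assumptions of mine.
def remind(task: str, due: date, personality: str, today: date) -> str | None:
    days_left = (due - today).days
    if personality == "Driver":
        # Pushes early and skips pleasantries.
        if days_left <= 5:
            return f"'{task}' is due in {days_left} days. Start it today."
    elif personality == "Amiable":
        # Waits longer, checks in gently, offers to rearrange work.
        if days_left <= 2:
            return (f"How are you doing? '{task}' is due in {days_left} days. "
                    "Want me to move anything to free up time?")
    return None  # no reminder yet

today = date(2025, 3, 1)
due = today + timedelta(days=4)
print(remind("Quarterly report", due, "Driver", today))   # reminds now
print(remind("Quarterly report", due, "Amiable", today))  # stays quiet for now
```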
In OpenAI’s system, no matter which personality you choose, the AI performs the same underlying function. The differences lie in word choice, humor, and tone. Ask for a code snippet and a Listener personality will set aside its usual conversational style and return the code in a clear, functional format.
Ethics and Responsibility
I believe that every AI personality design should come with a review of its ethical implications. For example, a highly persuasive Expressive AI in a sales context could unintentionally cross into manipulation if it is not carefully scoped.
OpenAI addresses ethics primarily through safety rules that remain constant across personalities. That is a good baseline, but the company does not appear to tailor ethical considerations to each personality’s potential risks.
Key Takeaways
From my comparison, a few things stand out:
We are solving different problems. My framework is for designing purpose-built AI agents where personality is core to the function. OpenAI’s feature is for personalizing a general-purpose assistant to match user preferences.
Their approach is accessible. You do not need to be an AI designer to benefit from picking a personality in ChatGPT. It is quick and fun, and that is part of its appeal.
There is room for evolution. If OpenAI layered a strategic design phase on top of their personalities, they could offer both the accessibility of presets and the effectiveness of role-specific personalities. The future of personality design may also incorporate the idea of adding "imperfections" and "quirks" to make AI feel more genuinely human.
For businesses, the strategic approach still matters. If you want an AI that does more than sound different, you need to design personality into its goals, context, and behavior. This is what separates a generic chatbot from a brand-led AI experience, or what some have called a "living brand moment."
I see OpenAI’s work as a great step in making people aware that AI personality matters. But for me, personality will always be more than a coat of paint. It is the blueprint for how an AI thinks, interacts, and earns trust over time.