Here’s how 20 years of cross-disciplinary exploration shaped my approach to building AI-enabled solutions, and why teamwork matters now more than ever.

My former team and I built our first AI-enabled solution in 2017, and in this post I want to share a few lessons we learned – especially regarding the role a human-centered perspective can play in shaping software solutions with meaningful AI features. After more than two decades of living, studying, and writing about interdisciplinary collaboration, I believe that in this era of AI, collaboration is as important as ever.
Who should read this?
- Founders & heads of product – Especially those trying to build the best possible solution and assemble the right talent to solve challenging problems. Many may not realize the true benefits of this kind of collaboration.
- Data scientists – If you haven’t worked closely with social scientists or designers before, you might be surprised by what cross-disciplinary collaboration can offer.
- UX professionals – If you’re wondering what the rapid rise of AI means for your career and professional practice, this is for you.
Some of these points are relevant to broader LLM experiences, but here I’m focused on solutions designed to drive real-world behavior change, where, I argue, human-centered practitioners and data scientists must collaborate to achieve the best results.
In the coming weeks, I intend to write posts that delve into individual project details, but today I want to offer an initial framework – one I hope will be useful to anyone staffing, guiding, or leading product teams. My goal is to provide practical advice instead of theoretical abstractions. I hope you’ll find it of value!
Context is Almost Always Bigger than the Context Window
When we design new solutions, we can’t assume users are always sitting in front of a desktop with a keyboard and mouse. They might be on a mobile device, on the go, juggling a weak Wi-Fi connection, or rushing to an appointment. We have to consider what they’re trying to do, where they are, the tools they have at their disposal, and potentially, their urgency.
The context in which a solution is used is almost always larger than the technical context window. This wider context relates to user intent, and it should be a key factor in thoughtful solution design.
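One way to make this wider context concrete is to capture it explicitly and fold it into every model request. Below is a minimal sketch; the `UsageContext` fields and the `build_prompt` helper are hypothetical illustrations, not any particular product’s API:

```python
from dataclasses import dataclass

@dataclass
class UsageContext:
    """Situational signals that live outside the model's context window."""
    device: str          # e.g. "mobile" vs. "desktop"
    connectivity: str    # e.g. "weak", "offline", "good"
    urgency: str         # e.g. "rushing to an appointment"
    goal: str            # what the user is actually trying to do

def build_prompt(question: str, ctx: UsageContext) -> str:
    """Fold situational context into the prompt so the response can be
    shaped for the user's real-world situation, not just their words."""
    return (
        "You are assisting a user in the following situation:\n"
        f"- Device: {ctx.device}\n"
        f"- Connectivity: {ctx.connectivity}\n"
        f"- Urgency: {ctx.urgency}\n"
        f"- Goal: {ctx.goal}\n"
        "Keep answers short and skimmable if the user is mobile or rushed.\n\n"
        f"User question: {question}"
    )

prompt = build_prompt(
    "Where is my next appointment?",
    UsageContext(device="mobile", connectivity="weak",
                 urgency="rushing", goal="get directions quickly"),
)
```

The point is less the mechanics than the habit: treating device, connectivity, and urgency as first-class inputs forces the team to discuss user intent before writing a single prompt.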
Chatbot Personality and Conversational Design
There’s been active debate about whether ChatGPT is too positive – sometimes verging on sycophantic. UX researchers I’ve spoken to have observed that ChatGPT tends to surface positive findings from research data and often needs explicit prompting before it will highlight negatives. This remains a model-level challenge that should be addressed if AI is going to credibly solve business problems.

For business solutions, a chatbot should reflect the brand through politeness, tone, word choice, and reading level; every response it gives is a reflection of the business and the brand.
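One practical approach is to express the brand voice as an explicit, reviewable configuration rather than scattering it across individual prompts. A sketch under stated assumptions – the field names and helper below are invented for illustration:

```python
# Brand voice as a single source of truth that designers, writers,
# and engineers can all review and iterate on together.
BRAND_VOICE = {
    "tone": "warm but direct",
    "politeness": "Acknowledge the user's question before answering",
    "reading_level": "8th grade",
    "avoid": ["jargon", "excessive enthusiasm", "unearned praise"],
}

def persona_instructions(voice: dict) -> str:
    """Render the voice configuration as system-prompt instructions."""
    avoid = ", ".join(voice["avoid"])
    return (
        f"Respond in a {voice['tone']} tone at a {voice['reading_level']} "
        f"reading level. {voice['politeness']}. Avoid: {avoid}."
    )

instructions = persona_instructions(BRAND_VOICE)
```

Keeping voice in one artifact also gives content designers and conversational researchers a concrete place to contribute, rather than asking engineers to improvise tone prompt by prompt.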
Experts in fields like anthropology, sociology, and HCI (human-computer interaction) have studied conversational design for decades. There’s no need to reinvent the wheel – we should learn from this deep body of knowledge when crafting computer-mediated interactions.
The Data Model and Users’ Mental Model
Building AI solutions usually involves data engineers or scientists thinking about the ontology and data model. Ontologies define meaning and relationships; the data model determines how data is structured. Similarly, UX professionals describe users’ thinking, motivation, and context to inform product strategy, and these can be represented in a variety of ways, including a UX artifact called mental models. Well-designed, human-centered products consider both the underlying data and the users’ mental models.
In future posts, I’ll share detailed examples of how this plays out, but the key idea is this – data in an app must be organized in meaningful categories so any returned results are actionable for the end user. Legacy software like Veeva (CRM) exemplifies the pitfalls: rather than providing a unified view of a physician, reps must consult separate tables for details, prescribing behaviors, call history, sampling, and more. The experience is organized around the data model, not how sales reps actually work.
To build better solutions today, data scientists and UX professionals need to work together. Data scientists bring deep understanding of the data structure and relationships; UX professionals know how users frame problems and which vocabulary is natural to them. Together, these perspectives shape interface features, from navigation and menus to the way results are represented on screen.
When prompts and the user’s mental model align, ease of use improves and users get the answers they need, faster.
Algorithm Design
Some might argue that algorithm design is exclusively the responsibility of data scientists. Yet if an algorithm is designed to ‘nudge’ human behavior (as they often are), the combined team has to understand:
- What are we nudging users toward? Understand the journey, happy paths, and pitfalls.
- What do users hope to accomplish? Clarify user intention.
- Which nudge works, and when? Analyze human rationale to design effective interventions.
Algorithms shape what appears in the interface, how, and when. We must understand both the underlying data and the individual receiving the nudges. It’s at the intersection of these two that we create impactful outcomes that actually drive behavior change.
Further reading
- Managing chatbot personalities isn’t just for software solutions – Claude’s creators confront this challenge, too – How Anthropic Builds Claude’s Personality
- An insightful look at training Claude to match Every’s editorial tone: Teaching Claude Every’s standards—and what the editor learned
- Indi Young’s Mental Models: Aligning Design Strategy with Human Behavior is an excellent resource. There’s a helpful summary here: Book summary
- In healthcare, UX almost always incorporates behavior change design. Irrational Labs defines it as “using behavioral science insights to inform design decisions,” drawing broadly from behavioral economics to cognitive psychology and more. Learn more in this series by Samuel Salzer. Including experts in this field strengthens outcomes well beyond what UX generalists alone can deliver.
- Read Aakash Gupta’s interview with Teresa Torres, especially the section on How To Do Discovery for AI Features.
What do you recommend?
If you’ve come across compelling, practical work on interdisciplinary teams in AI – especially studies, articles, or stories from real projects (not just theory) – I’d love your suggestions. Please share them in the comments.
