People

Celebrating the humans behind NLX

by Cecilia Bolich


Every great conversational AI experience is held together by invisible decisions. When to keep it simple, when to take action, when to clarify, and so on. The work is subtle, and the failures can be, too. Trust is the real KPI in conversational AI, and it’s the easiest thing to lose.

At NLX, Sophie and Dan are two of the people obsessing over those details so your experiences feel steady and real.

“Build AI with boundaries.”

Most people think conversation design is about writing prompts, but Sophie Henry, our Senior Conversation Designer, approaches it more like designing emotional physics: the little conversational moves (tone, timing, clarification, pushback, recovery) that reliably produce trust.

But Sophie’s quick to call out one of the biggest threats to trust: people-pleasing AI.

Over-helpful AI is optimized to be accommodating even when it shouldn’t be. These people-pleasing behaviors agree with the user rather than prioritizing what’s actually helpful. Sophie’s stance is simple: the best assistants know when to push back and set boundaries, like a good friend who doesn’t let you walk into traffic.

She’s also tuned to a dead giveaway that an AI workflow was designed for the business and not the user. “There’s either no flexibility, or there’s flexibility without action,” says Sophie. If users can ask questions but the system can’t respond meaningfully (or can’t do anything with their answers), the experience collapses into an inflexible script.

Sophie designs for the tiny moments most conversational AI teams skip: graceful recovery and well-timed clarification that make customers feel cared for without letting the AI pretend it’s capable of something it isn’t.

“The thing to stop over-engineering…”

If Sophie protects the user experience from the inside, Dan Pereira, the team’s Principal Solutions Architect, protects everything around it.

Dan lives in the reality where conversational AI meets the messy world of production systems, security requirements, latency constraints, integration quirks, and the dreaded “this has to work on Monday.” Dan’s superpower is translating ambition into architecture, and he has a knack for spotting the biggest trap teams fall into with AI: assuming a successful demo means a system is ready for production.

But Dan knows AI doesn’t fail like traditional software. It can be an overconfident optimist. That’s why he pushes teams to treat model output “as suggestion, not truth,” adding determinism, validation, and guardrails wherever an AI system is expected to behave like a source of truth.

But he’s quick to point out the counter-trap. “Too much safeguarding can be expensive and even introduce new risks.” The real work is finding the right balance. Engineers want correctness, but probabilistic systems can trigger an instinct to pile on abstraction, layers, and orchestration before trying the simpler path. Dan’s rule: get a solid model, strong, concise prompting, and targeted guardrails so you can ship something you can maintain and trust.

Follow along with Sophie and Dan's journeys

Follow Sophie here.

Follow Dan here.