Practical AI Assistants People Can Trust
Many teams want to use AI but worry it will say the wrong thing, cost too much, or confuse their users. We helped create three real-world assistants — Ask SolE, Ask Hally, and Hey Bloom — by giving each of them a clear purpose, safe boundaries, and simple ways to control cost and behavior.
The challenge
Leaders were excited about AI but nervous about putting it in front of real people. They had seen chatbots give wrong or risky answers, ignore language needs, and run up unexpected bills. They needed assistants that were clear about what they could and could not do, would not “go rogue”, and could be trusted in sensitive areas like health, education, and business decisions.
Our approach
We started by defining the real job of each assistant: who it serves, what it is allowed to answer, and where it must say “I don’t know” or “you should talk to a human”. Then we built a shared system that every assistant could use — one place to manage what information it can see, what topics are off-limits, which languages it supports, and how much it is allowed to spend. On top of that, each assistant got its own “personality” and content, but all of them followed the same simple, safe rules.