Innovation has always existed in tension between two extremes: optimism about progress and concern about its cost. AI has become the latest example of this divide, raising hopes for efficiency and access while also sparking fear about misinformation, bias, and job loss. The real question, however, is not whether AI is inherently good or bad; it’s why we’re building it and who it is meant to serve.
Purpose as strategy, not sentiment
Purpose-driven innovation is a framework for focused decision-making. It helps teams prioritize the right problems and act with intention. It means asking the right questions early: What’s the need? Who benefits? And what does success look like?
This approach is especially critical in fields like health, education, and public service, where the consequences of innovation are immediate and personal. Nowhere is this more evident than in youth mental health, where technology is rewriting how, when, and from whom help is sought.
Our experience using AI to support youth mental health
(Example of generic AI response to subtext of self-harm)
(Example of response by KHP’s Gen AI Prototype built by Nascent)
In our work with Kids Help Phone, we saw a shift: more youth turning to AI when they weren’t ready to talk to someone directly. Mental health has become one of the top reasons young people engage with generative AI. That shift signals both an opportunity and a risk.
Unregulated tools were stepping into moments that used to involve caring adults, such as parents, caregivers, teachers, or counsellors. However, many of these systems were designed for engagement and convenience rather than support. The result is that responses to youth in distress can be inaccurate, inappropriate, culturally insensitive, and harmful.
That insight informed Kids Help Phone’s mission: to build AI that creates an emotionally safe space, with the guardrails to detect risk indicators, and guides youth to person-led support when needed.
Expanding the reach of Kids Help Phone
Together with Kids Help Phone, we’re developing an early-stage, clinically guided AI prototype designed to identify risk indicators in language and guide conversations with empathy. The tool empowers young people to choose the type of support that feels right for them, including the option to connect with a human counsellor, ensuring help is available when it’s needed most.
This approach highlights how AI can be applied in other areas such as helping workplaces better support employee mental health.
The big questions we found ourselves asking
As we built and tested alongside KHP, the focus shifted. What began as a challenge to make AI work safely for youth evolved into a reflection on what responsible innovation truly means and the questions every organization will eventually face.
Five questions stayed with us, ones that continue to shape how we think about responsible innovation:
Is this advancing quality of life? Does it make someone’s day easier, save time that could be spent elsewhere, simplify a complex task, or make critical information more accurate and accessible? Progress should be felt by people, not just measured on a dashboard.
Does this amplify human capability? The most powerful applications of AI don’t remove people from the process; they multiply what humans can do: reach more, respond faster, and act with greater care.
Are we building for trust and transparency? As BNN Bloomberg reported in its coverage of the Humanity AI initiative, the next wave of innovation will be judged not by speed but by integrity: how clearly systems show what data they use, and how responsibly they act on it.
Is this top-line or bottom-line innovation? Purpose-driven innovation creates new value for customers, employees, and the organization. It’s top-line when it generates new opportunities, unlocks new markets, or delivers fresh experiences that strengthen loyalty and relevance. Efficiency can sustain growth, but it’s new value that fuels it.
What happens when this scales? As technology grows, it should move in step with the organization, reflecting its evolving products, policies, and brand values. It’s not a “set it and forget it” solution. The question is whether the system can stay current, trusted, and aligned with who you are as you grow. Innovation that can’t evolve with you eventually works against you.
These aren’t checkboxes. These are the kind of questions that keep innovation honest.
The blind spot

For many organizations, these questions remain unanswered, or worse, unasked. In the race to deploy, AI too often becomes a proof of capability. Features are launched because they can be, not because they should be. The result is intelligent technology solving the wrong problems.
What’s missing isn’t another AI implementation strategy; it’s a clear-eyed assessment of the real and tangible ways AI will amplify the unique value of your product.
Does this make life better?
Are we expanding human potential?
Are we creating long-term value or short-term optics?
Clarity, not capability, is what separates progress from motion.
Purpose as a competitive advantage
The organizations that lead the next decade won’t be the ones that deploy AI first; they’ll be the ones that deploy it wisely. They’ll measure success not by how much they automate, but by how much they elevate trust, inclusion, and human potential.
Because the future of AI isn’t just about what we can automate. It’s about how technology can amplify the quality of life for the people we serve.