HOW AI IS CHANGING OUR HEALTH JOURNEY
14 January 2026
This year’s Innovation Challenge focuses on Health and Social Care, at a time when artificial intelligence is moving into healthcare in ways that many people still underestimate.
In the past two years, we have seen AI become a mainstream tool in the browser, in the car, and increasingly in the day-to-day decisions people make about their own wellbeing.
A recent OpenAI report suggests that AI is now embedded in personal health routines at global scale. OpenAI states that more than 40 million people use ChatGPT for health-related information every day, representing over five per cent of all messages sent through the platform.
What is most striking is how people are using it. The dominant patterns are not attempts to replace a clinician’s diagnosis.
Instead, many users are doing practical, human things: sense-checking symptoms, decoding medical terminology, interpreting a lab result, understanding medication side effects, and preparing clearer questions ahead of an appointment.
In other words, people are using AI to manage the complexity and bureaucracy that often surrounds modern healthcare, as much as the clinical questions themselves.
Access gaps as a driver of digital health and social care
The timing also matters. OpenAI reports that around seven in ten health-related conversations happen outside normal clinic hours: in the late evening, overnight, and at weekends.
That is a strong signal that people are seeking reassurance and clarity when professional support is least accessible.
The report also highlights heavy use in areas with limited access to in-person services. In hospital deserts, defined in the report as communities more than a 30-minute drive from a general medical or children’s hospital, OpenAI found that ChatGPT averaged more than 580,000 healthcare-related messages per week across a late 2025 sample period.
Taken together, this points to a simple conclusion. For many people, AI is already filling a gap, helping them navigate healthcare systems, interpret information, and decide what to do next when formal services are difficult to reach.
That is a meaningful access benefit, but it also introduces a reliability and safety challenge. People can be placed at risk if they treat an AI system as authoritative, or if inaccurate guidance is delivered with confidence, particularly in stressful circumstances.
OpenAI is careful to position ChatGPT as an aid, not a replacement for professional judgement, and that caution is appropriate.
At the same time, it is notable that the report includes initial policy proposals for what the company describes as the Intelligence Age, signalling that the conversation is moving beyond product design and into governance.
This combination of mass public usage and active policy positioning suggests the foundations are being laid for a future where conversational AI could be treated as a regulated component of care pathways, rather than a general information tool used on the side.
That future is arriving in pieces, not all at once
In the United States, Utah has launched a pilot enabling an AI system to participate in prescription renewals for certain routine, non-controlled medications, under state oversight.
The programme is positioned as a cautious step to reduce administrative burden and improve access, while still raising important questions about accountability, clinical safety, and public trust.
OpenAI has also announced new health-focused experiences within ChatGPT itself.
Reporting in January 2026 describes ChatGPT Health as a dedicated area within ChatGPT that lets users connect certain wellness apps and, in some cases, medical records through partner integrations, supporting tasks such as understanding lab results and preparing for clinical visits.
This is encouraging, but it also reinforces the importance of careful design and governance for health related AI tools.
Google’s recent actions underline why this must be handled carefully. Following reporting that some health-related AI summaries in Google Search could be misleading for sensitive medical queries, Google has removed AI Overviews for certain searches about liver test ranges.
AI Overviews are the short summaries shown at the top of search results, designed to simplify complex topics. In healthcare, however, simplification without the right context can become actively misleading. This is why tools in this area need clear sourcing, high accuracy, appropriate uncertainty, and transparency about limitations, particularly where decisions can be life-critical.
Alongside consumer use, OpenAI has also unveiled healthcare-focused tools aimed at clinical settings, emphasising administrative relief, workflow support, and the need for rigorous safety evaluation.
Enabling the expansion of predictive care
Whether in the home or in the hospital, the direction of travel is clear. AI is moving closer to the frontline of care, and expectations will rise accordingly.
At the same time, research breakthroughs are expanding what AI might detect, and when.
Stanford researchers have published results on SleepFM, a sleep foundation model that can predict risk for 130 diseases using data from a single overnight sleep study recording.
This is based on clinical polysomnography signals and long-term health records, and it demonstrates how a single stream of physiological data can become a powerful predictive signal when analysed at scale.
For Health and Social Care, these developments present both an opportunity and a responsibility.
The opportunity is to reduce administrative friction, support earlier intervention, widen access to information, and help clinicians reclaim time for human care.
The responsibility is to ensure that safety, privacy, equity, and accountability are embedded from the outset, particularly when tools are used by the public outside supervised environments, or when they begin to touch regulated decisions.
This is exactly why the Innovation Challenge matters.
Focusing innovation on real world impact
We are interested in innovations that tackle the everyday pressure points: helping residents understand and navigate care, reducing avoidable administrative burden, supporting safe triage and signposting, strengthening continuity between health and social care, and improving access for those who struggle most with distance, time, or complexity.
AI is already changing how people manage their health, often quietly, late at night, and without fanfare.
The question for the next phase is not whether this will continue, but how we shape it deliberately for the Isle of Man, working with the best innovations to support our citizens and communities, and ensuring adoption is safe, well-governed, and focused on what matters most: healthier and happier lives, a better experience of care, and a more efficient and sustainable health and social care system.
Learn more about the 2026 Innovation Challenge here.