Clinicians increasingly turning to AI to diagnose health issues, study finds

LOS ANGELES - As artificial intelligence races forward, many health care professionals are racing to catch up.
According to Elsevier’s 2025 Clinician of the Future survey, clinicians around the world increasingly believe AI can help them diagnose patients faster, improve outcomes, and reduce burnout. But the enthusiasm isn’t always translating to real-world use—especially in the U.S.
While the number of health care workers using AI tools has nearly doubled since last year, the majority still rely on generalist platforms like ChatGPT rather than medical-specific tools. And fewer than one in five are actually using AI to support clinical decision-making.
Why are clinicians turning to AI?
The backstory:
The survey gathered responses from 2,206 clinicians across 109 countries, offering a snapshot of how physicians and nurses are responding to the AI wave reshaping their field.
Nearly half of those surveyed said they’re treating more patients now than they were two years ago, and 28% admitted they don’t have enough time to provide quality care. Many see AI as a potential lifeline:
- 70% expect it to save them time.
- 58% believe it can speed up diagnoses.
- 55% think it will improve outcomes.
Despite these hopes, only 16% said they currently use AI tools to help make clinical decisions.
What we know:
Nearly half (48%) of respondents reported using an AI tool for work—a sharp jump from 26% in last year’s report.
But most are turning to generalist platforms like ChatGPT (97%) rather than tools designed specifically for medical tasks (76%).

A person opens the ChatGPT app on their smartphone. A new survey shows many clinicians are using general AI tools like this one to support health care work. (Photo by Smith Collection/Gado/Getty Images)
Among countries surveyed, AI usage was highest in China (71%) and lowest in the U.S. (36%) and UK (34%).
What we don't know:
The report did not specify which medical AI tools clinicians are using or how outcomes compare between AI-assisted and traditional diagnosis.
It’s also unclear how many institutions plan to invest in AI training or governance improvements in the near future.
Why trust remains a major issue
The other side:
Even as AI adoption grows, skepticism remains—particularly around safety, regulation, and reliability.
Only 32% of clinicians said their institution provides adequate access to AI technologies, and even fewer (30%) reported receiving sufficient training. Just 29% felt their workplace had proper oversight or governance structures in place.
Clinicians said their trust in AI tools would improve if:
- The tools automatically cite references (68%)
- They are trained on peer-reviewed, high-quality data (65%)
- They use the most up-to-date medical resources (64%)
In the U.S. and UK, demand for factual accuracy was especially high, at 75% and 81% respectively.
What they're saying:
Jan Herzhoff, President of Elsevier Health, said: "As the healthcare industry continues to grapple with increased demands and limited resources, clinicians have identified many opportunities for AI to provide quality care faster and to help improve patient outcomes. This is a transformative time and we look forward to working alongside the healthcare community to harness the full potential of AI to deliver for patients."
What's next:
The survey highlights a clear gap between enthusiasm and execution. While many clinicians believe AI could improve care, most say their institutions have yet to provide adequate access, training, and oversight.
Bridging that gap may determine whether AI becomes a transformative force in medicine—or another missed opportunity.
The Source: This article is based on the 2025 Clinician of the Future survey conducted by Elsevier, which gathered responses from over 2,200 clinicians across 109 countries.