Apple Health's integration with ChatGPT Health has drawn criticism for serious misdiagnoses.
Apple Health's recent integration with ChatGPT Health is giving those already skeptical of AI's true capabilities even more reason to press their case. An investigation by The Washington Post revealed a host of errors and worrying inconsistencies in the chatbot's health assessments.
According to the report, the combination of Apple Health and ChatGPT Health not only fell short of initial expectations but could become a dangerous tool if users place too much trust in AI-generated diagnoses. In January, OpenAI introduced ChatGPT Health as a service offering accurate, useful health information. Users could securely connect their personal medical records and health-tracking apps such as Apple Health, Function, or MyFitnessPal, and in return get help understanding test results, preparing for doctor's appointments, building diet and exercise plans, or weighing suitable insurance options.
However, the Washington Post investigation exposed serious limitations in relying on ChatGPT Health to interpret personal health data, especially data drawn from Apple Health. Reporter Geoffrey Fowler gave ChatGPT Health access to his entire health history, including roughly 29 million steps and 6 million heart-rate measurements, and then asked the chatbot to assess his cardiovascular condition. The grade it returned was an F: a very poor rating.
Interestingly, when Fowler presented these results to his own doctor, the doctor dismissed them outright. According to the doctor, Fowler's risk of cardiovascular problems was so low that his insurance company might even refuse to pay for additional tests to "counter" the chatbot's conclusion.
Even more concerning, ChatGPT Health returned different results when Fowler repeated the same question multiple times: his cardiovascular rating swung between B and F, a significant lack of consistency. With such erratic variation, the service is, at least for now, virtually worthless as an aid to diagnosis or to improving one's health.
These findings also raise serious questions about Apple's ambitions to equip Apple Health with AI "superpowers." With health assessments that can change unpredictably after just a few follow-up questions, the risk of users misunderstanding their own condition and making inappropriate decisions should not be underestimated.