
ChatGPT Health wants to be your family doctor
AI systems aiming to operate in the healthcare sector are increasingly common, but there's a catch
Recently, OpenAI announced the launch of ChatGPT Health, a section of its famous chatbot designed to help people better understand and organize their medical history. The project stems from the fact that more and more users turn to ChatGPT to interpret test reports, symptoms, and other clinical data. The goal of Sam Altman's company is to provide a tool that gathers this information and helps make sense of it, making it easier to navigate a possible care pathway.
ChatGPT Health, for example, lets users connect their electronic medical records and certain health-focused apps, so that data and test results can be compared and interpreted over time. OpenAI has clarified, however, that the service will not make diagnoses or suggest therapies: its purpose is to help users navigate specific information, clarify doubts, and arrive at consultations with doctors better informed.
The project, the company specifies in a statement, was developed with the contribution of numerous doctors from various specialties, with the aim of providing responses that are understandable, cautious, and appropriate to the context. Initial access to the service will be limited to a small group of users, but it is expected to expand gradually. Health-related conversations will take place in a space separate from the traditional ChatGPT interface, and according to OpenAI, specific protections have been implemented to safeguard sensitive data: health information, for instance, would not be used to train the language models underlying the system.
What Is Unconvincing About AI in Healthcare
Last year, several media outlets specializing in technology and health reported the case of an AI system developed by Google that, when analyzing images from a brain CT scan, "invented" an anatomical structure that does not exist, producing a completely incorrect report. The episode became emblematic of the limits of these technologies: despite their great potential in healthcare, AI systems can generate misleading results if they are not used with adequate caution and human supervision.
The major companies developing AI systems, from OpenAI to Google itself, have nevertheless long claimed that these technologies could, in the future, help doctors make more accurate and reliable diagnoses. A significant portion of the scientific community, however, remains skeptical of this possibility, pointing to the current limitations of AI and the risks of premature or insufficiently controlled use.
Why Doctors Are Wary of ChatGPT Health and Similar Tools
In the past, some AI-based image-analysis systems have achieved remarkable results, detecting details that specialists had missed. Their behavior is not always reliable, however: AI can "hallucinate", producing incorrect or poorly grounded interpretations, with the risk of inaccurate assessments.
The major tech companies behind the best-known models, such as ChatGPT and Gemini, emphasize that these tools should be considered a support for doctors, not a replacement: final responsibility for diagnoses and treatments should always remain with healthcare professionals. Developers point to the rapid gains in accuracy achieved within a few years, contrasting them with a rate of human error in medicine that remains relatively constant and which, at least in part, could be reduced through the use of AI.
In short, the potential of systems like ChatGPT Health is real and there are promising examples, but part of the scientific community continues to warn against a "race for AI" like the one already seen in other fields, calling for far more careful timing and adoption criteria.














































