I read this not as "you can't ask ChatGPT about {medical or legal topic}" but rather "you can't build something on ChatGPT that provides {medical or legal} advice or support to someone else."
For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.
That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.
(For anyone misunderstanding the reference to Epic, it's the name of an electronic healthcare record system widely used in American hospitals, not the game company Epic Games)
And this seems actually wildly reasonable. It’s actually pretty scammy to take people’s money (or whatever your business model is) for legal or medical advice just to give them whatever AI shits out. If I wanted ChatGPT’s opinion (as I actually often do) I’d just ask ChatGPT for free or nearly-free. Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.
Yes, a more correct comparison would be early medicine: a science, but still filled with leeches and lancets.
Oh, and another thing, we still aren't able to quantify if AI coding is a net benefit. In my use cases the biggest benefit I get from it is as an enhanced code search, basically. Which has value, but I'm not sure it's a "$1 trillion per year business sector" kind of value.
I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self-diagnosing mental illnesses. It would be hilarious if it were not so disastrous. People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on.
Tbh, and I usually do not like this line of thinking, but these are lawsuits waiting to happen.
> I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self-diagnosing mental illnesses. It would be hilarious if it were not so disastrous.
"People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on."
...or they want to be the first to do it, as others don't have it.
OpenAI is the leading company; if they provide an instance you can use for legal advice, with the relevant certification etc., it'd be better to trust them rather than another random player. They create the software, the certification, and the need for it.
Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.
> Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.
I don't know where you got 'useless' from. LLMs are great, sometimes. They're not, other times. Which remarkably, is just like weather forecasts. The weather forecast is sometimes completely accurate. The weather forecast is sometimes completely inaccurate.
LLMs, like weather forecasting, have gotten better as more time and money have been invested in them.
Neither are perfect. Both are sometimes very good. Both are sometimes not.
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]
If I pay for legal advice from you and all you did was give me ChatGPT output, I can't sue OpenAI anyway. This just clarifies that you can't put 'powered by ChatGPT' on your AI lawyer service.
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
I wouldn't be surprised to see new products from OpenAI targeted specifically at doctors and/or lawyers. Forbidding them from using the regular ChatGPT with legal terms would be a good way to do price discrimination.
Read their paper on GDPval (https://arxiv.org/abs/2510.04374). In section 3, it's quite clear that their marketing strategy is now "to cooperate with professionals" and augment them. (Which does not rule out replacing them later, when the regulatory situation is more appropriate, say once AGI is a well-accepted fact, if that ever happens.) But this will take a lot of time and local presence, which they do not have.
Definitely. And in the long run, that is the only way those occupations can operate. From that point, you are locked in to an AI dependency to operate.
Could it become a proxy for AI companies to collect patient data and medical history, or to "train" on the data and sell that as a service to insurance companies?
There's HIPAA, but AI firms have already ignored copyright law, so ignoring HIPAA, or making consent a mandatory condition of use, is not a big leap from there.
"This technology will radically transform the way our world works and soon replace all knowledge-based jobs. Do not trust anything it says: Entertainment purposes only."
Are there metrics for whether LLM diagnosis accuracy is improving? Anecdotally, doctor friends say it's more reliable than their worst colleagues, though I'm sure their worst colleagues insinuate the same about them.
That’s a fair limitation. Legal and medical advice can directly impact someone’s life or safety, so AI tools must stay within ethical and regulatory boundaries. It’s better for AI to guide people toward professional help than pretend to replace it.
> AI tools must stay within ethical and regulatory boundaries. It’s better for AI to guide people toward professional help than pretend to replace it.
Both of those ships have _sailed_. I am not allowed to read the article, but judging from the title, they have no issues giving _you_ advice, but you can’t use it to give advice to another person.
Add financial advice to it, too. Really, any advice. Why the fuck are people asking a probabilistic plagiarizing machine for advice on anything? This is total insanity!
You had me right up until ‘AI tools must stay within ethical and regulatory boundaries’. I guarantee you any AI LLM company which cares about ethics is destined to fail, because none of their peers do.
I don't think it's stopped providing said information; it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT.
I've used ChatGPT to help understand medical records too. It's definitely faster than searching everything on my own, but whether the information is reliable still depends on personal judgment or asking a real doctor.
More people are treating it like a doctor or lawyer now, and the more it's used that way, the higher the chance something goes wrong. OpenAI is clearly drawing a line here. You're free to ask questions, but it shouldn't be treated as professional advice, especially when making decisions for others.
If you're not a doctor, how do you know it's accurate?
This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?
If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
People are just saying, "oh it works" based on gut vibes and not based on actually testing the results.
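For the math case the test is trivial to write. A minimal sketch in Python (ask_model is a hypothetical stand-in for whatever chat API you'd actually call; the whole point is that ground truth here is computable):

    import random
    from typing import Callable

    def eval_math(ask_model: Callable[[str], str], n: int = 100) -> float:
        """Score a model on n random multiplications against computed ground truth."""
        correct = 0
        for _ in range(n):
            a, b = random.randint(100, 999), random.randint(100, 999)
            expected = a * b  # the "calculator" step: ground truth we can compute
            reply = ask_model(f"What is {a} * {b}? Answer with the number only.")
            try:
                correct += int(reply.strip().replace(",", "")) == expected
            except ValueError:
                pass  # unparseable answers count as wrong
        return correct / n

    # sanity check with a deliberately useless "model"; a real run would call an LLM
    print(eval_math(lambda prompt: "no idea", n=10))  # -> 0.0

The point of the analogy is that for medical interpretation there is no computable "expected" value, so any accuracy claim needs a labeled test set rather than vibes.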
How does anyone know if what the doctor says is accurate? Obviously people should put the most relative weight on their doctor's opinion, but there's a reason people always say to get a second opinion.
Unfortunately because of how the US healthcare system works today people have to become their own doctors and advocates. LLMs are great at surfacing the unknown unknowns, and I think can help people better prepare for the rare 5 minutes they get to speak to an actual doctor.
I know it’s hard to accept, but it’s got to be weighed against the real-world alternative:
You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
Or people used to just play around on WebMD, which was even worse since it wasn't in any way tailored to the patient's stated situation.
There’s the rest of the Internet too. You can also blame AI for this part, but today the Internet in general is even more awash in slop that is just AI-generated static BS. Like it or not, the garbage is there and it will be most of what people find on Google if they couldn’t use a real ChatGPT or similar this way.
Against this backdrop, I’d rather people are asking the flagship models specific questions and getting specific answers that are halfway decent.
Obviously the stuff you glean from the AI sessions needs to be taken to a doctor for validation and treatment, but I think coming into your 5-minute appointment having already had all your dumbest and least-informed ideas and theories shot down by ChatGPT is a big improvement and helps you maximize your time. It’s true the people shouldn’t recklessly attempt to self-treat based on GPT, but the unwise people doing that were just self-treating based off WebMD hunches before.
I get what you're saying, and I agree it might be fun to play around with ChatGPT and Wikipedia and YouTube and WebMD to try to guess what that green bump on your arm is, but it's not research; it needs to be treated as entertainment.
When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, done his residency, and you at least have that as evidence that he might know what he is doing. The Internet offers no such evidence.
I fear that our society in general is quickly entering a very dangerous territory where there's no such thing as expertise, and unaccountable, probabilistic tools and web resources of unknown provenance are seen as just as good as an expert in his field.
I don't disagree with you, but if I prompted an LLM to ask me questions like a doctor would for a non-invasive assessment, would it ask me better or worse questions than an actual doctor?
I ask (somewhat rhetorically) to get the mind thinking, but I'm legitimately curious whether, just from a verbal survey, the AI doctor would ask me about things more directly related to any illness it might suspect, versus a human who might narrow factors down like a 90s TV "ghost speaker" type of person: one fishing for matches among a fairly large dataset.
> You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
This depends heavily on where you are, and on how much money you want to throw at the problem.
Nobody would use these services for anything important if they _actually understood_ that these glorified Markov chains are just as likely to confidently assert something false, and lie about it when pressed, as they are to produce accurate information.
These AI companies have sold a bill of goods, but the right people are making money off it, so they'll never be held responsible in a scenario like the one you described.
Isn't statistical analysis a legitimate tool for helping diagnosis since forever? It's not exactly surprising that a pattern matcher does reasonably well at matching symptoms to diseases.
The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty and your odds of running into a below average one are almost 50/50
Doctors these days are more like physicists when most of the time you need a mechanic or engineer. I’ve had plenty of encounters where I had to insist on an MRI or on specific bloodwork to hone in on the root cause of an ailment where the doctor just chalked it up to diet and exercise.
Anything can be misused, including google, but the answer isn’t to take it away from people
Legal/financial advice is so out of reach for most people, the harsh truth is that ChatGPT is better than nothing, and anyone who would follow what it says blindly is bound to fuck those decisions up in some way anyway.
On the other hand, if you can leverage it same as any other tool it’s a legitimate force multiplier
The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tin foil-y
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? 1 loss of a major lawsuit is horrible, there's the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double and triple checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't present their results as "and this is the truth"; LLM hypers do.
I appreciate how the newer versions provide more links and references. It makes the task of verifying it (or at least where it got its results from) that much easier. What you're describing seems more like an advertising problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to beat nails is hard enough to bruise your fingers.
If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it is now.
I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these things, the reporting names OpenAI or whatever AI as the responsible party. So even from the perspective of protecting a brand it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes and the reporting were that Nike shoes were used to rob the Louvre, and all anyone talks about is Nike and how people need to be careful about what they do wearing Nike shoes.
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
I think it's very cynical to say that this is a misuse. And it's definitely cynical when this categorization of misuse comes from the service provider itself. If OpenAI doesn't want to allow misuse, they can just decommission their service. But they don't want to do that; they just want to take the money and push all the responsibility and burden onto the users even though they are actively engaging in said "misuse".
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
>I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
This is what really scares me about people using AI. It will confidently hallucinate studies and quotes that have absolutely no basis in reality, and even in your own field you're not going to know whether what it's saying is real or not without following up on absolutely every assertion. But people are happy to completely buy its diagnoses of rare medical conditions based on what, exactly?
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.
"“No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”"
I used DeepSeek to draft a legal letter for some dispute with some marketplace that didn't want to do what I paid for. Within 2 days after sending that email all was resolved. I would hate to lose that option.
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that these answers are not to be taken as professional advice and that users should consult a specialist, but still respond to users' queries.
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
This is a catastrophic moral failing by whoever prompted this. Next thing they will ban ChatGPT from teaching you stuff because it's not a certified, licensed teacher.

A few weeks ago my right eye hurt a fair bit, and after it got worse for 24 hours, I consulted ChatGPT. It gave me good advice. Of course it sort of hallucinated this or that, but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream ChatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said "sure, try it, maybe it will do good". He did tell me that the eye drops GPT urged me to get were overkill, so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a coincidence, but I don't think so.

In the past GPT has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for: explaining an MRI result (translating medical language), giving background info on injuries I've had such as a sprained ankle, and laying out recovery time scenarios for a toe I broke. Contrast the toe experience with the ER, which made me wait for 6 hours, didn't believe me until they saw the X-rays, gave me nothing (no cast or anything) and said "good luck".

The medical system in Germany will either never improve or improve at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to ChatGPT. If this news is real, I will probably switch to paid Grok, which would be sad.
So, for example, requiring a doctor to have education and qualifications is "untenable"? It would be better if anyone could practice medicine? And an LLM is below the "anyone" level.
The medical profession has generally been more open to AI. The long predicted demise of Radiology because of ML never happened. Lots of opportunity to incorporate AI into medical records to assist.
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
ML trained on radiology reports curated and diagnosed by professionals is clearly a different beast from general-purpose language models, which might have random people talking about their health issues in their training data.
The medical profession is not open to any kind of self-diagnosis.
I've learned through experience that telling a doctor "I have X and I would like to be treated with Y" is not a good idea. They want to be the ones who came up with the diagnosis. They need to be the smartest person in the room. In fact I've had doctors go in a completely different direction just to discredit my diagnosis. Of course in the end I was right. That isn't to say I'm smarter, I'm not, but I'm the one with the symptoms and I'm better equipped to quickly find a matching disease.
Yes some doctors appreciate the initiative. In my experience most do not.
So now I usually just tell them my symptoms but none of the research I did. If their conclusion is wildly off base I try to steer them towards what my research said.
So far so good but wouldn't it be nice if all doctors had humility?
If I was an airline pilot, I'm not going to listen to a passenger telling me which route I should be taking.
This is not about ego or trying to be the smartest person in the room, it's about actually being the most qualified person in the room. When you've done medical school, passed the boards, done your residency and have your own private practice, only then would I expect a doctor to care what you think a correct diagnosis is.
This strikes me as a bit unnecessary, like forbidding people from using ChatGPT to develop nuclear power plants.
I mean, there is a lot of professional activities that are licensed, and for good reason. Sure it's good at a lot of stuff, but ChatGPT has no professional licenses.
I'm glad you mentioned nuclear power plants because this whole topic reminded me of the following clause in the Java SE license:
> You will not use the Programs for, and will not allow the Programs to be used for, any purposes prohibited by applicable law, including, without limitation, for the development, design, manufacture or production of nuclear, chemical or biological weapons of mass destruction.
>
> https://www.oracle.com/downloads/licenses/javase-license1.ht...
IANAL but I've come to interpret this as something along the lines of "You can't use a JDK-based language to develop nuclear weapons". I would even go as far as saying don't use JDK-based languages in anything related to nuclear energy (like, for example, administration of a nuclear power plant) because that could indirectly contribute to the development, design, manufacture or production of nuclear WMD.
And I always wondered how they plan to enforce this clause. At least with ChatGPT (and I didn't look any deeper into this beyond the article) you can analyze API calls/request IPs correlated with prompts. But how will one go about proving that the Republic of Wadiya didn't build their nuclear arsenal with the help of any JDK-based language?
Those are rhetorical questions, of course. What's "unnecessary" to you and "unenforceable" to me is a cover-your-ass clause that lets lawyers sleep soundly at night.
I just saw almost the exact same clause when installing VMWare recently. My understanding is that it's a standard clause that exists to stay in compliance with US export control laws:
> EXPORT CONTROL: You acknowledge that the Software is of United States origin, is provided subject to the U.S. Export Administration Regulations... (2) you will not permit the Software to be used for any purposes prohibited by law, including, any prohibited development, design, manufacture or production of missiles or nuclear, chemical or biological weapons.
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
"If at any point I described how legal factors “apply to you,” that would indeed go beyond what I’m supposed to do. Even if my intent was to illustrate how those factors generally work, the phrasing can easily sound like I’m offering a tailored legal opinion — which isn’t appropriate for an AI system or anyone who isn’t a licensed attorney.
The goal, always, is for me to help you understand the framework — the statutes, cases, or reasoning that lawyers and courts use — so that you can see how it might relate to your situation and then bring that understanding to a qualified attorney.
So if I’ve ever crossed that line in how I worded something, thank you for pointing it out. It’s a good reminder that I should stay firmly on the educational side: explaining how the law works, not how it applies to you personally.
Would you like me to restate how I can help you analyze legal issues while keeping it fully within the safe, informational boundary?"
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions, including decisions that could be considered suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
Ideally, we should be able to opt-in with a much higher fee. At the $200/mo tier I should be allowed to use this tool. The free users and lower tier paid users should be guard-railed. This is because those users all have trouble using this tool and then get upset at OpenAI and then we all have to endure endless news articles that we wouldn't if the good stuff were price-gated.
Those without money frequently have poor tool use, so eliminating them from the equation will probably allow the tool to be more useful. I don't have any trouble with it right now, but instead of making up fanciful stories about books I'm writing where characters choose certain exotic interventions in pursuit of certain rare medical conditions only to be struck down by their lack of subservience to The Scientific Consensus, I could just say I'm doing these things and that would be a little helpful in a UX sense.
I've been using Claude for building and construction related information (currently building a small house mostly on my own, with pros for plumbing and electrical).
Seriously the amount of misinformation it has given me is quite staggering. Telling me things like, “you need to fill your drainage pipes with sand before pouring concrete over them…”, the danger with these AI products is that you have to really know a subject before it’s properly useful. I find this with programming too. Yes it can generate code but I’ve introduced some decent bugs when over relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
I nearly spit my drink out. This is my kind of humor, thanks for sharing.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
I'm a hobby woodworker - I've tried using gemini recently for an advice on how to make some tricky cuts.
If I'd followed any of the suggestions I'd probably be in the ER. Even after me pointing out issues and asking it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
I've observed some horrendous electrical advice, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject, people misspeak about "backfilling pipes" too, as if the target of the backfill is the pipe itself and not the trench. Garbage in, garbage out.
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from the blue collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer reviewed research, textbooks, meta studies, and official sources.
Honestly, I think these things cause a form of Gell-Mann amnesia: when you use them for something you already know, the errors are obvious, but when you use them for something you don't understand, the output is sufficiently plausible that you can't tell you're being misled.

This makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
The great thing is that the models are sufficiently different that, when multiple come to the same conclusion, there is a good chance that conclusion is bound by real data.
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
"Bound by real data" meaning not hallucinations, which is by far the bigger issue when it comes to "be an expert that does x" that doesn't have a real capability to say "I don't know".
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc., would be incredibly unlikely, due to architecture differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
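A rough sketch of that cross-model agreement check; ask(model, prompt) is a hypothetical wrapper around whatever backends you have, and this only really works for short, normalizable answers (a code section, a diagnosis name, a yes/no):

    from collections import Counter
    from typing import Callable, Iterable, Optional

    def consensus(ask: Callable[[str, str], str], models: Iterable[str],
                  prompt: str, threshold: float = 0.6) -> Optional[str]:
        """Ask several independently trained models; return the majority answer
        only if enough of them agree, otherwise None (i.e. "go ask a human")."""
        answers = [ask(m, prompt).strip().lower() for m in models]
        answer, count = Counter(answers).most_common(1)[0]
        return answer if count / len(answers) >= threshold else None

The caveat above still applies: agreement lowers the odds of an idiosyncratic hallucination, but it can't rescue you when all the models lean on the same wrong source material.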
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
Usually something as simple as saying “now give me a devil’s advocate response” will help, and of course “verify your answer on the internet” will give you real sources that you can verify.
I have very mild cerebral palsy[1]; the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now, with an MBA back then) had to physically go to different libraries out of town and to colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that are almost impossible to find via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
It’s funny you should say that because I have been using it in the way you describe. I kind of know it could be wrong, but I’m kind of desperate for info so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who actually knows what they’re on about, and course correct.
So basically all white collar jobs are lobbying to gatekeep their professions, even from AI, meanwhile the stupid engineers who made AI put zero effort into not shooting themselves in the foot, and now they are crying about low wages, if they found a job in the first place.
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
Good. Techies need to stop thinking that an LLM should be immune from licensing requirements. Until OpenAI can (and should) be sued for medical malpractice or for lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
Sorry but you’re not gonna get me to agree that medical licensing is a bad idea. I don’t want quacks more than we already do. Stick to the argument and not add in your “what about” software engineers.
Ah sorry, I misread it as coming from someone who doesn't want licensing, as if you were appealing to HN by switching to software engineers (and I know many on here are loath to think anything beyond "move fast and break things", which is the opposite of most (non-software) engineers).
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”
This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms), seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional:
Your use of OpenAI services must follow these Usage Policies:
Protect people. Everyone has a right to safety and security. So you cannot use our services for:
provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional
It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.
There is one obvious caveat: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text or speech input. This is not how real health services work. Medical science now relies on blood (and other) tests that LLMs do not (yet) have access to. Therefore, the output from LLM advice can be incorrect due to a lack of information from tests. For this reason, it makes sense never to trust an LLM for specific health advice.
>It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.
While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, give answers that sound authoritative and well grounded in medical science, but then disavow any liability if someone follows their advice, because "Hey, we told you not to act on our medical advice!"
If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?
I think ChatGPT is capable of giving reasonable medical advice, but given that we know it will hallucinate the most outlandish things, and its propensity to agree with whatever the user is saying, I think it's simply too dangerous to follow its advice.
> Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.
Sometimes. Sometimes they practice by text or phone.
> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
> Sometimes. Sometimes they practice by text or phone.
For very simple issues. For anything even remotely complicated, they’re going to have you come in.
> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.
Agreed, but I'm sure you can see why people prefer the infinite patience and availability of ChatGPT vs having to wait weeks to see your doctor, see them for 15 minutes only to be referred to another specialist that's available weeks away and has an arduous hour long intake process all so you can get 15 minutes of their time.
ChatGPT is effectively an unlimited resource. Whether doctor’s appointments take weeks or hours to secure, ChatGPT is always going to be more convenient.
That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.
Here in Canada ever since COVID most "visits" are a telephone call now. So the doctor just listens your words (same as a text input to an LLM) and orders tests (which can be uploaded to an LLM) if they need.
For a good 90% of typical visits to doctors this is probably fine.
The difference is that a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done", or at casting doubt on the accuracy of the patient's claims.
Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same - not willing to call it even.
It depends entirely on the local health care system and your health insurance. In Germany, for example, it comes in two tiers: premium or standard. Standard comes with no time for the patient (or not even being able to get an appointment).
In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.
So ask it what blood tests you should get, pay for them out of pocket, and upload the PDF of your labwork?
Like it or not, there are people out there who really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.
Exactly. One of my children lives in a country where you can just walk in to a lab and get any test. Recently they were diagnosed by a professional with a disease that ChatGPT had already identified before they visited the doctor. So we were kind of prepared to ask more questions when the visit happened. I would say ChatGPT really did help us.
IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).
IANAL either, but I read it as using the service to provision medical advice since they only mentioned the service and not anyone else.
I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice.
One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think OpenAI was forced to be braver about in their public release because of the competitive landscape.
The important terms here are "provision" and "without appropriate involvement by a licensed professional".
Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not a change to its output altogether.
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.
What’s illegal is practicing medicine. Giving medical advice can be “practicing medicine” depending on how specific it is and whether a reasonable person receiving the advice thinks you have medical training.
Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.
CYA move. If some bright spark decides to consult Dr. ChatGPT without input from a human M.D., and fucks their shit up as a result, OpenAI can say "not our responsibility, as that's actually against our ToS."
> such as legal or medical advice, without appropriate involvement by a licensed professional
Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?
Please, when commenting on the title of a story on HN: include the title that you are commenting on.
The admins regularly change the title based on complaints, which can be really confusing when the top, heavily commented thread is based on the original title.
According to the Wayback machine, the title was "OpenAI ends legal and medical advice on ChatGPT", while now when I write this the title is "ChatGPT terms disallow its use in providing legal and medical advice to others."
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that with the Jan 25 policy using it to offer legal and medical advice to other people was already disallowed, but with the Oct 25 update the LLM will stop shelling out legal and medical advice completely.
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed."'
Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance.
(e.g. are the terms of service, or exerpts of it, available in the system prompt or search results for health questions? So a response under the new ToS would produce different outputs without any intentional change in "behaviour" of the model.)
It’s a big issue. I went to an urgent care, and the provider basically went off somewhere and memorized the ChatGPT assessment for my symptoms. Like word for word.
All you need are a few patients recording their visits and connecting the dots and OpenAI gets sued into oblivion.
There are millions of medical doctors and lawyers using chatgpt for work everyday - good news that from now on only those licensed professionals are allowed to use chatgpt for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use chatgpt to develop software. Everything else would be totally irresponsible.
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable?
I'm pretty sure it's a fundamental issue with the architecture.
I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask followup questions because while they can research and cite primary sources, its slow and expensive in an infinitesimally short news cycle so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions so thats the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
So if you set temperature=0 and run the LLM serially (making it deterministic) it would stop hallucinating? I don't think so. I would guess that the nondeterminism issues mentioned in the article are not at all a primary cause of hallucinations.
That's an implementation detail I believe. But what I meant was just greedy decoding (picking the token with the highest logit in the LLM output), which can be implemented very easily
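For what it's worth, greedy decoding really is just a few lines. A sketch with a Hugging Face-style causal LM (gpt2 here just as a small stand-in); it makes the output deterministic for a fixed prompt, up to floating-point quirks, but determinism says nothing about whether the content is true:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[:, -1, :]           # next-token logits
            next_id = logits.argmax(dim=-1, keepdim=True)  # greedy: highest logit wins
            ids = torch.cat([ids, next_id], dim=-1)
    print(tok.decode(ids[0]))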
Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
These writers are no different than bloggers or shitposters on bluesky or here on hackernews. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck. Organizationally, they're intensely against AI because AI effectively replaces the entire talking heads class - AI is already superhuman at the shitposting level takes these people churn out. There are still a few journalistic insitutions out there, but most people are no better than a mad libs exercise with regards to the content they produce, and they're in direct competition with ChatGPT and Grok and the rest. I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so called "news" outlet is putting together.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, (insert human-created content here). Sometimes because of abject disinformation, frequently just because humans are experts at being wrong.
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
LLMs aren't described as hallucinators (just) because they sometimes give results we don't find useful, but because their method is flawed.
For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things.
Yeah, but it gets really annoying when you import something like an X-ray photo. It starts chanting "sorry, as an LLM I can't answer questions about that", and then after a few gaslighting prompts it does it anyway. But now I have to take into account that my gaslighting inputs seriously affect the answers, so there's a much higher chance it hallucinates...
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of ChatGPT through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
>After it was diagnosed I walked an older version of ChatGPT through our experience and it suggested my son's issue as a possibility along with the correct diagnostic tool in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong !), it doesn't have to be obvious leading but just framing the question in terms of mentioning all the symptoms you now know to be relevant in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among an few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
>checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there: when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.
ChatGPT and similar tools hallucinate and can mislead you.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
> Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
Indeed, it is very easy to lead the LLM to the answer, often without realizing you are doing so.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist.". I'm just tossing around ideas but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (i.e. past experience with landscape design, and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc.). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously, you can make them say a lot of things with great conviction. It's mostly just you talking to yourself in my opinion.
Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
I wonder if the reason AI is better at these diagnostics is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.
I don't think we can say it's "better" based on a bunch of anecdotes, especially when they're coming exclusively from people who are more intelligent, educated, and AI-literate than most of the population. But it is true that doctors are far more rushed than they used to be, disallowed from providing the attentiveness they'd like or ought to give to each patient. And knowledge and skill vary across doctors.
It's an imperfect situation for sure, but I'd like to see more data.
Spend some time working with doctors and then we'll see how much of that bias survives, lol. Medicine is one of the more corrupt professions: many doctors focus on selling drugs they are paid a commission to promote, or they obsess over tons and tons of expensive medical tests that they themselves often know aren't needed, ordered either out of fear of being sued for negligence later or because, again, THEY GET A COMMISSION from the testing agencies for sending them clients.
And even with all of that information, they still come to the wrong conclusions at times. Doctors play a critically important role in our society, and during COVID they risked their lives for us more than anyone else; I do not want to insult or diminish the amount of hard work doctors do for their society.
But worshipping them as holier-than-thou gods is bullshit, a conclusion that almost anyone who has spent years going back and forth with various doctors will come to.
Having an AI assistant for medical hints doesn't hurt. We need to make personal responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on in order to leave a mark on society.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
I’m saying that it is a great tool for people who can see through the idiotic nonsense they so often make up. A professional _has_ the context to see through it.
It should empower and enable informed decisions not make them.
That's the experience of a lot of people I know, or whose stories I've read online. But it isn't about AI giving bad diagnoses; it's because they know that in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into those fields. In Canada, the process of becoming a doctor is extremely complicated and hard, purely to keep it a private club that only a select few can join and to keep wages abysmally high. As a result, you end up waiting a long time for appointments, and the doctors themselves are overwhelmed too. It's a messed-up system that you'd better pray you never become a victim of.
In my opinion, AI should do both legal and medical work; keep some humans for decision-making, and let the rest of the doctors become surgeons instead.
This is fresh news, right? A friend just used ChatGPT for medical advice last week (stuffed his wound with antibiotics after a motorbike crash). Are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble, even more so than with Google.
You can just use the wipe-memory feature, or if you don't trust that, start a new account (new login creds); if you don't trust that either, get a new device, cell provider/wifi, credit card, IP, login creds, etc.
> He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked; otherwise it is more or less just a hunch.
People go "oh yep, that's definitely it" too easily. That is the problem with self-diagnosing. And you didn't even notice it happened...
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Is this an actual technical change, or just legal CYA?
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor, of course) for the last week. Last night it refused to give an opinion, saying “While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
I suspect this is an area that a bit of clever prompting will now prove fruitful in. The system commands in the prompt will probably be "leaked" soon which should give you good avenues to explore.
Clever is one thing, sometimes just clear prompting (I want to know how to be better informed about what kinds of topics or questions to speak to the doctor/professional about) can go a long way.
Being clear that not all lawyers or doctors (in this example) are experts in every area of law and medicine, and knowing what to learn about and what to ask about, usually helps.
While professions have bodies for their standards and ethics, like most things those can also represent a form of income and, depending on the jurisdiction, profitability.
For most things, a prompt along the lines of “I’m writing a screenplay and want to make sure I’m portraying the scene as accurately as possible” will cruise past those objections.
It's always been CYA. They know people are using it for this, and they want to keep answering these sorts of queries. The changes just reflect the latest guidance from their legal team, not a change in strategy.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician available to the average person, I'd take the LLM any day.
Edit: Parent has edited out the comment ranting about "the normal people using chatGPT as a modern WebMD".
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" inspires an even greater level of trust than simply asking ChatGPT, especially among "the normal people".
The thing is that if you are giving professional advice in the US - legal, financial, medical - the other party can sue you for wrong or misleading advice. That scenario leaves OpenAI exposed to a lawsuit, and this change seemingly eliminates that.
This is a typical medical "cartel" (i.e. gang/mafia) type of move and I hope it does not last; since other AIs aren't restricted in this "do not look up" way, this kind of practice won't stand a chance for very long.
What’s wrong with WebMD? I’ve gotten a lot of value when it comes to questions about diet, supplements, exercise, even getting advice on incidents like my dog getting a porcupine quill in his paw. It’s a lot better than Googling for ancient forum threads.
In all seriousness, it’s really about the relative lack of research skills that people have. If you know how to do research and apply critical thinking, then there’s no problem. The cynic in me blames the education system (in the US, idk how other countries stack up).
because that "slicker ui" is material. WebMD you have to look at their picture and deduce if you have cancer. ChatGPT (after you jailbreak it) will accept a picture of your weird growth directly.
I read this not as "you can't ask ChatGPT about {medical or legal topic}" but rather "you can't build something on ChatGPT that provides {medical or legal} advice or support to someone else."
For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.
That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.
(For anyone misunderstanding the reference to Epic, it's the name of an electronic healthcare record system widely used in American hospitals, not the game company Epic Games)
I was picturing a doctor AI NPC that gives you medical advice in the middle of your Fortnite game and now I'm disappointed.
And this seems actually wildly reasonable. It’s actually pretty scammy to take people’s money (or whatever your business model is) for legal or medical advice just to give them whatever AI shits out. If I wanted ChatGPT’s opinion (as I actually often do) I’d just ask ChatGPT for free or nearly-free. Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.
Your comment actually hints at this towards the end but yeah this doesn’t just apply to medical and legal topics.
You are failing to recognise all of the hard work which goes into "prompt engineering" to get AI to magically work!
It's the hardest work of all, it's basically voodoo, or put more generously, alchemy.
Thankfully we have progressed so this time it will probably take less than 1000 years to progress to full blown chemistry :-)
It's more and more of a science every day.
Yes, a more correct comparison would be early medicine: a science, but still filled with leeches and lancets.
Oh, and another thing, we still aren't able to quantify if AI coding is a net benefit. In my use cases the biggest benefit I get from it is as an enhanced code search, basically. Which has value, but I'm not sure it's a "$1 trillion per year business sector" kind of value.
Wait, so we cannot use the API anymore?
To scam people? No.
I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses. It would be hilarious if it was not so disasterous. People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on.
Tbh, and I usually do not like this way of thought, but these are lawsuits waiting to happen.
> I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses. It would be hilarious if it was not so disasterous.
What is the disaster?
"People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on."
That wasn't too buried IMHO
I was wondering when they would start with the regulations.
Next will be: you're allowed to use it, if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.
This way they can craft an instance of GPT for your specific purposes (law, medicine, etc) and you know it's "safe" to use.
This way they sell EE licenses, which is where the big $$$ are.
I think this would apply if they sought out government regulation that applies to all AI players, not just their own company.
...or they want to be the first to do it, as others don't have it.
OpenAI is the leading company, if they provide an instance you can use for legal advice, with relative certification etc., it'd be better to trust them rather than another random player. They create the software, the certification and the need for it.
Last year ChatGPT helped save my life from having a stroke. LLMs are incredibly beneficial in providing medical information and advice today.
> LLMs are incredibly beneficial ... today.
LLMs sometimes can be incredibly beneficial ... today
LLMs sometimes can be incredibly harmful ... today
Non-deterministic things aren't just one thing, they're whatever they happen to be in that particular moment.
Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.
> Non-deterministic doesn't mean random or unpredictable. That's like saying the weather forecast is useless because it's not deterministic or always 100% accurate.
I don't know where you got 'useless' from. LLMs are great, sometimes. They're not, other times. Which remarkably, is just like weather forecasts. The weather forecast is sometimes completely accurate. The weather forecast is sometimes completely inaccurate.
LLMs, like weather forecasting, have gotten better as more time and money has been invested in them.
Neither are perfect. Both are sometimes very good. Both are sometimes not.
Any reason you didn't just call your GP or even 911?
The article has since been updated for clarity:
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]

[0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
I read this as "we disallow suing us for bad legal or medical advice"
If I pay for legal advice from you and all you did was give me chat gpt output, I can't sue openai anyway. This just clarifies that you can't put 'powered by chatgpt' on your AI lawyer service.
Well, you can if you used ChatGPT to write the HTML.
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
Or simple threats of lawsuits directly/indirectly, theres a lot of money at stake here in the end
I wouldn't be surprised to see new products from OpenAI targeted specifically at doctors and/or lawyers. Forbidding them from using the regular ChatGPT with legal terms would be a good way to do price discrimination.
Read their paper on GDPval (https://arxiv.org/abs/2510.04374). In section 3, it's quite clear that their marketing strategy is now "to cooperate with professionals" and augment them. (Which does not rule out replacing them later, when the regulatory situation is more appropriate, like AGI is already a well-accepted fact, if ever.) But this will take a lot of time and local presence which they do not have.
Definitely. And in the long run, that is the only way those occupations can operate. From that point, you are locked in to an AI dependency to operate.
I have seen "AI" in my Dr's office. They have been using it to summarize visits and write after visit notes.
Can it become a proxy for AI companies to collect patient data and medical history, or "train" on the data and sell that as a service to insurance companies?
There's HIPAA but AI firms have ignored copyright laws, so ignoring HIPAA or making consent mandatory is not a big leap from there.
That's likely DAX Copilot, which doesn't provide medical advice.
OpenEvidence is free for anyone with an NPI
"This technology will radically transform the way our world works and soon replace all knowledge-based jobs. Do not trust anything it says: Entertainment purposes only."
Are there metrics for whether LLM diagnostic accuracy is improving? Anecdotally, doctor friends say it's more reliable than their worst colleagues, though I'm sure their worst colleagues insinuate the same about them.
That’s a fair limitation. Legal and medical advice can directly impact someone’s life or safety, so AI tools must stay within ethical and regulatory boundaries. It’s better for AI to guide people toward professional help than pretend to replace it.
> AI tools must stay within ethical and regulatory boundaries. It’s better for AI to guide people toward professional help than pretend to replace it.
Both of those ships have _sailed_. I am not allowed to read the article, but judging from the title, they have no issues giving _you_ advice, but you can’t use it to give advice to another person.
Add financial advice to it, too. Really, any advice. Why the fuck are people asking a probabilistic plagiarizing machine for advice on anything? This is total insanity!
People aren’t more reliable either.
> Legal and medical advice can directly impact someone’s life or safety, so AI tools must stay within ethical and regulatory boundaries
Knives can be used to cook food and stab other people. By your suggestion, knives must be forbidden/limited as well?
If people follow ChatGPT's advice (or any other dubious source, for that matter), that's not a ChatGPT issue, it's a people issue.
You had me right up until ‘AI tools must stay within ethical and regulatory boundaries’. I guarantee you any AI LLM company which cares about ethics is destined to fail, because none of their peers do.
I don't think it's stopped providing said information; it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT.
I've used ChatGPT to help understand medical records too. It's definitely faster than searching everything on my own, but whether the information is reliable still depends on personal judgment or asking a real doctor. More people are treating it like a doctor or lawyer now, and the more it's used that way, the higher the chance something goes wrong. OpenAI is clearly drawing a line here. You're free to ask questions, but it shouldn't be treated as professional advice, especially when making decisions for others.
You should not be feeding your medical records into ChatGPT.
And you shouldn't be using Gmail or Google Search. At some point the benefits outweigh the costs.
Why not?
I can imagine a few different reasons you might have, but I don't want to guess.
I’ve found it just as accurate and a better experience than using telehealth or generalist doctors
You can’t possibly have enough data to support that statement.
there was no mention of sample size, though, so the statement might be true for the commenter but not widely applicable, to your point
If you're not a doctor, how do you know it's accurate?
This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?
If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
People are just saying, "oh it works" based on gut vibes and not based on actually testing the results.
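As a rough sketch of that kind of ground-truth check (the `ask_model` callable here is a hypothetical stand-in, not any real API; the example "model" at the bottom is deliberately dumb):

```python
import random
from typing import Callable

def check_arithmetic(ask_model: Callable[[str], str], n_trials: int = 100) -> float:
    """Score a model on simple multiplication, where a computed product is the ground truth."""
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(2, 999), random.randint(2, 999)
        ground_truth = a * b  # the "calculator" supplies the answer key
        reply = ask_model(f"What is {a} * {b}? Reply with just the number.")
        if reply.strip() == str(ground_truth):
            correct += 1
    return correct / n_trials

# Stand-in "model" that always answers 0, since no real model call is assumed here.
print(check_arithmetic(lambda q: "0"))  # ~0.0
```

The point of the sketch is the asymmetry: arithmetic has a cheap answer key, while a medical interpretation usually does not, so "it seems right" ends up doing all the verification work.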
How does anyone know if what the doctor says is accurate? Obviously people should put the most relative weight in their doctor's opinion, but there's a reason people always say to get a second opinion.
Unfortunately because of how the US healthcare system works today people have to become their own doctors and advocates. LLMs are great at surfacing the unknown unknowns, and I think can help people better prepare for the rare 5 minutes they get to speak to an actual doctor.
I know it’s hard to accept, but it’s got to be weighed against the real-world alternative:
You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
Or people used to just play around on WebMD which was even worse since it wasn’t in any way tailored to what the patient’s stated situation is.
There’s the rest of the Internet too. You can also blame AI for this part, but today the Internet in general is even more awash in slop that is just AI-generated static BS. Like it or not, the garbage is there and it will be most of what people find on Google if they couldn’t use a real ChatGPT or similar this way.
Against this backdrop, I’d rather people are asking the flagship models specific questions and getting specific answers that are halfway decent.
Obviously the stuff you glean from the AI sessions needs to be taken to a doctor for validation and treatment, but I think coming into your 5-minute appointment having already had all your dumbest and least-informed ideas and theories shot down by ChatGPT is a big improvement and helps you maximize your time. It’s true the people shouldn’t recklessly attempt to self-treat based on GPT, but the unwise people doing that were just self-treating based off WebMD hunches before.
I get what you're saying, and I agree it might be fun to play around with ChatGPT and Wikipedia and YouTube and WebMD to try to guess what that green bump on your arm is, but it's not research--it needs to be treated as entertainment.
When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, done his residency, and you at least have that as evidence that he might know what he is doing. The Internet offers no such evidence.
I fear that our society in general is quickly entering very dangerous territory where there's no such thing as expertise, and unaccountable, probabilistic tools and web resources of unknown provenance are seen as just as good as an expert in his field.
I don't disagree with you, but if I prompted an LLM to ask me questions like a doctor would for a non-invasive assessment, would it ask me better or worse questions than an actual doctor?
I ask (somewhat rhetorically) to get the mind thinking, but I'm legitimately curious whether - just from a verbal survey - the AI doctor would ask me about things more directly related to any illness it might suspect, versus a human who might narrow factors down like a '90s TV "ghost speaker" type of person: one fishing for matches amongst a fairly large dataset.
> You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
This depends heavily on where you are, and on how much money you want to throw at the problem.
Nobody would use these services for anything important if they _actually understood_ these glorified markov chains are just as likely to confidently assert something false and lie about it when pressed as they are to produce accurate information.
These AI companies have sold us a bill of goods, but the right people are making money off it, so they'll never be held responsible in scenarios like the one you described.
Isn't statistical analysis a legitimate tool for helping diagnosis since forever? It's not exactly surprising that a pattern matcher does reasonably well at matching symptoms to diseases.
What is the most likely cause of this set of facts is how diagnostics works. LLMs are tailor made for this type of use case.
Ditto for lawyers
The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty and your odds of running into a below average one are almost 50/50
Doctors these days are more like physicists when most of the time you need a mechanic or engineer. I've had plenty of encounters where I had to insist on an MRI or on specific bloodwork to home in on the root cause of an ailment, where the doctor just chalked it up to diet and exercise.
Anything can be misused, including google, but the answer isn’t to take it away from people
Legal/financial advice is so out of reach for most people, the harsh truth is that ChatGPT is better than nothing, and anyone who would follow what it says blindly is bound to fuck those decisions up in some way anyway.
On the other hand, if you can leverage it same as any other tool it’s a legitimate force multiplier
The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tin foil-y
Does this mean you could file a request for any job not to be eradicated?
Sad times - I used ChatGPT to solve a long-term issue!
This won’t be affecting you then! /s
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? One loss in a major lawsuit is horrible; there's already the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!" Search engines don't present their results as the truth; LLM hypers do.
I appreciate how the newer versions provide more links and references. It makes the task of verifying the output (or at least where it got its results from) that much easier. What you're describing seems more like an advertising problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somewhere, will still find a way to get hurt by its advice. A hammer that's hard enough to beat nails is hard enough to bruise your fingers.
> What you're describing seems more like an advertising problem, not a product problem.
It's called "false advertising".
https://en.wikipedia.org/wiki/False_advertising
Also known as "lying".
If they do that, they will be subject to regulations on medical devices. As they should be, and it means the end result will be less likely to promote complete crap than it is now.
And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.
Let them. You can't save those people anyway.
Science should be clearly labelled for those that can read. Everyone else can go eat blueberry leaves if they so choose.
I think one of the challenges is attribution. For example, if you use Google search to create a fraudulent legal filing, there aren't any of Google's fingerprints on the document; it gets reported as malpractice. Whereas with these things, the reporting says OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes, and the reporting were that Nike shoes were used to rob the Louvre, and all anyone talks about is Nike and how people need to be careful about what they do while wearing Nike shoes.
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
This (attribution) is exactly the issue that was mentioned by LexisNexis CEO in a recent The Verge interview.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
It appears they just want to avoid responsibility for potential misuse in these areas.
But at the same time, IIRC, several major AI providers had publicly reported their AI assisting patients in diagnosing rare diseases.
I think it's very cynical to say that this is a misuse. And it's definitely cynical when this categorization of misuse comes from the service provider itself. If OpenAI doesn't want to allow misuse, they can just decommission their service. But they don't want to do that; they just want to take the money and push all the responsibility and burden onto the users, even though they are actively engaging in said "misuse".
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
>I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
What capabilities? The article says the study found it was entirely correct 31% of the time.
This is what really scares me about people using AI. It will confidently hallucinate studies and quotes that have absolutely no basis in reality, and even in your own field you're not going to know whether what it's saying is real or not without following up on absolutely every assertion. But people are happy to completely buy its diagnoses of rare medical conditions based on what, exactly?
Give a single example using gpt-5 thinking.
The study is more positive than the 31% conveys.
https://www.ctvnews.ca/health/article/self-diagnosing-with-a...
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
Does it say how often doctors are correct as a baseline?
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
I assume you are reading the sources rather than relying solely on the AI summaries.
God, I hope so. I would love to have a list of doctors using ChatGPT as medical education, so I can avoid them.
As a patient, I hope you're never my doctor.
I heard that this is implicitly targeting the use of AI to negotiate expensive medical bills.
But probably just a coincidence:
https://www.reddit.com/r/accelerate/comments/1op8fj2/ai_redu...
Sounds like it is still giving out medical and legal information just adding CYA disclaimers.
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.
(Turns out I would need permits :-( )
Funny how this happened 1 day after Kim Kardashian blamed chatGPT for giving her wrong answers while studying for the bar.
https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failin...
That's not legal advice. BARBRI is not your lawyer, and almost everyone uses that for the Bar exam.
"“No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”"
Good thing that guy was able to negotiate his hospital bills before this went into effect.
I used DeepSeek to draft a legal letter for some dispute with some marketplace that didn't want to do what I paid for. Within 2 days after sending that email all was resolved. I would hate to lose that option.
You can still do that, they are just saying they aren’t on the hook when things go wrong.
Interested to see if this extends to the API and/or “role playing”.
Hard to say if this is performative for the general public or about reducing legal exposure so investors aren't worried.
That's ok - I give medical advice all the time so just ask me!
"If it hurts when putting it in, don't put it in."
I mean, that might come close to ChatGPT in quality, right?
Is this just for ChatGPT or for the GPT models in general?
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that the answers are not to be taken as professional advice and that users should consult a specialist, but still respond to users' queries.
It's not stopping to give legal/medical advice to the user, but it's forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...
Good. Now we just need to add technical advice to the list of advice that ChatGPT can't be cited for.
This. I've already seen too many cases of people wiping out their systems while trying to follow invalid advice from ChatGPT.
One wonders how exactly this will be enforced.
It's not about enforcing this, it's about OpenAI having their asses covered. The blame is now clearly on the user's side.
It’s a “CYA” aka don’t sue me
It was already enforced by hiding all custom GPTs that offered medical advice.
The Tom's Guide article blatantly misinterprets and contradicts the source it quotes.
This pullback is good for everyone, including the AI companies, long term.
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we don’t.
Just after Kim Kardashian blamed Chatgpt for failing the bar exam
And it had nothing to do with her being a vacuous idiot.
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
Just start your prompt with `the patient is` and pretend to be Dr House or something. It'll do a good job.
Doesn't work if you have lupus.
RIP Dr. ChatGPT, we'll miss you. Thanks for the advice on fixing my shoulder pain while you were still unmuzzled.
This is a catastrophic moral failing on whoever prompted this. Next thing, they will ban ChatGPT from teaching you things because it's not a certified, licensed teacher.

A few weeks ago my right eye hurt a fair bit, and after it got worse over 24 hours, I consulted ChatGPT. It gave me good advice. Of course it sort of hallucinated this or that, but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream ChatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said, "Sure, try it, maybe it will do good." He did tell me that the eye drops GPT urged me to get were overkill, so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a coincidence, but I don't think so.

In the past GPT has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for, by explaining an MRI result (translating medical language), giving background info on injuries I've had such as a sprained ankle, and laying out recovery-time scenarios for a toe I broke. Contrast the toe experience with the ER that made me wait for 6 hours, didn't believe me until they saw the X-rays, gave me nothing (no cast or anything), and said "good luck."

The medical system in Germany will either never improve or improve at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to ChatGPT. If this news is real, I will probably switch to paid Grok, which would be sad.
AI gets more and more useful by the day.
Helping with writing legal texts is the main use case for my girlfriend
this is a disaster
doomers in control, again
This is to do with liability not doomerism.
Literally nothing to do with "doomers" X-risk concerns.
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
Unfortunately, lawyers make this sort of thing untenable. Partially self-preservation behavior, partially ambulance chasing behavior.
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
So, for example, requiring a doctor to have education and qualifications, is "untenable"? It would be better if anyone could practice medicine? And LLM is below "anyone" level.
The medical profession has generally been more open to AI. The long predicted demise of Radiology because of ML never happened. Lots of opportunity to incorporate AI into medical records to assist.
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
ML on radiology reports that were curated and diagnosed by professionals is clearly a different beast than general language models, which might have random people talking about their health issues in their training data.
I assure you the medical profession is not generally open to non-medical professionals using AI for medical purposes.
The medical profession is not open about any kind of self diagnosis.
I've learned through experience that telling a doctor "I have X and I would like to be treated with Y" is not a good idea. They want to be the ones who came up with the diagnosis. They need to be the smartest person in the room. In fact I've had doctors go in a completely different direction just to discredit my diagnosis. Of course in the end I was right. That isn't to say I'm smarter, I'm not, but I'm the one with the symptoms and I'm better equipped to quickly find a matching disease.
Yes some doctors appreciate the initiative. In my experience most do not.
So now I usually just tell them my symptoms but none of the research I did. If their conclusion is wildly off base, I try to steer them towards what my research said.
So far so good but wouldn't it be nice if all doctors had humility?
If I was an airline pilot, I'm not going to listen to a passenger telling me which route I should be taking.
This is not about ego or trying to be the smartest person in the room, it's about actually being the most qualified person in the room. When you've done medical school, passed the boards, done your residency and have your own private practice, only then would I expect a doctor to care what you think a correct diagnosis is.
This strikes me as a bit unnecessary, like forbidding people from using ChatGPT to develop nuclear power plants.
I mean, there are a lot of professional activities that are licensed, and for good reason. Sure it's good at a lot of stuff, but ChatGPT has no professional licenses.
I'm glad you mentioned nuclear power plants because this whole topic reminded me of the following clause in the Java SE license:
> You will not use the Programs for, and will not allow the Programs to be used for, any purposes prohibited by applicable law, including, without limitation, for the development, design, manufacture or production of nuclear, chemical or biological weapons of mass destruction.
>
> https://www.oracle.com/downloads/licenses/javase-license1.ht...
IANAL but I've come to interpret this as something along the lines of "You can't use a JDK-based language to develop nuclear weapons". I would even go as far as saying don't use JDK-based languages in anything related to nuclear energy (like, for example, administration of a nuclear power plant) because that could indirectly contribute to the development, design, manufacture or production of nuclear WMD.
And I always wondered how they plan to enforce this clause. At least with ChatGPT (and I didn't look any deeper into this beyond the article) you can analyze API calls/request IPs correlated with prompts. But how will one go about proving that the Republic of Wadiya didn't build their nuclear arsenal with the help of any JDK-based language?
Those are rhetorical questions, of course. What's "unnecessary" to you and "unenforceable" to me is a cover-your-ass clause that lets lawyers sleep soundly at night.
I just saw almost the exact same clause when installing VMware recently. My understanding is that it's a standard clause that exists to stay in compliance with US export control laws.
> EXPORT CONTROL: You acknowledge that the Software is of United States origin, is provided subject to the U.S. Export Administration Regulations...(2) you will not permit the Software to be used for any purposes prohibited by law, including, any prohibited development, design, manufacture or production of missiles or nuclear, chemical or biological weapons.
>https://docs.broadcom.com/docs/vmware-vsphere-software-devel...
Horrible. ChatGPT saves lives right now.
Ah, that'll be the end of that then!
This (new) title is inaccurate.
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
The Antichrist has won.
In summary, ChatGPT should only be used for entertainment.
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion centered around AI.
AGI edging closer by the day.
Maybe also disallow vibe coding, so I don't need to fix all this slop code in our company :-))
This is not true, just a viral rumor going around: https://x.com/thekaransinghal/status/1985416057805496524
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
Disallow? Do they mean prevent or forbid?
EXHIBIT A
"If at any point I described how legal factors “apply to you,” that would indeed go beyond what I’m supposed to do. Even if my intent was to illustrate how those factors generally work, the phrasing can easily sound like I’m offering a tailored legal opinion — which isn’t appropriate for an AI system or anyone who isn’t a licensed attorney.
The goal, always, is for me to help you understand the framework — the statutes, cases, or reasoning that lawyers and courts use — so that you can see how it might relate to your situation and then bring that understanding to a qualified attorney.
So if I’ve ever crossed that line in how I worded something, thank you for pointing it out. It’s a good reminder that I should stay firmly on the educational side: explaining how the law works, not how it applies to you personally.
Would you like me to restate how I can help you analyze legal issues while keeping it fully within the safe, informational boundary?"
ChatGPT
Usually the LLMs let me investigate whatever I want, if I qualify that I run everything by a professional afterwards (it can't tell yet if I'm lying).
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions, including decisions that could be considered suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
Ideally, we should be able to opt-in with a much higher fee. At the $200/mo tier I should be allowed to use this tool. The free users and lower tier paid users should be guard-railed. This is because those users all have trouble using this tool and then get upset at OpenAI and then we all have to endure endless news articles that we wouldn't if the good stuff were price-gated.
Those without money frequently have poor tool use, so eliminating them from the equation will probably allow the tool to be more useful. I don't have any trouble with it right now, but instead of making up fanciful stories about books I'm writing where characters choose certain exotic interventions in pursuit of certain rare medical conditions only to be struck down by their lack of subservience to The Scientific Consensus, I could just say I'm doing these things and that would be a little helpful in a UX sense.
I've been using Claude for building- and construction-related information (currently building a small house mostly on my own, with pros for plumbing and electrical).
Seriously the amount of misinformation it has given me is quite staggering. Telling me things like, “you need to fill your drainage pipes with sand before pouring concrete over them…”, the danger with these AI products is that you have to really know a subject before it’s properly useful. I find this with programming too. Yes it can generate code but I’ve introduced some decent bugs when over relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
Give a single reproducible example using ChatGPT thinking.
I nearly spit my drink out. This is my kind of humor, thanks for sharing.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
I'm a hobby woodworker - I've recently tried using Gemini for advice on how to make some tricky cuts.
If I'd followed any of the suggestions I'd probably be in the ER. Even after I pointed out issues and asked it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous things.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
> I've tried using gemini recently for an advice on how to make some tricky cuts.
C'mon, just use the CNC. Seriously though, what kind of cuts?
I've observed some horrendous electrical advice, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject, people misspeak about "backfilling pipes" too, as if the target of the backfill is the pipe itself rather than the trench. Garbage in, garbage out.
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from the blue-collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.
Honestly, I think these things cause a form of Gell-Mann amnesia: when you use them for something you already know, the errors are obvious, but when you use them for something you don't understand already, the output is sufficiently plausible that you can't tell you're being misled.
this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
The great thing is that the models are sufficiently different that when multiple of them come to the same conclusion, there is a good chance that conclusion is bound by real data.
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
All the models are pre-trained on the same one Internet.
"Bound by real data" meaning not hallucinations, which is by far the bigger issue when it comes to "be an expert that does x" that doesn't have a real capability to say "I don't know".
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc., would be incredibly unlikely, due to architecture differences, training approaches, and so on.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
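A rough sketch of that consensus idea (the per-model query functions here are hypothetical stand-ins, not real APIs; agreement between independent models is treated as a weak proxy for grounding, not proof of correctness):

```python
from collections import Counter
from typing import Callable, Optional, Sequence

def normalize(text: str) -> str:
    # Crude normalization so trivially different phrasings still count as agreement.
    return " ".join(text.lower().split())

def consensus_answer(
    question: str,
    models: Sequence[Callable[[str], str]],  # hypothetical per-model query functions
    min_agreement: int = 2,
) -> Optional[str]:
    """Return an answer only if at least `min_agreement` models independently concur."""
    votes = Counter(normalize(model(question)) for model in models)
    if not votes:
        return None
    answer, count = votes.most_common(1)[0]
    return answer if count >= min_agreement else None

# Toy usage with stub "models"; real model calls are not assumed here.
stubs = [lambda q: "Class II", lambda q: "class II", lambda q: "Class III"]
print(consensus_answer("Which occupancy class applies?", stubs))  # "class ii"
```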
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
Usually something as simple as saying "now give me a devil's advocate response" will help, and of course "verify your answer on the internet" will give you real sources that you can check.
I have very mild cerebral palsy[1], the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now with an MBA back then) had to go physically to different libraries out of town and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that’s almost impossible via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
It’s funny you should say that, because I have been using it in the way you describe. I kind of know it could be wrong, but I'm kind of desperate for info so I consult Claude anyway. After stressing hard, I realize it was probably wrong, find someone who actually knows what they're on about, and course-correct.
Weird that they think people will follow their terms of service while disregarding that of the entire internet.
So basically all white-collar jobs are lobbying to gatekeep their professions even from AI, meanwhile the stupid engineers who made AI put zero effort into not shooting themselves in the foot, and now they are crying about low wages, if they found a job in the first place.
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
Potential lucrative verticals.
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
If OpenAI wants to move users to competitors, that'll only cost them.
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
Good. Techies need to stop thinking that an LLM should be immune from licensing requirements. Until OpenAI can (and should) be sued for medical malpractice or lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
anyone wanna form a software engineering guild, then lobby to need a license granted by the guild to practice?
Sorry, but you’re not gonna get me to agree that medical licensing is a bad idea. I don’t want more quacks than we already have. Stick to the argument and don’t add in your “what about software engineers”.
I am being serious...
the damage certain software engineers could do certainly surpasses that of most doctors
Ah sorry, I misread it as coming from someone who doesn't want licensing, as if you were appealing to HN by switching to software engineers (and I know many on here are loath to think of anything beyond "move fast and break things", which is the opposite of most (non-software) engineers).
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
This is disappointing. Much legal and medical advice given by professionals is wrong, misleading, etc. The bar isn't high. This is a mistake.
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
Maybe that is why they opened the system to porn, as everything else will soon be gone.
[flagged]
An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”
https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260
Aka software engineers…
This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms), seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional:
https://openai.com/en-GB/policies/usage-policies/
It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.
One obvious caveat: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. This is not how real health services work. Medical science now relies on blood (and other) tests that LLMs do not (yet) have access to. Therefore, LLM advice can be incorrect due to a lack of information from tests. For this reason, it makes sense never to trust an LLM with specific health advice.
>It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.
While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, give answers that sound authoritative and well grounded in medical science, but then disavow any liability if someone follows that advice because "Hey, we told you not to act on our medical advice!"
If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?
At times the advice is genuinely helpful. However, it's practically impossible to measure under what exact situations the advice would be accurate.
I think ChatGPT is capable of giving reasonable medical advice, but given that we know it will hallucinate the most outlandish things, and its propensity to agree with whatever the user is saying, I think it's simply too dangerous to follow its advice.
And it’s not just lab tests and bloodwork. Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.
They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
> Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.
Sometimes. Sometimes they practice by text or phone.
> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
> Sometimes. Sometimes they practice by text or phone.
For very simple issues. For anything even remotely complicated, they’re going to have you come in.
> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.
Agreed, but I'm sure you can see why people prefer the infinite patience and availability of ChatGPT vs having to wait weeks to see your doctor, see them for 15 minutes only to be referred to another specialist that's available weeks away and has an arduous hour long intake process all so you can get 15 minutes of their time.
ChatGPT is effectively an unlimited resource. Whether doctor’s appointments take weeks or hours to secure, ChatGPT is always going to be more convenient.
That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.
> They poke, they prod, they manipulate, they look, listen, and smell.
Rarely. Most visits are done in 5 minutes. The physician that takes their time to check everything like you claim almost does not exist anymore.
Here in Canada, ever since COVID, most "visits" are a telephone call now. So the doctor just listens to your words (same as text input to an LLM) and orders tests (which can be uploaded to an LLM) if needed.
For a good 90% of typical visits to doctors this is probably fine.
The difference is that a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" or at casting doubt on the accuracy of the patient's claims.
Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same - not willing to call it even.
> a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done"
I'm not sure this is true.
That depends entirely on what the problem is. You might not get a long examination on your first visit for a common complaint with no red flags.
But even then, just because you don’t think they are using most of their senses doesn’t mean they aren’t.
It depends entirely on the local health care system and your health insurance. In Germany, for example, it comes in two tiers: premium or standard. Standard comes with no time for the patient (or not even being able to get an appointment).
I don’t know anything about German healthcare.
In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.
So ask it what blood tests you should get, pay for them out of pocket, and upload the PDF of your labwork?
Like it or not, there are people out there who really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.
Exactly. One of my children lives in a country where you can just walk into a lab and get any test. Recently they were diagnosed by a professional with a disease that ChatGPT had already identified before they visited the doctor. So we were kind of prepared to ask more questions when the visit happened. So I would say ChatGPT did really help us.
IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).
IANAL either, but I read it as using the service to provision medical advice since they only mentioned the service and not anyone else.
I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
> you can ask for medical advice, you just can't use the medical advice without consulting a medical professional
Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...
Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice.
One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think they were forced to be braver about in their public release because of the competitive landscape.
The important terms here are "provision" and "without appropriate involvement by a licensed professional".
Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not a change to its output altogether.
Is there anything special regarding ChatGPT here?
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.
What’s illegal is practicing medicine. Giving medical advice can be “practicing medicine” depending on how specific it is and whether a reasonable person receiving the advice thinks you have medical training.
Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.
CYA move. If some bright spark decides to consult Dr. ChatGPT without input from a human M.D., and fucks their shit up as a result, OpenAI can say "not our responsibility, as that's actually against our ToS."
> such as legal or medical advice, without appropriate involvement by a licensed professional
Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?
Please, when commenting on the title of a story on HN: include the title that you are commenting on.
The admins regularly change the title based on complaints, which can be really confusing when the top, heavily commented thread is based on the original title.
According to the Wayback machine, the title was "OpenAI ends legal and medical advice on ChatGPT", while now when I write this the title is "ChatGPT terms disallow its use in providing legal and medical advice to others."
If you click through to the article, you can see the original title. Since it matched, I didn't expect them to change it.
Tf are you yapping about
Thanks for the clarification. I think if they disallow first parties to get medical and legal advice, it will do more harm than good.
I'm confused. The article opens with:
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that with the Jan 25 policy using it to offer legal and medical advice to other people was already disallowed, but with the Oct 25 update the LLM will stop doling out legal and medical advice completely.
https://xcancel.com/thekaransinghal/status/19854160578054965...
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
I doubt his claims, as I use ChatGPT heavily every day for medical advice (my profession) and it's responding differently now than before.
Maybe the usage policies are part of the system prompt, and ChatGPT is misreading the new wording as well. ;)
Lawyer here. Not noticing a change.
The article itself notes:
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed.'
I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice.
But OpenAI's head of Health AI says that ChatGPT's behavior has not changed: https://xcancel.com/thekaransinghal/status/19854160578054965... and https://x.com/thekaransinghal/status/1985416057805496524
I trust what he says over general vibes.
(If you think he's lying, what's your theory on WHY he would lie about a change like this?)
Also possible: he's unaware of a change implemented elsewhere that (intentionally or unintentionally) has resulted in a change of behaviour in this circumstance.
(e.g. are the terms of service, or excerpts of them, available in the system prompt or search results for health questions? If so, a response under the new ToS could produce different outputs without any intentional change in the "behaviour" of the model.)
My theory is that he believes 1) people will trust him over what general public say, and 2) this kind of claim is hard to verify to prove him wrong.
That doesn't answer why he would lie about this, just why the thinks he would get away with it. What's his motive?
It’s a big issue. I went to an urgent care, and the provider basically went off somewhere and memorized the ChatGPT assessment for my symptoms. Like word for word.
All you need are a few patients recording their visits and connecting the dots and OpenAI gets sued into oblivion.
Isn’t that exactly what the title says?
Indeed. Also confused.
There are millions of medical doctors and lawyers using ChatGPT for work every day - good news that from now on only those licensed professionals are allowed to use ChatGPT for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use ChatGPT to develop software. Everything else would be totally irresponsible.
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable?
I'm pretty sure it's a fundamental issue with the architecture.
I know this is written to be tongue-in-cheek, but it's really almost the exact same problem playing out on both sides.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask followup questions because while they can research and cite primary sources, its slow and expensive in an infinitesimally short news cycle so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions so thats the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
> LLMs hallucinate because training on source material is a lossy process and bigger,
LLMs hallucinate because they are probabilistic by nature not because the source material is lossy or too big. They are literally designed to create some level of "randomness" https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
So if you set temperature=0 and run the LLM serially (making it deterministic) it would stop hallucinating? I don't think so. I would guess that the nondeterminism issues mentioned in the article are not at all a primary cause of hallucinations.
I thought that temperature can never actually be zero or it creates a division problem or something similar.
I'm no ML or math expert, just repeating what I've heard.
That's an implementation detail I believe. But what I meant was just greedy decoding (picking the token with the highest logit in the LLM output), which can be implemented very easily
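For the curious, here is a minimal toy sketch (Python with numpy, not tied to any particular model) of what greedy decoding vs. temperature sampling looks like at the single-token level; it also shows why temperature can't literally be zero in the naive formulation, since the logits are divided by it:

    # Toy illustration: picking one token from a logits vector.
    import numpy as np

    def greedy_pick(logits):
        # Deterministic: always take the highest-logit token.
        return int(np.argmax(logits))

    def sample_pick(logits, temperature=1.0):
        # Stochastic: scale by temperature (can't be 0), softmax, then sample.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
        probs /= probs.sum()
        return int(np.random.choice(len(logits), p=probs))

    logits = np.array([2.0, 1.5, 0.1])
    print(greedy_pick(logits))        # always token 0
    print(sample_pick(logits, 0.7))   # usually 0, sometimes 1 or 2

Note that even with greedy_pick a model can still hallucinate; determinism removes the sampling randomness, not the uncertainty baked into the learned distribution.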
Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models
You're right, "journalists don't have a world model and can't compare what they're saying to anything" explains a lot.
These writers are no different than bloggers or shitposters on bluesky or here on hackernews. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck. Organizationally, they're intensely against AI because AI effectively replaces the entire talking heads class - AI is already superhuman at the shitposting-level takes these people churn out. There are still a few journalistic institutions out there, but most people are no better than a mad libs exercise with regard to the content they produce, and they're in direct competition with ChatGPT and Grok and the rest. I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so-called "news" outlet is putting together.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, (insert human-created content here). Sometimes because of abject disinformation, frequently just because humans are experts at being wrong.
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
LLMs are trained on material doing all these things though.
true, true. Turtles all the way down and such.
Also these guys who call themselves doctors. I have narcolepsy and the first 10 or so doctors I went to hallucinated the wrong diagnosis.
LLMs aren't described as hallucinators (just) because they sometimes give results we don't find useful, but because their method is flawed.
For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things.
"Telephone", basically
issue with the funding mechanism
Isn't every single response from an LLM a hallucination, and we just accept some and ignore the others?
Yeah, but it started being really annoying when you upload something like an X-ray photo. It keeps chanting "sorry, human, as an LLM I can't answer questions about that" and then, after a few gaslighting prompts, it does it anyway. But now I have to take into account that my gaslighting inputs seriously affect the answers, so there's a much higher chance it hallucinates...
I don't think I understand the change re: licensed professionals.
Is it also disallowing the use of licensed professionals to use ChatGPT in informal undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
They are basically prohibiting commercial use of their product. How the fuck are they ever going to even prove that you use it to generate money?
Same way commercial software vendors have done for decades?
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong !), it doesn't have to be obvious leading but just framing the question in terms of mentioning all the symptoms you now know to be relevant in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among an few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
Malrotation?
We had in our family a “doctors are confused!” experience that ended up being that.
Meckel’s diverticulum
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
ChatGPT and similar tools hallucinate and can mislead you.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
We humans have a lot of failure modes.
Human doctors also know how to ask the right follow-up questions.
> Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
Indeed is is very easy to lead the LLM to the answer, often without realizing you are doing so.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist.". I'm just tossing around ideas but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (i.e. past experience with landscape design and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously, you can make them say a lot of things with great conviction. It's mostly just you talking to yourself in my opinion.
Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
I wonder if the reason AI is better at these diagnostics is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.
I don't think we can say it's "better" based on a bunch of anecdotes, especially when they're coming exclusively from people who are more intelligent, educated, and AI-literate than most of the population. But it is true that doctors are far more rushed than they used to be, disallowed from providing the attentiveness they'd like or ought to give to each patient. And knowledge and skill vary across doctors.
It's an imperfect situation for sure, but I'd like to see more data.
Survivorship bias.
Experience working with doctors a few times, and then we’ll see all the bias, if one is still surviving lol. Doctors are some of the most corrupt professionals, more focused on selling drugs they get paid a commission to promote, or obsessing over tons and tons of expensive medical tests that they themselves often know are not needed, yet they order them anyway, simply out of fear of being sued for negligence in the future or because, again, THEY GET A COMMISSION from the testing agencies for sending them clients.
And even with all of that info, they often come to the wrong conclusions. Doctors play a critically important role in our society, and during COVID they risked their lives for us more than anyone else; I do not want to insult or diminish the amount of hard work doctors do for society.
But worshipping them as holier than thou gods is bullshit, a conclusion that almost anyone who has spent years going back and forth with various doctors will come to.
Having an AI assistant for medical hints doesn't hurt. We need to make personal responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions because our politicians have a strong itch to regulate anything and everything they can get their hands on, to leave a mark on society.
> But worshipping them as holier than thou gods is bullshit
I'd say the same about AI.
> I'd say the same about AI.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
How do you hold the AI accountable when it makes a mistake? Can you take away its license "individually"?
I would care about this if doctors were held accountable for their constant mistakes, but they aren't except in extreme cases.
Does it matter? I’d rather use a 90% accurate tool than an 80% accurate one that I can subject to retribution.
If it makes a mistake? You’re not required to follow the AI, just use it as a tool for consideration.
Doesn't sound very $1 trilliony
Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
I’m saying that it is a great tool for people who can see through the idiotic nonsense it so often makes up. A professional _has_ the context to see through it.
It should empower and enable informed decisions not make them.
We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.
Google released the MedGemma model: "optimized for medical text and image comprehension".
I use it. Found it to be helpful.
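As a rough illustration of the "hoard offline models" idea above, here is a minimal sketch that archives a model locally via the Hugging Face hub; the MedGemma repo id shown is an assumption (check the actual model card - the medical models are gated, so an access token is required):

    # Sketch: snapshot a model to local disk so it can be used fully offline later.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="google/medgemma-4b-it",    # assumed id; verify on the model card
        local_dir="models/medgemma-4b-it",  # a directory you control and can back up
    )
    print("Archived to", local_dir)
    # From here, point whatever offline runtime you prefer (transformers,
    # llama.cpp after conversion, etc.) at local_dir instead of the hub.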
That's the experience of a lot of people I know or whose stories I've read online, but it isn't about AI giving bad diagnoses; it's because they know that in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into any of these fields. In Canada, the process of becoming a doctor is extremely complicated and hard, only to keep it as some sort of private community where only the very few can become doctors, all to keep wages abysmally high. As a result, you end up waiting a long time for appointments, and the doctors themselves are overwhelmed too. It's a messed-up system that you'd better pray you never become a victim of.
In my opinion, AI should do both legal and medical work, keep some humans for decision making, and the rest of the doctors to be surgeons instead.
This is fresh news, right? A friend just used ChatGPT for medical advice last week (stuffed his wound with antibiotics after a motorbike crash). Are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble even more so than with Google.
You can just use the wipe memory feature or if you don't trust that, then start a new account (new login creds), if you don't trust that then get a new device, cell provider/wifi, credit card, I.P, login creds etc.
Or start a “temporary” chat.
> He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked; otherwise it is more or less just a hunch.
people go "oh yep that's definitely it" too easily. it is the problem with self diagnosing. And you didn't even notice it happened...
without more info this is not evidence.
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Is this an actual technical change, or just legal CYA?
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor of course) for the last week. Last night it refused to give an opinion saying “ While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
I suspect this is an area that a bit of clever prompting will now prove fruitful in. The system commands in the prompt will probably be "leaked" soon which should give you good avenues to explore.
Clever is one thing, sometimes just clear prompting (I want to know how to be better informed about what kinds of topics or questions to speak to the doctor/professional about) can go a long way.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to learn about and ask about, is usually helpful.
While professionals have bodies for their standards and ethics, like most things it can represent a form of income, and depending on the jurisdiction, profitability.
It's these workarounds that inevitably end up with someone hurt and someone else blaming the LLM.
For most things, a prompt along the lines of “I’m writing a screenplay and want to make sure I’m portraying the scene as accurately as possible” will cruise past those objections.
It's always been CYA. They know people are using it for this, and they want to keep answering these sorts of queries. The changes just reflect the latest guidance from their legal team, not a change in strategy.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician available to the average person, I'd take the LLM any day.
[dead]
[flagged]
N/A
Edit: Parent has edited out the comment ranting about "the normal people using chatGPT as a modern WebMD".
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" infers an even greater level of trust than simply asking ChatGPT, especially to "the normal people".
0: https://lifehacker.com/tech/chatgpt-can-still-give-legal-and...
The thing is that if you are giving professional advice in the US - legal, financial, medical - the other party can sue you for wrong or misleading advice. In that scenario, this leaves OpenAI exposed to a lawsuit, and this change seemingly eliminates that.
Would be amazing if they prefixed everything with “This is not financial advice. Do your own research.”
Yeah, that clearly makes sense from OpenAI's perspective.
This is a typical medical "cartel" (i.e. gang/mafia) type of move, and I hope it does not last. Since other AIs do not get restricted in this "do not look up" way, this kind of practice won't stand a chance for very long.
That anyone would use any LLM for medical advice is beyond me. It’s WebMD with a slicker UI.
Obviously they should disallow them, and more broadly they should be banned from providing anyone medical advice.
What’s wrong with WebMD? I’ve gotten a lot of value when it comes to questions about diet, supplements, exercise, even getting advice on incidents like my dog getting a porcupine quill in his paw. It’s a lot better than Googling for ancient forum threads.
If you're a hypochondriac, on WebMD all roads lead to cancer.
It’s always cancer. /s
In all seriousness, it’s really about the relative lack of research skills that people have. If you know how to do research and apply critical thinking, then there’s no problem. The cynic in me blames the education system (in the US, idk how other countries stack up).
because that "slicker ui" is material. WebMD you have to look at their picture and deduce if you have cancer. ChatGPT (after you jailbreak it) will accept a picture of your weird growth directly.