I never said it can't be useful, just that it isn't very useful right now, and it certainly isn't going to replace doctors any time soon. As I said in another comment, I think AI will eventually be a tool that could be used to help doctors.
TooMuchDog
I mean, sure. I know people who have used ChatGPT to write their discharges. It'll definitely be tried as a crutch by the lazy in the short term, but I think it'll end up being used as an actual tool in the long term (not just in medicine, but in a wide variety of fields). However, I also think that's an entirely different discussion than the one this article presents. The conversation about how AI can be used as a tool to assist existing and future professionals is an entirely different conversation than whether or not AI is going to replace any given profession. I also think it's a wildly more productive conversation, because I don't believe there are many professions that can be completely phased out by AI.
I also think that the point you raised about codes is another entirely different discussion that could be had about the pitfalls of modern-day medicine. I'm actually going to argue hard in favor of the doctors who told you to "choose whatever code is most appropriate," because in my experience and opinion, knowing specific billing codes is wildly outside the scope of knowledge needed and expected of a doctor. Their job should be first and foremost to treat their patients. Navigating the unnecessarily complicated, red-tape-filled maze that is billing and insurance codes is not only an unrelated skill set, but also a necessity born of a flawed and predatory system built by those who seek to profit at the cost of healthcare (i.e. insurance companies) rather than those who seek to make a living by providing healthcare.
This is a big flashy headline that isn't as big of a deal as it presents itself. AI is still extremely far from assisting doctors, let alone replacing them.
"Diagnosing a 1 in 100,000 condition in seconds" is an absolutely meaningless statement.
What was the condition? Does it present with vague and difficult-to-assess symptoms, does it have a pathognomonic clinical sign that identifies it immediately, or is it somewhere in between? Did the AI diagnose it correctly, and if so, was it on the first try? Is it repeatable, could it diagnose it again? How prone is it to false positives, can we be sure it wouldn't diagnose a healthy patient or a patient with a similarly presenting problem? What about false negatives? It caught it this time, but do we know how many times it missed it? What about a treatment plan? Does it know how best to treat it, and can it personalize a treatment to fit that patient specifically, with any comorbidities or conflicting medications taken into account? When planning treatments, does it stick strictly to the drug label or does it factor in published research on dosing?
That's interesting. Adderall is the only medication that helps with my ADHD. It has its side effects that I'm not always a fan of, but I've titrated down to a low enough dose that those don't really bother me anymore. Vyvanse, on the other hand, was awful; I hated being on that.
But no one in the study actually received Adderall? If I'm understanding this right, they cannot claim caffeine is as effective as Adderall. All their study shows is that caffeine is as effective as, or more effective than, an Adderall placebo.
That's a weird way to say "I don't understand the difficulties that people with ADD/ADHD face and how those difficulties still exist during unproductive time."
Volkor X is one of my favorites.
I'm trying to like it, but it's hard. It doesn't quite scratch the doom-scrolling itch like Reddit did. I'm using Jerboa, and it's missing a lot of features that I relied on heavily with Relay. Ultimately I'm just going to have to adapt, though, because it looks like Reddit isn't backing down and I'm not going to use the official app.
In good news, I always hated my Reddit username so it's nice to finally get to change it lol.
Yeah, I think we overall are on the same page in regards to the role AI is going to play in our futures and the consequences that could come with the greed of bad actors. (Though I have to say I really hate the word "normie". I feel the use of it instantly weakens an argument because it's so associated with the stereotype of a basement dwelling know-it-all.)
I am going to stand my ground somewhat on the point of medical codes, not as an attempt to be adversarial, but because I'm enjoying the conversation.
I admittedly don't know much about how it works in the human medical world because I'm in veterinary medicine. In my experience, though, there isn't a difference between billing codes and test order codes from a clinician's perspective. I order a test, and to do so I have to put in a code that tells the software we use both what the test is and how much it costs; it then both applies the charge to the bill and sends a request to clin path, which is why I just referred to them as billing codes. With our software (and all others that I've used, for that matter) there are an unreasonable number of different test-ordering codes that can differ only minimally, and they usually aren't named clearly. I'm pretty sure this is because the people organizing and naming the tests are not clinicians, and possibly aren't even medically trained, as it's more of an IT responsibility.
For example, if I'm concerned about the function of a patient's liver and kidneys, then I want a test that will tell me what their AST, ALP, GGT, Albumin, Cholesterol, Glucose, BUN, Creatinine, and SDMA are, or at least some relevant combination of those plus some others. The problem is that I don't order a panel with a drop-down list of what values I want. Instead I have to choose from a Chemistry, Chem 6, Chem 8, Chem 10, Chem 12, Senior Panel, Adult Wellness Panel, Profile, Mini Profile, Full Profile, NOVA, NOVA lytes, etc. All of those have their own codes and their own names, and the same tests can differ based on whether I'm ordering in house or from any of multiple external labs. I know exactly what values I want to see, but juggling the various nondescript names of the dozens or more possible test options is a nightmare, and that's just when dealing with lab work that I run routinely. When it comes to codes that I very rarely use, or have never had to order before, the chances I get it wrong are much higher. The worst part is that many of the available options overlap significantly, and sometimes I can get the same diagnostic value out of several of the options, but for some reason one option costs $50 to run while another costs $300 and the rest fall somewhere in between.
Bottom line: knowing what I want and knowing how to ask for what I want are often very unrelated.