• Womble@lemmy.world · 3 points · edited · 1 day ago

      He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.

      Oh no, the dystopian horror…

    • ZILtoid1991@lemmy.world · 2 points · 7 hours ago

        LLMs also make a lot of mistakes even when used for text analysis, and since the tech sector loves the “move fast and break things” mantra, it’ll be put into practice much earlier than it should be.

  • Eggyhead@fedia.io · 10 points · 2 days ago

    If it’s trained carefully, professionally, responsibly, with bona fide medical research data exclusively, I can see it being a boon to healthcare professionals. I just don’t know if I can trust that will happen in the timeline we live in.

    • gndagreborn@lemmy.world · 2 points · 21 hours ago

      OpenEvidence is a legit tool my colleagues and classmates use every day. OpenAI is leagues behind them, especially in terms of HIPAA compliance.

  • fullsquare@awful.systems · 7 points · 2 days ago

    damn i see that chatbots don’t want to stay behind rfk jr in body count

    will they learn that safety regulations are written in blood? who am i kidding, that’s not their blood

  • NarrativeBear@lemmy.world · 1 point · edited · 2 days ago

    Most PCs no longer have floppy disk readers or CD drives, so where are they going to put the placebo or drugs in? /s