AI models hallucinate, and doctors are OK with that

Eggheads call for comprehensive rules to govern machine learning in medical settings
The tendency of AI models to hallucinate – aka confidently making stuff up – isn't sufficient to disqualify them from use in healthcare settings. So, researchers have set out to enumerate the risks and formulate a plan to do no harm while still allowing medical professionals to consult with unreliable software assistants.