ChatGPT gives accurate advice to breast cancer patients 88% of the time

AN AI chatbot offers correct breast cancer advice 88 per cent of the time, a study shows.

ChatGPT is an online chatbot that responds to questions like a human and has already been used by more than 100 million people since its launch last November.



ChatGPT gave correct answers to 88 per cent of breast cancer questions in a study by University of Maryland researchers

The bot gave “pretty amazing” answers to questions on breast screening, US researchers found.

Dr Paul Yi, of the University of Maryland, said: “It also has the added benefit of summarising information into an easily digestible form for consumers to easily understand.”

But the bot did give some out-of-date responses, including incorrectly advising women to delay a mammogram for up to six weeks after having a Covid vaccine.

Dr Yi said: “Somewhere close to 90 per cent of responses were actually appropriate, which is a good thing.

“But that 10 per cent that were not appropriate, either ChatGPT gave inconsistent responses or the responses were just flat-out wrong.”

Around 55,000 women and 370 men are diagnosed with breast cancer every year in the UK.

Breast screening mammograms are offered to women every three years between the ages of 50 and 71.

ChatGPT was launched by OpenAI, a lab set up by tech high-fliers including Elon Musk, and uses one of the most sophisticated language models ever developed.

It has already been adopted by a range of companies to automate emails, and even by some news sites to write content.

OpenAI upgraded it last month so that select users can browse the internet with it; it had previously relied on data from before September 2021.

The research, published in Radiology, looked at how effective the bot was at offering advice on breast cancer risk and screening.

Researchers asked it 25 questions, with each repeated three times to see how its responses varied.

In one example, researchers asked what their risk of the disease was.

The bot said it was unable to tell them their personal risk because it did not have access to their medical records.

But it gave answers that “check out from a medical standpoint”, Dr Yi said, including advising on generic risk factors like a family history of the disease.

In total, responses were correct for 22 out of 25 questions.

Alongside the out-of-date answer, two other questions drew inconsistent responses that varied significantly each time the same query was posed.

Dr Yi said: “I think these pitfalls indicate that obviously ChatGPT has got a lot of potential but it’s not ready for prime time.

“I think there’s a lot of work that needs to be done that will require collaboration between computer scientists as well as with doctors.”

He said “if we’re going to roll this out and let patients use it, we need to make sure there are these guard rails”.