AI chatbots not effective at providing medicine information to patients, study reveals


AI chatbots: A study published on Friday issued a strong warning about AI chatbots, stating that patients should not rely on Artificial Intelligence (AI) chatbots for medication information. AI-powered search engines and chatbots cannot always provide accurate and safe information about medicines. Researchers from Belgium and Germany conducted the study after discovering that many of the answers were incorrect or potentially harmful.

In a research paper published in the journal BMJ Quality and Safety, the researchers said that the answers given by AI chatbots can be difficult to understand, sometimes requiring degree-level education to comprehend. In 2023, search engines changed significantly with the introduction of AI-powered chatbots, which promised better search results, more detailed answers, and a new kind of interactive experience.

What the experts said

The team from Germany’s Friedrich-Alexander-Universität Erlangen-Nürnberg said that chatbots are trained on extensive datasets drawn from the internet and can answer any health-related question, but their information can also be badly wrong and harmful.

In this cross-sectional study, the researchers examined whether search engines with AI-powered chatbots are capable of giving complete and accurate answers to patients’ questions. They asked a chatbot (Bing Copilot) about the 50 most frequently prescribed drugs in the United States, and then assessed how easy to understand, complete, and accurate the chatbot’s answers were.

Only half of the ten questions were answered with the highest completeness. In addition, 26 percent of the chatbot’s replies did not match the reference data, and in more than 3 percent of cases the answers were completely inconsistent with it. About 42 percent of these chatbot answers were considered likely to cause moderate or mild harm, and 22 percent risked serious harm. The team found that a major drawback is the chatbot’s inability to understand the intent behind a patient’s question.

The researchers said, “Despite their capabilities, it remains important that patients consult their health professionals. Chatbots cannot always provide error-free information.”




