Study Finds ChatGPT on Par with Humans in Providing Healthcare-Related Answers

A recent study by researchers at New York University suggests that people have difficulty distinguishing responses generated by OpenAI's chatbot, ChatGPT, from those written by human healthcare providers.

The study aimed to explore the potential of chatbots in assisting patient-provider communication and evaluate the trust and reliability placed in their responses.

The Study's Findings

The study presented participants with a series of patient questions and responses, half of which were generated by ChatGPT and the other half by human providers. The results revealed the following key findings:

Difficulty in Differentiating Responses

On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, indicating that they struggled to tell the two sources apart. Accuracy also varied across questions, suggesting that certain topics or levels of complexity made differentiation harder.

Trust in Chatbot Responses

Participants generally expressed mild trust in chatbot responses, with an average score of 3.4 on a 5-point scale. Trust was lower for tasks involving greater health-related complexity, such as diagnostic and treatment advice, while logistical questions and questions about preventative care received higher trust ratings.

Implications and Recommendations

The findings of the study have important implications for the use of chatbots in healthcare communications:

Potential for Administrative Tasks and Chronic Disease Management

Chatbots have the potential to assist in patient-provider communication, particularly for administrative tasks and the management of common chronic diseases. These areas typically involve straightforward information and can benefit from the efficiency and accessibility of chatbot interactions.

Caution in Clinical Roles

However, the study highlights the need for caution and critical judgment when relying on chatbot-generated advice for more complex clinical tasks. Diagnostic and treatment advice, in particular, should be approached with care, as chatbots may not possess the same level of expertise and nuanced judgment as human healthcare providers.

Limitations and Further Research

The study also emphasizes the limitations of current chatbot technology and the potential biases of AI models. Further research is needed to refine chatbot capabilities and ensure their reliability, accuracy, and suitability for different healthcare tasks, and ongoing evaluation and validation should be used to strengthen their performance and trustworthiness in healthcare settings.
