  • Letter to the Editor
  • Open access

Evaluate the accuracy of ChatGPT’s responses to diabetes questions and misconceptions

Letter to the editor:


Epidemiological data reveal a fourfold increase in the global number of people with diabetes over the past three decades, with projections indicating that the total will reach 642 million by 2040 [1]. Patients and their families increasingly rely on the internet for diabetes-related information. Over the last decade, artificial intelligence has advanced significantly, and ChatGPT has emerged as a prominent AI model. Developed by OpenAI and launched on November 30, 2022, ChatGPT belongs to the Generative Pre-trained Transformer (GPT) model family and uses deep learning techniques to generate human-like responses to natural language inputs [2]. The primary challenge it faces lies in ensuring the precision and dependability of its output, especially when providing medical advice and information.

Methods

In this study, we used web searches to identify prevalent questions and misconceptions about diabetes. Frequently encountered questions were selected and presented to ChatGPT (the GPT-3.5-turbo model), and the generated responses were recorded (Fig. 1A, B). The quality of these responses was independently evaluated by experienced professionals in the field of endocrinology. Evaluation scores ranged from 0 to 10: a score of 10 indicated a highly accurate response, 8 ≤ score < 10 a fairly accurate response, 6 ≤ score < 8 average accuracy, and scores below 6 an inaccurate response. From July 1 to July 3, 2023, five experts assessed the answers and analyzed their limitations.
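For illustration, the scoring rubric above can be sketched as a simple categorization function (this is our own sketch, not code used in the study; the category labels are paraphrased from the rubric):

```python
def rate(score: float) -> str:
    """Map a 0-10 expert score to the rubric category described above."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score == 10:
        return "highly accurate"
    if score >= 8:
        return "fairly accurate"
    if score >= 6:
        return "average accuracy"
    return "inaccurate"
```

For example, the mean rating of 9.5 reported below falls in the "fairly accurate" band under this rubric.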

Fig. 1

Diabetes knowledge misconceptions and ChatGPT’s answers. A Q1–Q6 and ChatGPT’s answers. B Q7–Q12 and ChatGPT’s answers

Results

Each response provided by ChatGPT contained 157 ± 29 words on average, with a mean Flesch-Kincaid Grade Level of 13.8 ± 1.1. Of the 12 answers evaluated, 3 received a rating of 10, indicating a high level of accuracy. The remaining 9 answers had a mean ± standard deviation rating of 9.5 ± 0.2, indicating a consistently high level of accuracy (Fig. 2). To examine the effect of repeating a question on the output, we ran each of the 12 questions five times. The results revealed slight variations in sentence structure, but the substance of the answers remained consistent.
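The grade level reported above follows the standard Flesch-Kincaid formula, which can be computed from raw word, sentence, and syllable counts (a sketch for readers unfamiliar with the metric; the letter does not state which tool the authors used):

```python
def fk_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from raw text counts.

    Standard formula: 0.39 * (words per sentence)
                    + 11.8 * (syllables per word) - 15.59
    """
    if words <= 0 or sentences <= 0:
        raise ValueError("words and sentences must be positive")
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

A score of 13.8 corresponds roughly to college-level reading, which is notable given that patient-facing health materials are often targeted at much lower grade levels.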

Fig. 2

Evaluation scores for ChatGPT’s answers, with expert suggestions

The findings indicate that ChatGPT generally provides reasonably accurate information about diabetes misconceptions. However, the experts identified instances in which ChatGPT's responses lacked completeness and precision. For example, in response to question 3, "Does consuming sugar substitutes affect blood sugar levels in diabetic patients?", ChatGPT's statement that sugar substitutes do not raise blood sugar is not sufficiently accurate: research by Mathur et al. [3] suggests that their use can increase insulin resistance. Similarly, for question 4, "Can diabetes be ruled out if fasting blood sugar is normal?", ChatGPT's statement that the normal range for fasting blood sugar is 70–100 mg per deciliter (mg/dL) is incorrect; the current standard has been adjusted to 79–110 mg/dL. Finally, in response to question 12, "Can diabetes be cured?", ChatGPT's answer that it cannot be cured is not entirely accurate, as research indicates that surgical treatment can lead to remission of diabetes in obese patients [4].

ChatGPT’s answers may lack completeness and precision because the model relies on a fixed corpus of existing text and has no real-time updating capability. Since ChatGPT is not connected to the internet and its knowledge is limited, it may generate inaccurate or biased content. However, users can provide feedback via the "Not Satisfied" button, enabling ChatGPT to learn and improve its responses.

Limitations

This study has several limitations. First, we did not evaluate patients’ own ratings of ChatGPT’s answers, and their assessments, including of empathy, may differ from those of healthcare professionals. Second, ChatGPT does not provide citations, preventing users from verifying the accuracy of the information or exploring the topic further. Third, although we assessed a range of questions, the selected questions may not cover all diabetes-related issues comprehensively.

Conclusion

Our study suggests that ChatGPT has the potential to offer valuable and accurate health information about diabetes. Nonetheless, further investigation is needed to determine whether artificial intelligence can provide diabetes-related information with consistent accuracy. It is also crucial to establish supervisory mechanisms to evaluate the quality of information delivered by chatbots and other AI systems, and real-time updating of health information is essential to meet the needs of individuals with diabetes.

Availability of data and materials

Not applicable.

References

  1. Zheng Y, Ley SH, Hu FB. Global aetiology and epidemiology of type 2 diabetes mellitus and its complications. Nat Rev Endocrinol. 2018;14(2):88–98.


  2. Radford A, Narasimhan K. Improving language understanding by generative pre-training. 2018.

  3. Mathur K, et al. Effect of artificial sweeteners on insulin resistance among type-2 diabetes mellitus patients. J Family Med Prim Care. 2020;9(1):69–71.


  4. Zhou X, Zeng C. Diabetes remission of bariatric surgery and nonsurgical treatments in type 2 diabetes patients who failure to meet the criteria for surgery: a systematic review and meta-analysis. BMC Endocr Disord. 2023;23(1):46.



Acknowledgements

Not applicable.

Funding

This study was supported by the startup fund for scientific research, Fujian Medical University (2022QH1126) and Fujian Provincial Young and Middle-aged Teachers’ Education Research Project (JAT220109).

Author information

Authors and Affiliations

Authors

Contributions

HH designed the study. CH and LC contributed to the acquisition and analysis of data, as well as the drafting of the manuscript. QC, RL, XW, YZ, and ZJ contributed to the statistical analysis and interpretation of the data.

Corresponding author

Correspondence to Huibin Huang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Huang, C., Chen, L., Huang, H. et al. Evaluate the accuracy of ChatGPT’s responses to diabetes questions and misconceptions. J Transl Med 21, 502 (2023). https://doi.org/10.1186/s12967-023-04354-6
