  • Letter to the Editor
  • Open access

Development of search strategies for systematic reviews in health using ChatGPT: a critical analysis

Dear editor

To guide clinical decision-making, systematic reviews need to present transparent, reproducible, and standardized methods for identifying, synthesizing, and describing all scientific literature relevant to a previously developed central question. After the research question is structured, the strategies for searching for information are elaborated in sequence. However, one of the biggest challenges in designing systematic reviews and meta-analyses, given the growing body of scientific literature, is how to search assertively for all relevant scientific information.

ChatGPT is a chatbot, developed from a free artificial intelligence (AI) model, that uses the Generative Pre-trained Transformer language model and is capable of translating formal and extremely technical information into clear, simple text within minutes [1]. This AI has been trained on an enormous volume of data, texts, newspapers, and scientific articles, and has been evaluated for validity and usefulness in scientific research [2,3,4]. Wang and collaborators [5] tested ChatGPT's ability to write Boolean queries in 2023, but its structuring of a complete search strategy across different databases has not yet been critically evaluated. In this letter, we present a critical evaluation of ChatGPT's ability to decode a core question into searches of the entire literature in three databases used around the world to guide researchers and methodologists.

To perform the comparative analysis of search strategies, we used the record available on the PROSPERO platform (#CRD42023391396), aiming to answer the central question: “When does weight regain occur in obese individuals after bariatric surgery?” The PICOT components of the question were: population—obese individuals older than 18 years; intervention—bariatric surgery; comparator—diet, drugs, or placebo; outcome—time to weight regain; and study type—trials.

After structuring the central question, we asked ChatGPT to create the search strategy for the MEDLINE database (Additional file 1: Fig S1) and, after this guidance, we requested the development of search strategies reflecting the central question adapted for two other databases widely used in systematic reviews, LILACS and Embase, with the specific inclusion of the corresponding controlled vocabularies (MeSH, DeCS, and Emtree) (Additional file 1: Fig S2). We also present the manual search performed by a methodologist and validated by a librarian (Additional file 1: Fig S3).

Despite the quick response, we observed that the search strategies created by ChatGPT do not include synonymous terms (Entry Terms) or the jargon used in researchers' clinical practice. As for structuring, the search strategies created by ChatGPT do not correctly organize the groups of terms within the same search key. For example, obese people should not be listed as an alternative term for weight regain. We also observed that ChatGPT inserted interventions beyond the objective structured by the central question; that is, surgeries other than bariatric surgery are not objects of the central question. The insertion of a search date limit was another important point observed, since a limit that is not justified should not be inserted. Finally, we observed that ChatGPT did not insert a validated filter to limit results to randomized clinical trials. The problems evaluated and the guidelines to work around them are available in Table 1.
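The grouping problem described above can be illustrated with a minimal sketch, not taken from the authors' strategy: in a well-formed MEDLINE-style query, a controlled-vocabulary term (MeSH) and its Entry Terms/synonyms are combined with OR inside one search key per concept, and the concept keys are then combined with AND. All terms below are hypothetical examples chosen for illustration.

```python
def build_query(concept_groups):
    """Combine each concept group's synonyms with OR, then join the groups with AND."""
    keys = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    return " AND ".join(keys)

# Hypothetical concept groups for the central question
population = ['"Obesity"[Mesh]', "obese", "obesity"]
intervention = ['"Bariatric Surgery"[Mesh]', "gastric bypass", "sleeve gastrectomy"]
outcome = ["weight regain", "weight recidivism"]

query = build_query([population, intervention, outcome])
print(query)
```

Mixing terms from different concepts inside the same parenthesized key (e.g., placing "obese" in the weight-regain group) would broaden the OR group and retrieve records that match the wrong concept, which is the structuring error observed.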

Table 1 Problems and guidance about the methodological errors observed in the search by ChatGPT

In conclusion, we recommend caution when building information search strategies using ChatGPT exclusively. Although it is a simple tool that responds readily, content and structuring problems were found, and searchers should be aware of them.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files. This study utilized data available on public websites and electronic databases. The Embase platform was accessed through the Brazilian government (CAPES website).



Abbreviations

AI: Artificial intelligence

DeCS: Descritores em Ciências da Saúde

Emtree: Elsevier's life science thesaurus

LILACS: Literatura Latino-Americana e do Caribe em Ciências da Saúde

MEDLINE: Medical Literature Analysis and Retrieval System Online

MeSH: Medical Subject Headings


  1. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379:313.


  2. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27:75.


  3. Najafali D, Camacho JM, Reiche E, Galbraith L, Morrison SD, Dorafshar AH. Truth or lies? The pitfalls and limitations of ChatGPT in systematic review creation. Aesthet Surg J. 2023.


  4. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare. 2023;11:887.


  5. Wang S, Scells H, Zuccon G, Koopman B. Can ChatGPT write a good Boolean query for systematic review literature search? arXiv:2302.03495v3. 2023.




Acknowledgements

The authors thank the research group Observatory of Epidemiology, Nutrition and Health Research (OPENS).

Funding

Not applicable.

Author information

Authors and Affiliations




Contributions

NSG wrote the primary draft and analyzed the data. All authors contributed to the interpretation and reproduction of the data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nathalia Sernizon Guimarães.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests. These analyses rely on aggregated, non-identifiable data and were therefore deemed exempt from human subjects review.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Fig S1. General guidance on building search strategies. Fig S2. Specific orientation for the construction of search strategies in electronic databases: MEDLINE, Embase, and LILACS. Fig S3. Manual search strategies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Guimarães, N.S., Joviano-Santos, J.V., Reis, M.G. et al. Development of search strategies for systematic reviews in health using ChatGPT: a critical analysis. J Transl Med 22, 1 (2024).
