Abstract
Generative Artificial Intelligence (AI) models, such as Large Language Models (LLMs), are opening new avenues in the healthcare sector. These technologies can facilitate clinical documentation, enhance patient education, improve professional training, and streamline the review of scientific literature. However, they may occasionally produce "hallucinations," i.e., responses not grounded in evidence and potentially misleading. In this context, prompt engineering—the careful and strategic design of instructions provided to these models—plays a critical role in guiding outputs, reducing errors, and ensuring greater reliability. This article examines the importance of prompt engineering in healthcare, describing the iterative process, fundamental principles, and various prompting strategies (zero-shot, few-shot, chain-of-thought, self-consistency). It further outlines practical applications, addresses ethical challenges, and discusses future perspectives on integrating this approach into healthcare practice and research.
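The zero-shot and few-shot strategies named above can be sketched as simple prompt templates. This is a minimal illustration, not the article's method; the clinical question and example Q/A pairs are hypothetical placeholders.

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction.
# All questions and answers below are illustrative placeholders,
# not content drawn from the article.

def zero_shot(question: str) -> str:
    """Zero-shot: a bare instruction with no worked examples."""
    return (
        "Answer concisely and cite evidence where possible.\n"
        f"Q: {question}\nA:"
    )

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend worked Q/A pairs to guide the output format."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

examples = [
    ("What is a normal adult resting heart rate?",
     "Roughly 60-100 beats per minute."),
]
print(few_shot("What is a normal adult body temperature?", examples))
```

Chain-of-thought prompting extends the same idea by including intermediate reasoning steps in the worked examples, and self-consistency samples several such reasoning paths and keeps the majority answer.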
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright (c) 2025 Antonio Alemanno, Michele Carmone, Leonardo Priore