Definitions of Terms
Below are key terms and concepts in generative AI that are relevant to health professions education:
- AI literacy: The knowledge and skills required to use and critically evaluate AI tools effectively and responsibly; a core competency for future healthcare professionals.
- Algorithm: A set of step-by-step instructions that tells a computer how to solve a problem or complete a task.
- Application programming interface (API): A set of rules and tools that allows different software applications to communicate with each other.
- Artificial general intelligence (AGI): A hypothetical form of AI that would match or exceed human-level cognition across virtually all domains.
- Artificial intelligence (AI): Computer systems designed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
- Artificial super intelligence (ASI): A theoretical form of AI that would surpass human intelligence across all domains and continue to self-improve.
- Augmented intelligence: AI systems designed to enhance rather than replace human intelligence and decision-making, particularly relevant in healthcare settings.
- Bard: A large language model developed by Google that has since been rebranded as Gemini.
- Bias: Systematic errors in AI outputs that reflect prejudices present in training data or model design. In healthcare, bias can lead to inequitable care recommendations.
- Chatbot: A computer program designed to simulate conversation with human users, especially over the internet.
- ChatGPT: A conversational interface developed by OpenAI that uses GPT models to engage in dialogue with users.
- Claude: An AI assistant developed by Anthropic, designed with an emphasis on safe, controlled responses.
- DALL-E: An AI system developed by OpenAI that creates images from text descriptions.
- Fine-tuning: The process of further training a pre-trained AI model on a specific dataset to adapt it for particular applications, such as medical knowledge.
- Foundation model: A large-scale AI model trained on broad data that can be adapted to many different tasks through fine-tuning or prompting.
- Gemini: Google's current large language model family and the successor to Bard.
- Generative adversarial network (GAN): A type of AI system in which two neural networks compete to generate realistic content, often used in medical image synthesis.
- Generative artificial intelligence (gAI): AI systems that can generate new content, including text, images, audio, and code, based on patterns learned from training data.
- Generative pre-trained transformer (GPT): A type of large language model architecture developed by OpenAI that learns to predict and generate text through pre-training on vast datasets.
- Hallucination: When an AI model generates information that appears plausible but is factually incorrect or made up. This is particularly concerning in healthcare contexts, where accuracy is vital.
- Large language model (LLM): A neural network-based system trained on vast text datasets to predict and generate human-like text. Examples include GPT-4, Claude, and Llama.
- LLaMA: Meta's openly released large language model series, designed to facilitate AI research and development.
- Machine learning (ML): A subset of AI in which computers learn from data and improve their performance without being explicitly programmed.
- Med-PaLM: Google's medical-focused version of its PaLM language model, adapted specifically for answering medical questions.
- Midjourney: An AI system that generates images from text descriptions, known for its artistic and creative outputs.
- Multimodality: The ability of AI systems to work with multiple types of input and output data (text, images, audio, etc.), a capability of increasing importance for clinical applications.
- Natural language processing (NLP): Technology that helps computers understand, interpret, and respond to human language in a useful way.
- OpenChat: An open-source chatbot platform designed to compete with proprietary solutions.
- PaLM: Google's Pathways Language Model, a large language model that preceded Gemini.
- Perplexity: A metric used to evaluate language models by measuring how well they predict text; lower perplexity indicates better prediction.
- Prompt: The text input provided to a language model to elicit a response. In healthcare education, crafting effective prompts is key to obtaining useful outputs.
- Retrieval-augmented generation (RAG): A technique that enhances language model outputs by incorporating information from external knowledge sources, improving accuracy for domain-specific tasks.
- Stable Diffusion: An open-source AI system for generating images from text descriptions.
- Strong AI: Another term for artificial general intelligence (AGI), referring to AI systems that can match human-level cognition.
- Supervised machine learning: A type of machine learning in which the algorithm learns from labeled training data.
- Unsupervised machine learning: A type of machine learning in which the algorithm finds patterns in unlabeled data.
- Vicuna: An open-source large language model fine-tuned on conversational data.
- Weak AI: Also called narrow AI; refers to AI systems designed for specific tasks without general human-like intelligence.
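To make the perplexity entry above concrete: perplexity is the exponential of the average negative log-probability a model assigns to the tokens it observes. The sketch below uses a hypothetical hand-picked list of per-token probabilities rather than a real language model, purely to illustrate the calculation:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability
    the model assigned to each observed token."""
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# Hypothetical probabilities a model assigned to four tokens in a sentence.
confident = [0.9, 0.8, 0.95, 0.85]   # text predicted well -> low perplexity
uncertain = [0.2, 0.1, 0.25, 0.15]   # text predicted poorly -> high perplexity

print(perplexity(confident))
print(perplexity(uncertain))
```

A model that assigns probability 1.0 to every token would achieve the minimum perplexity of 1; the worse the predictions, the higher the score.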
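The contrast between the supervised and unsupervised machine learning entries above can be sketched with toy code. This minimal illustration uses hypothetical patient ages: the supervised function learns a risk threshold from labeled examples, while the unsupervised function must group the same data without any labels:

```python
def supervised_threshold(ages, labels):
    """Supervised: learn a decision boundary from labeled data (1 = 'high risk')."""
    high = [a for a, y in zip(ages, labels) if y == 1]
    low = [a for a, y in zip(ages, labels) if y == 0]
    # Place the boundary midway between the two class means.
    return (sum(high) / len(high) + sum(low) / len(low)) / 2

def unsupervised_two_groups(ages):
    """Unsupervised: split unlabeled data into two groups around the overall mean."""
    mean = sum(ages) / len(ages)
    return [1 if a > mean else 0 for a in ages]

ages = [25, 30, 35, 70, 75, 80]       # hypothetical data
labels = [0, 0, 0, 1, 1, 1]           # labels exist only in the supervised case

print(supervised_threshold(ages, labels))   # boundary learned from labels
print(unsupervised_two_groups(ages))        # structure found without labels
```

Real systems use far more sophisticated algorithms, but the distinction is the same: supervised methods need labeled examples, while unsupervised methods discover structure on their own.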