Context Window
The maximum amount of text an LLM can process in a single interaction - inputs plus outputs combined.
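As a rough illustration, an application can check whether a prompt plus the reserved output budget fits inside a model's window. This is a minimal sketch: real deployments count tokens with the model's own tokenizer (e.g. tiktoken for OpenAI models), while whitespace splitting here is only a crude estimate; the function name and numbers are illustrative.

```python
def fits_in_context(prompt: str, max_output_tokens: int, context_window: int) -> bool:
    """Rough check that the prompt plus reserved output fits in the window.

    Whitespace splitting is only a crude token estimate; production code
    should use the model's actual tokenizer.
    """
    estimated_prompt_tokens = len(prompt.split())
    return estimated_prompt_tokens + max_output_tokens <= context_window

# Reserve 500 output tokens against a hypothetical 8,192-token window.
print(fits_in_context("Summarize this report in three bullets.", 500, 8192))
```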
Embedding
A numerical vector that represents the meaning of a piece of text, enabling AI systems to compare and retrieve semantically similar content.
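Similarity between embeddings is typically measured with cosine similarity. The sketch below uses toy 3-dimensional vectors; real embedding models produce vectors with hundreds or thousands of dimensions, and the specific values here are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "cat" and "kitten" point roughly the same way; "car" does not.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # close to 1.0: similar meaning
print(cosine_similarity(cat, car))     # much lower: unrelated meaning
```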
Fine-Tuning
Fine-tuning adapts a pre-trained LLM to a specific task or domain by continuing training on a curated dataset of examples.
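The core idea — start from already-learned parameters and continue gradient descent on task-specific data — can be shown with a deliberately tiny stand-in. This is not how LLMs are fine-tuned in practice (that involves frameworks such as PyTorch or Hugging Face Transformers and billions of parameters); the single linear weight and datasets below are invented for illustration only.

```python
# Toy stand-in for fine-tuning: a "pre-trained" weight learned on broad
# data is adapted by continued gradient descent on a small domain dataset.
pretrained_w = 1.0                                   # broad data suggested y ~ 1.0 * x
domain_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # domain truth is closer to y ~ 2 * x

w = pretrained_w
learning_rate = 0.01
for epoch in range(200):
    for x, y in domain_data:
        grad = 2 * (w * x - y) * x  # derivative of squared error w.r.t. w
        w -= learning_rate * grad

print(round(w, 2))  # weight has shifted from 1.0 toward the domain relationship (~2.0)
```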
Foundation Model
A foundation model is a large AI model trained on broad data at scale, designed to be adapted to many downstream tasks rather than one specific use case.
Inference
Inference is the process of running a trained AI model on new inputs to generate predictions or outputs, as opposed to training the model on data.
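The training/inference split can be seen with any model, however small. In this sketch (a least-squares line fit standing in for a much larger model), the "training" phase fits parameters to data, after which "inference" applies the frozen parameters to an input the model never saw.

```python
# "Training": fit slope and intercept to observed data (here, y = 2x + 1).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
    sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

# "Inference": run the trained model on a new input without updating it.
def predict(x):
    return slope * x + intercept

print(predict(10.0))  # → 21.0
```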
Large Language Model (LLM)
An LLM is a deep learning model trained on massive text datasets to generate, summarize, translate, and reason with human language.
Multimodal AI
AI models that can process and generate multiple types of data - text, images, audio, and video - within a single system.
Retrieval-Augmented Generation (RAG)
RAG is an AI architecture that combines a retrieval system with an LLM, giving the model access to external knowledge at query time.
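A minimal sketch of the RAG flow: retrieve the most relevant document for a query, then inject it into the prompt sent to the model. Real systems rank by embedding similarity and call an actual LLM; here simple word overlap and a formatted string stand in for both, and the documents are invented examples.

```python
# Toy document store standing in for an external knowledge base.
documents = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Shipping is free on orders over fifty dollars.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query.

    Real RAG systems rank by embedding similarity instead of word overlap.
    """
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=overlap)

query = "How long does the warranty last?"
context = retrieve(query, documents)

# The retrieved text is injected into the prompt at query time.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```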
Synthetic Data
Artificially generated data that mimics real data - used to train, test, and fine-tune AI models when real data is scarce or private.
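A simple form of this is generating fabricated records that match the shape of real data without exposing anyone's information. The field names and value ranges below are illustrative assumptions, not a real schema.

```python
import random

random.seed(42)  # fixed seed so the fabricated dataset is reproducible

first_names = ["Ana", "Ben", "Chen", "Dara", "Eli"]
cities = ["Lagos", "Lyon", "Lima", "Leeds"]

def synthetic_customer():
    """Fabricate one customer record with plausible but fictional values."""
    return {
        "name": random.choice(first_names),
        "age": random.randint(18, 80),
        "city": random.choice(cities),
        "lifetime_spend": round(random.uniform(0, 5000), 2),
    }

dataset = [synthetic_customer() for _ in range(3)]
for record in dataset:
    print(record)
```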
Vector Database
A database optimized for storing and searching vector embeddings - the backbone of AI-powered search and RAG systems.
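At its core, a vector database stores (id, vector) pairs and answers nearest-neighbor queries. The in-memory sketch below does this with an exact, brute-force cosine-similarity scan; production systems such as FAISS or pgvector add approximate indexes to make search fast at scale, and the document ids and vectors here are made up.

```python
import math

class VectorStore:
    """Minimal in-memory vector store with brute-force cosine search."""

    def __init__(self):
        self.items = []  # list of (item_id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def search(self, query, k=1):
        """Return the ids of the k vectors most similar to the query."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = math.sqrt(sum(x * x for x in a))
            norm_b = math.sqrt(sum(y * y for y in b))
            return dot / (norm_a * norm_b)
        ranked = sorted(self.items, key=lambda item: cosine(query, item[1]),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]

store = VectorStore()
store.add("doc_pets", [0.9, 0.1])
store.add("doc_cars", [0.1, 0.9])
print(store.search([0.8, 0.2], k=1))  # → ['doc_pets']
```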
Zero-Shot Learning
An AI model's ability to perform a task it was never explicitly trained on, guided only by a natural language description.
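In practice this often means the task is specified entirely inside the prompt, with no labeled examples supplied. The helper below sketches such a prompt; the wording is illustrative and not tied to any specific provider's API.

```python
def zero_shot_prompt(text, labels):
    """Build a classification prompt that describes the task in plain
    language and supplies no training examples (hence "zero-shot")."""
    return (
        f"Classify the following review as one of: {', '.join(labels)}. "
        f"Reply with the label only.\n\nReview: {text}"
    )

print(zero_shot_prompt("The battery died after a week.", ["positive", "negative"]))
```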