Agentic Workflow
A multi-step AI process where a model autonomously plans, uses tools, and executes tasks without human input at each step.
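A minimal sketch of such a plan-then-execute loop. The tool names, the step format, and the stub planner (standing in for a real LLM call) are all illustrative assumptions:

```python
# Agentic-loop sketch: a "planner" proposes (tool, args) steps, and the
# loop executes them, feeding each result into the next step.
# The stub planner below stands in for an LLM; everything here is a toy.

def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}  # hypothetical tool registry

def stub_planner(goal):
    """Stand-in for an LLM planner: returns a fixed list of steps."""
    if goal == "compute (2 + 3) * 4":
        return [("add", (2, 3)), ("multiply", ("$prev", 4))]
    return []

def run_agent(goal):
    """Execute each planned step; "$prev" refers to the prior result."""
    result = None
    for tool_name, args in stub_planner(goal):
        args = tuple(result if a == "$prev" else a for a in args)
        result = TOOLS[tool_name](*args)
    return result
```

Calling `run_agent("compute (2 + 3) * 4")` runs both steps without further human input, which is the defining property of the workflow.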
AI Agent
An AI agent is a system that uses an LLM to autonomously plan, make decisions, use tools, and take actions to complete a goal.
AI Alignment
The challenge of ensuring AI systems behave in ways that match human intentions, values, and goals.
Hallucination
When an AI model generates confident-sounding but factually incorrect or fabricated information.
AI Wrapper
An AI wrapper is a product built on top of a foundation model API with a custom UI, workflow, or niche focus, rather than novel AI model development.
Context Window
The maximum amount of text an LLM can process in a single interaction - inputs plus outputs combined.
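Because the window covers inputs plus outputs combined, a request must budget for both. A rough sketch of that check, where the 8,192-token window and the 4-characters-per-token ratio are illustrative assumptions, not any provider's real limits:

```python
# Rough context-budget check: does prompt + requested output fit the window?
# Both numbers below are made-up examples, not real provider limits.

CONTEXT_WINDOW = 8192  # hypothetical model limit, in tokens

def estimate_tokens(text):
    """Crude heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt, max_output_tokens):
    """True if the prompt plus the reserved output budget fits the window."""
    return estimate_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOW
```

A real application would use the model's actual tokenizer rather than a character count, but the budgeting logic is the same.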
DeepSeek
A Chinese AI lab and open-source model family that trained frontier-level LLMs at a fraction of Western competitors' reported costs.
Fine-Tuning
Fine-tuning adapts a pre-trained LLM to a specific task or domain by continuing training on a curated dataset of examples.
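The curated dataset is typically a file of prompt/completion pairs, often one JSON object per line (JSONL). A sketch of preparing such a file - the field names are a common convention, not any specific provider's required schema:

```python
import json

# Turn (prompt, completion) pairs into JSONL, one training example per line.
# {"prompt": ..., "completion": ...} is a common convention, not a fixed spec.

def to_jsonl(pairs):
    """pairs: list of (prompt, completion) tuples -> JSONL string."""
    lines = [json.dumps({"prompt": p, "completion": c}) for p, c in pairs]
    return "\n".join(lines)
```

Each line becomes one training example; quality and consistency of these pairs matter more than raw quantity.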
Foundation Model
A foundation model is a large AI model trained on broad data at scale, designed to be adapted to many downstream tasks rather than one specific use case.
Inference
Inference is the process of running a trained AI model on new inputs to generate predictions or outputs, as opposed to training the model on data.
Multimodal AI
AI models that can process and generate multiple types of data - text, images, audio, and video - within a single system.
Open-Source Models
AI models whose weights, architecture, and training details are publicly released - enabling free use, modification, and self-hosting.
Prompt Engineering
Prompt engineering is the practice of crafting LLM inputs to reliably produce accurate, useful, and correctly formatted outputs for a given task.
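In practice this often means templating the prompt so the role, the task, a few worked examples, and the expected output format are all explicit. A sketch of such a template - the wording is illustrative, not a guaranteed-reliable recipe:

```python
# Few-shot prompt template: fix the role, the task, the output format,
# and some worked examples before appending the real input.

def build_prompt(task, examples, input_text):
    """Assemble a few-shot prompt with explicit format instructions."""
    parts = [
        f"You are a careful assistant. Task: {task}",
        "Respond with only the answer, no explanation.",
    ]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    parts.append(f"Input: {input_text}\nOutput:")
    return "\n\n".join(parts)
```

Ending the prompt at "Output:" nudges the model to complete the pattern the examples establish.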
Qwen
Alibaba's open-source large language model family - multilingual, high-performing, and available in sizes from 0.5B to 72B parameters.
RAG (Retrieval-Augmented Generation)
RAG is an AI architecture that combines a retrieval system with an LLM, giving the model access to external knowledge at query time.
Token
A token is the basic unit of text an LLM processes - roughly 3–4 characters or 0.75 words - used to measure input length, output length, and API cost.
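A back-of-envelope estimate using the ratio above. The 4-characters-per-token heuristic comes from the definition; the per-million-token price is a made-up example, not any provider's rate:

```python
# Rough token count and API-cost estimate from raw text length.
# The price below is a hypothetical example, not a real provider's rate.

PRICE_PER_MILLION_TOKENS = 3.00  # hypothetical USD rate

def estimate_tokens(text):
    """~4 characters per token is a common rough heuristic for English."""
    return max(1, len(text) // 4)

def estimate_cost(text):
    """Estimated USD cost to send this text as input."""
    return estimate_tokens(text) * PRICE_PER_MILLION_TOKENS / 1_000_000
```

For exact counts, each model family ships its own tokenizer; the heuristic is only for quick budgeting.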
Zero-Shot Learning
An AI model's ability to perform a task it was never explicitly trained on, guided only by a natural language description.
Comparing the three leading AI API providers for startup use cases - pricing, strengths, weaknesses, and when to choose each.