fastmcp.server.context
Functions
set_transport
reset_transport
set_context
Classes
LogData
Data object for passing log arguments to client-side handlers.
This provides an interface matching the Python standard library's logging module, for compatibility with structured logging.
Context
Context object providing access to MCP capabilities.
This provides a cleaner interface to MCP’s RequestContext functionality.
It gets injected into tool and resource functions that request it via type hints.
To use context in a tool function, add a parameter with the Context type annotation.
State set in on_initialize middleware will persist to subsequent tool
calls when using the same session object (STDIO, SSE, single-server HTTP).
For distributed/serverless HTTP deployments where different machines handle
the init and tool calls, state is isolated by the mcp-session-id header.
The context parameter name can be anything as long as it’s annotated with Context.
The context is optional - tools that don’t need it can omit the parameter.
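A minimal sketch of the injection pattern. `FakeContext` below is a hypothetical stand-in so the tool body runs outside a live server; against a real server you would import `FastMCP` and `Context` from `fastmcp`, register the function with `@mcp.tool`, and annotate the parameter as `ctx: Context`:

```python
import asyncio

class FakeContext:
    """Hypothetical stand-in for fastmcp.Context, recording log calls."""
    def __init__(self):
        self.logged = []
    async def info(self, message):
        self.logged.append(message)

# In a real server this function would be decorated with @mcp.tool and
# the parameter annotated as `ctx: Context`; the name `ctx` is arbitrary.
async def greet(name: str, ctx) -> str:
    await ctx.info(f"Greeting {name}")  # log to the connected client
    return f"Hello, {name}!"

ctx = FakeContext()
result = asyncio.run(greet("Ada", ctx))
print(result)  # Hello, Ada!
```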
Methods:
is_background_task
task_id
fastmcp
request_context
For direct HTTP access, use get_http_request() from fastmcp.server.dependencies,
which works whether or not the MCP session is available.
Example in middleware:
lifespan_context
report_progress
Args:
- progress: Current progress value, e.g. 24
- total: Optional total value, e.g. 100
- message: Optional status message describing current progress
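A sketch of progress reporting from a long-running tool. `FakeContext` is a hypothetical recorder standing in for the real Context (matching the documented signature) so the snippet runs without an MCP session; `process_items` is an illustrative tool body:

```python
import asyncio

class FakeContext:
    """Hypothetical recorder standing in for fastmcp's Context."""
    def __init__(self):
        self.calls = []
    async def report_progress(self, progress, total=None, message=None):
        self.calls.append((progress, total, message))

async def process_items(ctx, items):
    results = []
    for i, item in enumerate(items):
        # Report i of len(items) before handling each item.
        await ctx.report_progress(progress=i, total=len(items))
        results.append(item.upper())
    await ctx.report_progress(progress=len(items), total=len(items),
                              message="done")
    return results

ctx = FakeContext()
out = asyncio.run(process_items(ctx, ["a", "b"]))
```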
list_resources
Returns:
- List of Resource objects available on the server
list_prompts
Returns:
- List of Prompt objects available on the server
get_prompt
Args:
- name: The name of the prompt to get
- arguments: Optional arguments to pass to the prompt
Returns:
- The prompt result
read_resource
Args:
- uri: Resource URI to read
Returns:
- ResourceResult with contents
log
Sends a log message to the connected MCP Client. Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Args:
- message: Log message
- level: Optional log level. One of "debug", "info", "notice", "warning", "error", "critical", "alert", or "emergency". Default is "info".
- logger_name: Optional logger name
- extra: Optional mapping for additional arguments
transport
client_supports_extension
Checks whether the client advertised the given extension in the extensions extra field on ClientCapabilities sent during initialization.
Returns False when no session is available (e.g., outside a
request context) or when the client did not advertise the extension.
Example::
    from fastmcp.server.apps import UI_EXTENSION_ID

    @mcp.tool
    async def my_tool(ctx: Context) -> str:
        if ctx.client_supports_extension(UI_EXTENSION_ID):
            return "UI-capable client"
        return "text-only client"
client_id
request_id
session_id
Returns:
- The session ID for StreamableHTTP transports, or a generated ID for other transports.
session
debug
Sends a DEBUG-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
info
Sends an INFO-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
warning
Sends a WARNING-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
error
Sends an ERROR-level message to the connected MCP Client.
Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
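The convenience methods are level-fixed wrappers around log. A sketch of that relationship, using a hypothetical `FakeContext` that mirrors the documented signatures (not the library's actual implementation):

```python
import asyncio

class FakeContext:
    """Hypothetical stand-in: each helper delegates to log() with a fixed level."""
    def __init__(self):
        self.records = []
    async def log(self, message, level="info", logger_name=None, extra=None):
        # A real Context sends this to the connected client; here we record it.
        self.records.append((level, message))
    async def debug(self, message):
        await self.log(message, level="debug")
    async def info(self, message):
        await self.log(message, level="info")
    async def warning(self, message):
        await self.log(message, level="warning")
    async def error(self, message):
        await self.log(message, level="error")

async def main(ctx):
    await ctx.debug("starting")
    await ctx.warning("disk nearly full")
    await ctx.error("failed to open file")

ctx = FakeContext()
asyncio.run(main(ctx))
```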
list_roots
send_notification
notification: An MCP notification instance (e.g., ToolListChangedNotification())
close_sse_stream
Closes the current SSE stream, prompting the client to reconnect (after retry_interval milliseconds)
and resume receiving events from where it left off via the EventStore.
This is useful for long-running operations to avoid load balancer timeouts.
Instead of holding a connection open for minutes, you can periodically close
the stream and let the client reconnect.
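A sketch of that periodic-close pattern. `FakeContext` is a hypothetical stand-in counting close calls; `long_job` and `close_every` are illustrative, and a real tool would rely on the client resuming via the EventStore after each close:

```python
import asyncio

class FakeContext:
    """Hypothetical stand-in counting stream closes."""
    def __init__(self):
        self.closes = 0
    async def close_sse_stream(self):
        self.closes += 1

async def long_job(ctx, steps, close_every=10):
    for i in range(steps):
        # ... do one unit of work ...
        if (i + 1) % close_every == 0:
            # Close the stream so no load balancer sees a connection held
            # open too long; the client reconnects and resumes events.
            await ctx.close_sse_stream()
    return steps

ctx = FakeContext()
done = asyncio.run(long_job(ctx, steps=25, close_every=10))
```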
sample_step
Args:
- messages: The message(s) to send. Can be a string, list of strings, or list of SamplingMessage objects.
- system_prompt: Optional system prompt for the LLM.
- temperature: Optional sampling temperature.
- max_tokens: Maximum tokens to generate. Defaults to 512.
- model_preferences: Optional model preferences.
- tools: Optional list of tools the LLM can use.
- tool_choice: Tool choice mode ("auto", "required", or "none").
- execute_tools: If True (default), execute tool calls and append results to history. If False, return immediately with tool_calls available in the step for manual execution.
- mask_error_details: If True, mask detailed error messages from tool execution. When None (default), uses the global settings value. Tools can raise ToolError to bypass masking.
- tool_concurrency: Controls parallel execution of tools:
  - None (default): Sequential execution (one at a time)
  - 0: Unlimited parallel execution
  - N > 0: Execute at most N tools concurrently
  If any tool has sequential=True, all tools execute sequentially regardless of this setting.
Returns:
- SampleStep containing:
  - .response: The raw LLM response
  - .history: Messages including input, assistant response, and tool results
  - .is_tool_use: True if the LLM requested tool execution
  - .tool_calls: List of tool calls (if any)
  - .text: The text content (if any)
sample
When result_type is specified, a synthetic final_response tool is
created. The LLM calls this tool to provide the structured response,
which is validated against the result_type and returned as .result.
For fine-grained control over the sampling loop, use sample_step() instead.
Args:
- messages: The message(s) to send. Can be a string, list of strings, or list of SamplingMessage objects.
- system_prompt: Optional system prompt for the LLM.
- temperature: Optional sampling temperature.
- max_tokens: Maximum tokens to generate. Defaults to 512.
- model_preferences: Optional model preferences.
- tools: Optional list of tools the LLM can use. Accepts plain functions or SamplingTools.
- result_type: Optional type for structured output. When specified, a synthetic final_response tool is created and the LLM's response is validated against this type.
- mask_error_details: If True, mask detailed error messages from tool execution. When None (default), uses the global settings value. Tools can raise ToolError to bypass masking.
- tool_concurrency: Controls parallel execution of tools:
  - None (default): Sequential execution (one at a time)
  - 0: Unlimited parallel execution
  - N > 0: Execute at most N tools concurrently
  If any tool has sequential=True, all tools execute sequentially regardless of this setting.
Returns:
- SamplingResult[T] containing:
  - .text: The text representation (raw text or JSON for structured)
  - .result: The typed result (str for text, parsed object for structured)
  - .history: All messages exchanged during sampling
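A sketch of structured sampling with result_type. `FakeContext` is a hypothetical stand-in returning a canned, already-validated result (a real Context would route through the client's LLM and the synthetic final_response tool); `Weather` is an illustrative result type:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Weather:
    city: str
    temp_c: float

@dataclass
class FakeResult:
    """Minimal stand-in for SamplingResult (only .text and .result)."""
    text: str
    result: object

class FakeContext:
    """Hypothetical stand-in: returns a canned, validated result."""
    async def sample(self, messages, result_type=None, **kwargs):
        if result_type is Weather:
            # .text carries the JSON; .result is the parsed, typed object.
            return FakeResult(text='{"city": "Oslo", "temp_c": 3.5}',
                              result=Weather(city="Oslo", temp_c=3.5))
        return FakeResult(text="ok", result="ok")

async def main(ctx):
    res = await ctx.sample("What's the weather in Oslo?",
                           result_type=Weather)
    return res.result

ctx = FakeContext()
weather = asyncio.run(main(ctx))
```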
elicit
Args:
- message: A human-readable message explaining what information is needed
- response_type: The type of the response, which should be a primitive type, dataclass, or BaseModel. If it is a primitive type, an object schema with a single "value" field will be generated.
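A sketch of handling an elicitation result. `FakeContext` and `FakeElicitResult` are hypothetical stand-ins (assuming an accept/decline/cancel-style outcome) so the snippet runs without a client; a real Context would return the library's own result type:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeElicitResult:
    """Hypothetical stand-in for an elicitation result."""
    action: str           # assumed: "accept", "decline", or "cancel"
    data: object = None

class FakeContext:
    """Hypothetical stand-in: pretends the user accepted with a value."""
    async def elicit(self, message, response_type=str):
        return FakeElicitResult(action="accept",
                                data=response_type("blue"))

async def ask_color(ctx):
    result = await ctx.elicit("What is your favorite color?",
                              response_type=str)
    if result.action == "accept":
        return result.data
    return None  # user declined or cancelled

ctx = FakeContext()
color = asyncio.run(ask_color(ctx))
```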
set_state
Values stored with serializable=False are kept in a request-scoped
dict and only live for the current MCP request (tool call, resource
read, or prompt render). They will not be available in subsequent
requests.
The key is automatically prefixed with the session identifier.
get_state
Checks request-scoped state first (values stored with serializable=False),
then falls back to the session-scoped state store.
Returns None if the key is not found.
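A sketch of the set_state/get_state lookup behavior, using a hypothetical dict-backed `FakeContext` (a real Context also prefixes keys with the session identifier, which this stand-in omits):

```python
class FakeContext:
    """Hypothetical dict-backed stand-in for the session state store."""
    def __init__(self):
        self._state = {}

    def set_state(self, key, value):
        # A real Context prefixes the key with the session identifier.
        self._state[key] = value

    def get_state(self, key):
        # Returns None when the key is not found, like the documented API.
        return self._state.get(key)

ctx = FakeContext()
ctx.set_state("user_id", "u-123")
found = ctx.get_state("user_id")
missing = ctx.get_state("never_set")
```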
delete_state
enable_components
Args:
- names: Component names or URIs to match.
- keys: Component keys to match.
- version: Component version spec to match.
- tags: Tags to match (component must have at least one).
- components: Component types to match.
- match_all: If True, matches all components regardless of other criteria.
disable_components
Args:
- names: Component names or URIs to match.
- keys: Component keys to match.
- version: Component version spec to match.
- tags: Tags to match (component must have at least one).
- components: Component types to match.
- match_all: If True, matches all components regardless of other criteria.

