Assistants
Creating Assistants
Introduction
QuantConnect Assistants are agentic AI systems. They don't just generate a single response and stop. They take a goal you give them, decide what to do next, call a tool to make it happen, observe what came back, and decide what to do after that. This loop runs until the goal is met or the assistant decides it can't proceed.
This loop is what makes assistants useful for real work. A non-agentic model can describe how to fit a statistical model, but an assistant can actually load the data, fit the model, read the residuals, and report a verdict. The catch is that an assistant is only as good as the system prompt you write, the task you assign it, and the tools you let it use.
Create System Prompts
An assistant has two kinds of prompts. The system prompt is the standing brief, set when you create the assistant and read on every run. The task prompt comes with each deployment and describes the specific work for that single run. The system prompt defines the assistant's role and method while the task prompt defines what to do right now.
Write the system prompt like a job description. List the steps the assistant should work through, the tools it should use at each step, and what counts as success. Think of it as teaching the assistant how to approach the problem, not just what to do. Spell out the order to gather evidence, what to check at each step, and how to decide when the work is done.
A well-written system prompt names the inputs the assistant should look at, the constraints it should respect, the output it should produce, and any caps on how much work to do. Keep the system prompt focused on the assistant's role. Put per-run specifics into the task prompt instead, since those change every deployment and the assistant has no memory of prior runs.
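For example, a hypothetical residual-analysis assistant (illustrative names and thresholds, echoing the fit-and-verdict workflow above) might split its prompts like this:
System prompt: Fit the statistical model described in the project's research notebook to the data the task names, run an ADF stationarity test on the residuals, and report a verdict. Treat a p-value below 0.05 as stationary unless the task says otherwise.
Task prompt: Fit the model to the most recent 90 days of data and report whether the residuals are stationary.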
To make one system prompt work across project languages, wrap language-specific instructions in <python> and <csharp> tags. The assistant only sees the contents of <python>...</python> when running on a Python project, and only sees <csharp>...</csharp> when running on a C# project. This keeps the active context free of rules that don't apply to the current run.
<csharp> C# prompt details </csharp> <python> Python prompt details </python>
Manage Context
Context is everything the model sees when it runs, including the system prompt, the task prompt, results from tool calls, and any data passed in. Every word counts against the model's context budget. Long contexts cost more, slow the model down, and crowd out the actual reasoning it needs to do. The goal is to give the assistant the minimum it needs to do the work well.
Keep the system prompt tight. Focus on the parts that change outcomes like the specific steps, the named tools, the hard constraints, and the format of the output. Avoid motivational language, repeated reminders, and paragraphs of background the assistant doesn't need.
Keep task prompts focused on the specific work for this run. Include the inputs the assistant needs and the constraints unique to this deployment, and don't repeat what is already in the system prompt.
Select Tools
Tools are the capabilities you grant an assistant. For example, reading project files, running notebook cells, creating backtests, fetching recent news, and sending notifications are all tools you can enable. Each tool you enable expands what the assistant can do and what it can choose between when planning a step.
Be selective about which tools to enable for an assistant. The model considers every available tool at every step, so an assistant with twenty tools is slower and more likely to misroute work than one with the four tools it actually needs. Withholding tools matters as much as enabling them.
When you write the system prompt, name the tools the assistant should use at each step. This tells the model which capability you intended for which task and reduces the chance it picks a related-looking tool that isn't the right tool for the job.
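For example, a step in a system prompt might bind each action to its tool like this (illustrative phrasing, using tool names from the examples below):
Run the model_training notebook with jupyter_execute_notebook, then read the fit metrics from the last cell with jupyter_read_cell.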
Schedule Work
Organizations on the Trading Firm or Institution tier can schedule assistant tasks to run using the same Scheduled Event system that you use in QuantConnect algorithms. These schedules include market open, fifteen minutes before close, the first trading day of the month, and more. Since Scheduled Events are aware of market hours, you can ensure your assistants won't run on weekends or US market holidays if that's your intention. This saves you from maintaining a holiday calendar or weekday filter in your prompts.
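For reference, these are the same date and time rules you compose inside an algorithm. A minimal sketch of the three schedules named above, assuming the standard LEAN Python API:

from AlgorithmImports import *

class ScheduleRulesDemo(QCAlgorithm):
    def initialize(self):
        self.set_start_date(2024, 1, 1)
        symbol = self.add_equity("SPY").symbol
        # At market open on every trading day; the rules are market-hours
        # aware, so weekends and US holidays are skipped automatically.
        self.schedule.on(
            self.date_rules.every_day(symbol),
            self.time_rules.after_market_open(symbol, 0),
            self.run_task)
        # Fifteen minutes before the close on every trading day.
        self.schedule.on(
            self.date_rules.every_day(symbol),
            self.time_rules.before_market_close(symbol, 15),
            self.run_task)
        # The first trading day of each month, at noon.
        self.schedule.on(
            self.date_rules.month_start(symbol),
            self.time_rules.at(12, 0),
            self.run_task)

    def run_task(self):
        pass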
Structured Outputs
A structured output is a JSON schema that constrains what the assistant returns. When you specify one, the assistant produces a response that conforms to the schema instead of free-form prose. This is the difference between an assistant that says "the residuals look stationary, with a p-value around 0.03" and one that returns {"stationary": true, "p_value": 0.03}.
Use a structured output when something downstream consumes the result. Some examples include another assistant in a chain, a custom API response handler, or just a human who needs to quickly scan for the key insight. Without a schema, the next step has to parse prose, which fails the moment the wording shifts.
To use one, define the JSON schema in the Output Schema field of the assistant configuration. Keep the schema as flat as the next step needs. Nested schemas are valid but add room for the assistant to drift from the shape you wanted.
For example, the schema that produces the preceding stationarity output would look like this:
{
  "type": "object",
  "properties": {
    "stationary": {
      "type": "boolean",
      "description": "Whether the residuals pass an ADF stationarity test."
    },
    "p_value": {
      "type": "number",
      "description": "The p-value from the ADF test."
    }
  },
  "required": ["stationary", "p_value"]
}
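Because the response is guaranteed to conform to the schema, a downstream consumer can parse it directly instead of scraping prose. A minimal sketch in Python, using the example response from above:

import json

# The structured response from the assistant, as in the example above.
response = '{"stationary": true, "p_value": 0.03}'

result = json.loads(response)
if result["stationary"]:
    print(f"Residuals are stationary (p = {result['p_value']}).")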
Examples
The following example assistant configurations demonstrate some common use cases.
Example 1: News Summarizer
This example assistant reads the latest oil news once a day and emails you a summary.
System prompt:
Read the financial news from the past 24 hours about crude oil, OPEC, and US energy policy. Filter to the items that materially affect the oil price. Write a summary of three to five bullets, each one sentence, naming the event and why it matters. Send the summary by email.
Tools:
financial_data_news_articles, financial_data_web_get, send_email_notification
Schedule: Daily 30 minutes before US market open.
Example 2: ML Trainer
This example assistant runs a research notebook daily, evaluates the fit, and saves the model to the Object Store if it passes a quality bar.
System prompt:
Execute the model_training notebook in the current project. Read the fit metrics from the last cell's output. If the R-squared is above 0.4 and the residuals pass an ADF stationarity test, serialize the trained model and save it to the Object Store under the key "latest_model". If the fit fails either check, do not overwrite the existing model and send an email summarizing the failure.
Tools:
jupyter_read_notebook, jupyter_execute_notebook, jupyter_read_cell, object_store_set, send_email_notification
Schedule: Daily after US market close.
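For context, the checks this prompt describes map to a final notebook cell along these lines. This is a hypothetical sketch, assuming statsmodels and scikit-learn are available; the model and data are placeholders, and only the printed metrics matter to the assistant:

import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.stattools import adfuller

# Placeholder model and data; a real notebook would load project data.
X, y = np.random.rand(250, 3), np.random.rand(250)
model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

# The two metrics the assistant reads from the cell output.
r_squared = model.score(X, y)
adf_p_value = adfuller(residuals)[1]
print(f"r_squared={r_squared:.3f}, adf_p_value={adf_p_value:.4f}")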
Example 3: Optimization Stability Monitor
This example assistant runs a weekly optimization on the project, measures how much the Sharpe ratios vary across the parameter combinations, and sends a Telegram alert only when that variation has grown enough to suggest the strategy is becoming sensitive to its parameters.
System prompt:
Run an optimization on the current project. Read the resulting backtests and compute the standard deviation of their Sharpe ratios across all the parameter combinations. Compare to the previous standard deviation stored in the Object Store under the key "param_stability_std". If the new value is more than 30% above the previous, classify the parameters as less_stable and send a Telegram alert with the structured diagnosis. Otherwise, classify as stable and do not send any notification. Save the new standard deviation to the Object Store either way.
Tools:
create_optimization, read_optimization, object_store_get, object_store_set, send_telegram_notification
Schedule: Weekly on Sunday at noon.
Output schema:
{
  "type": "object",
  "properties": {
    "verdict": {
      "type": "string",
      "enum": ["stable", "less_stable"]
    },
    "current_sharpe_std": {
      "type": "number",
      "description": "Standard deviation of Sharpe ratios across this week's optimization."
    },
    "previous_sharpe_std": {
      "type": "number",
      "description": "Standard deviation from the previous run, read from the Object Store."
    },
    "recommendation": {
      "type": "string",
      "description": "What action to take, in one sentence."
    }
  },
  "required": ["verdict", "current_sharpe_std", "previous_sharpe_std", "recommendation"]
}
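For context, the comparison this prompt describes reduces to a few lines. A sketch with hypothetical inputs; in a real run, the Sharpe ratios come from the optimization's backtests and the previous value from the Object Store:

import numpy as np

# Hypothetical inputs standing in for the optimization results and the
# value stored under the "param_stability_std" key.
sharpe_ratios = [1.21, 0.94, 1.38, 1.05, 0.72]
previous_std = 0.18

# Classify as less_stable when the spread grew more than 30%.
current_std = float(np.std(sharpe_ratios))
verdict = "less_stable" if current_std > 1.3 * previous_std else "stable"
print(f"verdict={verdict}, current={current_std:.3f}, previous={previous_std:.3f}")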