Hugging Face

Key Concepts

Introduction

Hugging Face is a leading company in natural language processing (NLP), providing powerful tools and pre-trained models that can significantly enhance your trading algorithms on QuantConnect. By integrating Hugging Face models, you can leverage advanced NLP capabilities for tasks such as sentiment analysis, news classification, and market forecasting without defining and training the models yourself, drawing on the research of community members to increase the sophistication of your trading algorithms.

Supported Models

Hundreds of thousands of Hugging Face models are publicly available and ready to use. To view all of them, see the Model Hub on the Hugging Face website. The Model card tab of each model repository provides an overview of how the model works, its requirements, and a quick start guide.

QuantConnect Cloud caches some of the most popular models to speed up your development workflow. The following table shows the cached models, and a short loading sketch follows the table:

Name
ahmedrachid/FinancialBERT-Sentiment-Analysis
amazon/chronos-t5-base
amazon/chronos-t5-large
amazon/chronos-t5-small
amazon/chronos-t5-tiny
autogluon/chronos-t5-base
autogluon/chronos-t5-large
autogluon/chronos-t5-tiny
bardsai/finance-sentiment-fr-base
cardiffnlp/twitter-roberta-base-sentiment-latest
distilbert/distilbert-base-uncased
FacebookAI/roberta-base
google-bert/bert-base-uncased
google/gemma-7b
microsoft/deberta-base
mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
nickmuchi/deberta-v3-base-finetuned-finance-text-classification
nickmuchi/distilroberta-finetuned-financial-text-classification
nickmuchi/sec-bert-finetuned-finance-classification
openai-community/gpt2
ProsusAI/finbert
Salesforce/moirai-1.0-R-base
Salesforce/moirai-1.0-R-large
Salesforce/moirai-1.0-R-small
StephanAkkerman/FinTwitBERT-sentiment
yiyanghkust/finbert-tone
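
For example, the following is a minimal sketch of loading one of the cached models, assuming the transformers library and the ProsusAI/finbert repository from the table above; the sample headline is purely illustrative:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load the tokenizer and model weights. Because the repository is cached in
# QuantConnect Cloud, from_pretrained should resolve from the local cache
# instead of downloading the files.
tokenizer = AutoTokenizer.from_pretrained('ProsusAI/finbert')
model = AutoModelForSequenceClassification.from_pretrained('ProsusAI/finbert')

# Wrap the model in a text-classification pipeline and score a sample headline.
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
print(classifier('The company raised its full-year revenue guidance.'))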

To see the commit hashes of the cached models, run the following algorithm in QuantConnect Cloud and then view the logs:

from huggingface_hub import scan_cache_dir

class HuggingFaceModelHashAlgorithm(QCAlgorithm):

    def initialize(self):
        # Scan the local Hugging Face cache directory that QuantConnect Cloud provides.
        cache_info = scan_cache_dir()
        cached_models_log = ''
        for entry in cache_info.repos:
            # Each cached repository can contain one or more revisions (commit hashes).
            revisions = [revision.commit_hash for revision in entry.revisions]
            cached_models_log += f'\nRepo: {entry.repo_id}. Revisions: {revisions}'
        # Stop the algorithm and write the summary to the logs.
        self.quit(cached_models_log)

Train Models

Hugging Face models are pre-trained, so you can quickly load them into your algorithms and use their output to inform your trading decisions. To customize the model for your specific use case, you can fine-tune the pre-trained model with your own training data.
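
As a minimal illustration, the following sketch fine-tunes one of the cached models, assuming the transformers and datasets libraries are available and using a few hypothetical labeled headlines in place of real training data:

from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments

# Hypothetical labeled headlines; replace these with your own training data.
texts = ['Shares rallied after the earnings beat.', 'The company cut its full-year guidance.']
labels = [0, 1]  # Indices into the model's label set.

tokenizer = AutoTokenizer.from_pretrained('ProsusAI/finbert')
model = AutoModelForSequenceClassification.from_pretrained('ProsusAI/finbert')

# Tokenize the headlines and build a training dataset.
dataset = Dataset.from_dict({'text': texts, 'label': labels})
dataset = dataset.map(lambda row: tokenizer(row['text'], truncation=True, padding='max_length', max_length=64))

# Fine-tune the pre-trained weights on the new examples.
args = TrainingArguments(output_dir='finetuned-finbert', num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()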

Examples

The following example algorithm demonstrates how to load a Hugging Face model into a trading algorithm, fine-tune it, and use its output to inform trading decisions:
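
The sketch below is illustrative only: it assumes the Tiingo News dataset for headlines and the cached ProsusAI/finbert model, it uses deliberately simple trading rules, and it omits the fine-tuning step, which follows the pattern shown above:

from AlgorithmImports import *
from transformers import pipeline

class HuggingFaceSentimentSketchAlgorithm(QCAlgorithm):

    def initialize(self):
        self.set_start_date(2023, 1, 1)
        self.set_cash(100000)
        self._symbol = self.add_equity('AAPL', Resolution.DAILY).symbol
        # Subscribe to Tiingo News articles for the underlying Equity.
        self._news_symbol = self.add_data(TiingoNews, self._symbol).symbol
        # Load a cached sentiment model. ProsusAI/finbert is one of the cached repositories listed above.
        self._classifier = pipeline('text-classification', model='ProsusAI/finbert')

    def on_data(self, slice):
        if self._news_symbol not in slice:
            return
        article = slice[self._news_symbol]
        # Score the headline. FinBERT returns a label (positive, negative, or neutral) and a confidence score.
        result = self._classifier(article.title)[0]
        if result['label'] == 'positive':
            self.set_holdings(self._symbol, 1)
        elif result['label'] == 'negative':
            self.liquidate(self._symbol)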
