Introduction to the LlamaHub

LlamaHub is a registry of hundreds of integrations, agents and tools that you can use within LlamaIndex.

We will be using various integrations in this course, so let's first look at the LlamaHub and how it can help us.

Let's see how to find and install the dependencies for the components we need.

Installation

LlamaIndex installation instructions are available as a well-structured overview on LlamaHub. This might be a bit overwhelming at first, but most installation commands follow an easy-to-remember format:

pip install llama-index-{component-type}-{framework-name}

Let's install the dependencies for an LLM component and an embedding component using the Hugging Face Inference API integration.

pip install llama-index-llms-huggingface-api llama-index-embeddings-huggingface
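The naming convention above also tells you where to import from: the pip package name maps directly onto the Python import path (hyphens become dots and underscores). A small sketch of that mapping, using a hypothetical helper that is not part of LlamaIndex:

```python
# Hypothetical helper (not part of LlamaIndex) illustrating the naming
# convention: llama-index-{component-type}-{framework-name} on pip
# becomes llama_index.{component_type}.{framework_name} in Python.
def package_to_import_path(package: str) -> str:
    """Convert a LlamaHub package name to its Python import path."""
    prefix = "llama-index-"
    if not package.startswith(prefix):
        raise ValueError(f"not a LlamaHub integration package: {package}")
    # The first segment after the prefix is the component type;
    # the rest is the framework name.
    component_type, _, framework = package[len(prefix):].partition("-")
    return f"llama_index.{component_type}.{framework.replace('-', '_')}"

print(package_to_import_path("llama-index-llms-huggingface-api"))
# llama_index.llms.huggingface_api
print(package_to_import_path("llama-index-embeddings-huggingface"))
# llama_index.embeddings.huggingface
```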

Usage

Once installed, we can look at the usage patterns. You'll notice that the import paths mirror the install command! Below is an example of using the Hugging Face Inference API for an LLM component.

import os

from dotenv import load_dotenv
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

# Load the .env file
load_dotenv()

# Retrieve HF_TOKEN from the environment variables
hf_token = os.getenv("HF_TOKEN")

llm = HuggingFaceInferenceAPI(
    model_name="Qwen/Qwen2.5-Coder-32B-Instruct",
    temperature=0.7,
    max_tokens=100,
    token=hf_token,
    provider="auto",
)

response = llm.complete("Hello, how are you?")
print(response)
# I am good, how can I help you today?
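One caveat with the pattern above: if `HF_TOKEN` is missing from your environment, `os.getenv` silently returns `None`, and the failure only surfaces later as a confusing authentication error from the API. A defensive variant is to fail fast; this is a sketch with a hypothetical helper and a dummy variable name, not part of LlamaIndex:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, failing fast if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; add it to your .env file or export it."
        )
    return value


# Demo with a dummy variable so the sketch is self-contained;
# in real code you would call require_env("HF_TOKEN").
os.environ.setdefault("HF_TOKEN_DEMO", "hf_xxx")
print(require_env("HF_TOKEN_DEMO"))
# hf_xxx
```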

Wonderful, we now know how to find, install, and use the integrations for the components we need. Let's dive deeper into the components and see how we can use them to build our own agents.