October 19, 2024

About

We were using ServiceContext to convert documents into text, feed that text into LLMs, and build nodes from it.

But there was a drastic update in LlamaIndex.

Settings

Now we just need Settings, not ServiceContext:

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
Settings.embed_model = HuggingFaceEmbedding(model_name=model_name)
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=30)

debug_handler = LlamaDebugHandler()
Settings.callback_manager = CallbackManager([debug_handler])

It acts like a global setting, and it makes the code clearer than before. I am happy.
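To make the "global setting" idea concrete, here is a toy sketch (plain Python, not LlamaIndex code) of the pattern: a module-level singleton whose attributes supply lazy defaults, which individual calls can still override. The names here are illustrative assumptions, not the library's internals.

```python
class _Settings:
    """Toy global-settings singleton with a lazily created default."""

    def __init__(self):
        self._llm = None

    @property
    def llm(self):
        # Fall back to a default the first time the attribute is read.
        if self._llm is None:
            self._llm = "default-llm"
        return self._llm

    @llm.setter
    def llm(self, value):
        self._llm = value


# Module-level singleton, analogous in spirit to llama_index.core.Settings.
Settings = _Settings()


def query(llm=None):
    """A component falls back to the global default when not given one."""
    return llm or Settings.llm


Settings.llm = "my-llm"     # configure once, globally
print(query())              # components pick up the global value
print(query(llm="local"))   # a per-call argument still wins
```

This is why the new style reads cleaner: you configure the stack once at startup instead of threading a ServiceContext through every constructor.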