Fine-Tuning an LLM to Create a ChatBot with Enterprise-Specific Data

Label Studio Team

Our Data Scientist in Residence, Jimmy Whitaker, recently outlined how to fine-tune foundation models for AI applications. We followed up with a webinar in which Jimmy built a chatbot using OpenAI, Gradio, LangChain, and Label Studio, fine-tuning it on our own Label Studio documentation. In case you missed it, the full recording of the webinar is available on YouTube. Together with the From Foundation Models to Fine Tuned Applications tutorial, you can follow along and build your own expert question-and-answer chatbot.

LLMs Are Like Libraries Stuck in Time

OpenAI's models are excellent, so why would we need to fine-tune them? Don't they know everything? Here's Jimmy's analogy to illustrate how LLMs work and why they need tuning.

Imagine a Large Language Model (LLM) as a librarian in a vast library, one who has read every book on the shelves. You can approach this librarian with any question, and they'll provide an answer based on their extensive reading. While their responses are generally impressive, they're not always perfect.

LLMs are trained through a process similar to teaching someone to speak. They are exposed to billions of sentences and tasked with predicting the next word or character in a sequence. Through continuous feedback and correction, they learn to recognize patterns, making them adept at tasks like text generation, question answering, and even coding assistance. If they don't know an answer outright, they can often infer one.
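To make that training objective concrete, here is a minimal sketch of next-token prediction using Hugging Face's transformers library with GPT-2. The library and model choice are ours for illustration; the webinar itself used OpenAI's hosted models.

```python
# A minimal sketch of next-token prediction, the training objective
# described above. GPT-2 is used here purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The librarian handed me a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The last position holds the model's probability distribution for the
# next token; during training, the loss pushes this toward the true token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r:>12}  {prob:.3f}")
```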

However, when it comes to facts, they need additional data. To continue the analogy, you need to keep stocking the library with new books; outside the analogy, that means continuing to train LLMs on up-to-date data.

That’s essentially what our exercise was about. It also uncovered some interesting questions from the crowd that might help others as they fine-tune their models.

Q&A: Fine-Tuning LLMs with Label Studio

We had a great discussion after Jimmy’s presentation. Here are some questions and answers from the webinar that might help you as you use Label Studio to fine-tune your LLMs.

How do you know it's citing the embeddings versus using knowledge from the pre-trained model?

The large language model is certainly biased toward its training data. To counter that, the system is engineered with prompts that instruct the model to answer only from the context it's given, which keeps it from leaning too heavily on its pre-existing knowledge.
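Here is a minimal sketch of that prompt-constraining approach, assuming the official OpenAI Python client (v1 or later). The model name, the refusal phrasing, and the retrieved `context` string are placeholders, not the exact prompt from the webinar.

```python
# A minimal sketch of constraining an LLM to the provided context,
# assuming the official OpenAI Python client. Model name and prompt
# wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a support assistant for Label Studio. Answer ONLY using the "
    "context provided below. If the context does not contain the answer, "
    "or the question is outside the Label Studio domain, reply exactly: "
    "'I don't know based on the documentation I have.'"
)

def answer(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; swap in your own model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # lower temperature discourages improvisation
    )
    return response.choices[0].message.content
```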

How do you prevent the model from answering questions outside the desired domain?

The system prompt is designed to prevent the model from answering questions outside the desired domain; the sketch above shows one way to phrase that instruction.

Are there clear performance benefits to using a vector database for information retrieval instead of just fine-tuning the information?

If you have the data, tuning the model typically provides better results. However, tuning can be complex and may lead to "catastrophic forgetting." It's often good to start simple and move to more complex methods when needed.
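For teams that do have the data and choose the tuning route, here is a hedged sketch of kicking off a fine-tune using OpenAI's fine-tuning API as one concrete option. The training file name and base model below are assumptions for illustration.

```python
# A hedged sketch of the "tune the model" path via OpenAI's fine-tuning
# API. The JSONL file name and base model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Training data: one JSON object per line in chat format, e.g.
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("labelstudio_qa.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```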

If we're interested in querying specific portions of documents from our database, are there any special considerations to keep in mind?

It's essential to provide the model with the best possible context. For structured documents, consider splitting your data on titles, sections, or other relevant markers, and keep the model's context length limit in mind; the sketch below shows one approach.
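Here is a minimal sketch of structure-aware splitting with LangChain, assuming your source documents are markdown (as Label Studio's docs are). The file path is a placeholder, and the package layout varies by version; in recent releases the splitters live in `langchain_text_splitters`.

```python
# A minimal sketch of splitting markdown docs on headers, then capping
# chunk size to respect the model's context window. Paths and sizes are
# illustrative assumptions.
from langchain_text_splitters import (
    MarkdownHeaderTextSplitter,
    RecursiveCharacterTextSplitter,
)

markdown_text = open("docs/get_started.md").read()  # placeholder path

# First pass: split on document structure so each chunk keeps its
# title/section as metadata.
header_splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "title"), ("##", "section")]
)
docs = header_splitter.split_text(markdown_text)

# Second pass: cap chunk size so every chunk fits in the context window.
chunker = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = chunker.split_documents(docs)
```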

Where can one get data, especially for targeting a particular application?

Owning and having access to the right data is crucial. The data used for training should be relevant to the application. For instance, if you’re building a chatbot about sports, you might consider sourcing data from sports forums or communities.

How do you update the model while it's running and live?

Updating a live model is one of the harder problems in machine learning pipelines. Updating a large language model (LLM) while it's running typically isn't done in real time because of the intricacies of retraining and restarting it; instead, models are updated periodically, with newer versions replacing older ones.

However, here's a general overview of how you might approach the process:

  1. Collect New Data: The first step is to gather additional training data. This data might include new text, corrections to the model’s previous errors, user feedback, etc.
  2. Retraining/Fine-Tuning: Use the new data to fine-tune the model. This process doesn't usually involve retraining the entire model from scratch but adjusting it based on the latest data. This computationally intensive step requires careful monitoring to ensure the model doesn't "forget" its previous knowledge or introduce new biases.
  3. Validation: Before deploying the updated model, it's essential to validate its performance. This step involves testing the model on a held-out dataset and comparing its performance to the previous version. Ensuring the updated model is safe and offers improved performance is essential.
  4. Deployment: Once validated, the new model can be deployed. Typically, this doesn't involve "updating" the old model while it's live (see the routing sketch after this list). Instead:
    1. The old model continues to run.
    2. The new model is initialized and started in parallel.
    3. Once the new model is confirmed to be running smoothly, traffic is shifted from the old model to the new one. This switchover can be done gradually or all at once.
    4. After the transition is complete, the old model can be decommissioned.
  5. Monitoring & Feedback Loop: The model's performance should be continuously monitored after deployment. Note any issues, anomalies, or areas of improvement for the next update cycle.
  6. Rollback Strategy: Always have a strategy to revert to the previous model if unexpected issues arise with the new one. A rollback strategy ensures service continuity and user safety.
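As a hypothetical illustration of the gradual traffic shift in step 4, here is a sketch that routes a growing fraction of requests to the new model while the old one keeps serving the rest. The `old_model`/`new_model` callables are placeholders for whatever serving endpoints you actually run.

```python
# A hypothetical sketch of gradual traffic shifting between two model
# versions. The model callables are placeholders for real endpoints.
import random

def make_router(old_model, new_model, new_traffic_fraction: float):
    """Return a callable that splits traffic between two model versions."""
    def route(request):
        if random.random() < new_traffic_fraction:
            return new_model(request)
        return old_model(request)
    return route

# Start with 5% of traffic on the new model; raise the fraction as
# monitoring confirms it behaves well, and decommission the old model
# once the fraction reaches 1.0.
router = make_router(old_model=lambda q: f"v1: {q}",
                     new_model=lambda q: f"v2: {q}",
                     new_traffic_fraction=0.05)
```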

How do you effectively manage your data for training large language models?

Owning your data and having access to it is critical. The data should be relevant and come from reliable sources, and tools that manage data effectively are just as important as the tools used to retrain the model.

The Label Studio Community Is Here to Accelerate Your AI Development

Large Language Models offer an exciting frontier for AI applications, particularly in developing sophisticated chatbots tailored to specific enterprise needs. As Jimmy illustrated in our webinar, while the foundational knowledge of LLMs is vast and remarkable, the real magic happens when we fine-tune them with specific datasets and applications in mind. The session demonstrated the technical nuances of this process and highlighted the community's curiosity and commitment to harnessing the full potential of AI. The collaboration between data scientists, developers, and end users will shape the next chapter of AI-driven solutions as we continue exploring, experimenting, and evolving in this rapidly changing space. We encourage you to join our Slack for more conversations on how to use Label Studio.

Related Content

  • Strategies for Evaluating LLMs

    Sure, benchmarks are cool, but they don’t give you the feel or the intuition of how a model actually works. To get that, you’ve got to hack around with the model and throw real-world prompts at it — like you’d do in day-to-day tasks.

    Label Studio Team

    April 23, 2024

  • A Step-by-Step Guide to Preparing Data for Labeling

    In the world of machine learning, data labeling plays a central role in training accurate and reliable models, and properly preparing your data for labeling sets the foundation for the success of your machine learning projects.

    Label Studio Team

    March 6, 2024

  • Interactive Data Labeling with LLMs and Label Studio’s Prompt Interface

    Learn how to build a prompt-centric workflow in Label Studio, which combines real-time prompt engineering with data labeling. The goal of this prompter is to make working with data and LLMs seamless and dynamic.

    Jimmy Whitaker

    Data Scientist in Residence at HumanSignal