
Review and What's Next

Hooray! You’ve successfully built out an annotation project in Label Studio, and you’re ready to collect data and train or evaluate your models. It’s important to remember that data annotation, whether done by a model or by a human, is an iterative process. You’ll likely need to refine and adjust your annotation schema as you learn from the process: what edge cases did you miss? What in your original instructions could be made clearer? You can (and should!) update your schema and your UI as you go so that your final data is as clean as it can be.
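If you’d rather script those schema updates than click through the UI, you can patch a project’s labeling config through the Label Studio REST API. Here’s a minimal sketch using the `requests` library; the URL, API token, project ID, and the sentiment config itself are all placeholders you’d replace with your own:

```python
import requests

LS_URL = "http://localhost:8080"  # your Label Studio instance
API_KEY = "your-api-token"        # your personal token from Account & Settings
PROJECT_ID = 1                    # hypothetical project id

# A revised labeling config: adds a "Neutral" choice you discovered
# you needed while reviewing early annotations.
NEW_CONFIG = """
<View>
  <Text name="text" value="$text"/>
  <Choices name="sentiment" toName="text">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
    <Choice value="Neutral"/>
  </Choices>
</View>
"""

resp = requests.patch(
    f"{LS_URL}/api/projects/{PROJECT_ID}",
    headers={"Authorization": f"Token {API_KEY}"},
    json={"label_config": NEW_CONFIG},
)
resp.raise_for_status()
```

Label Studio validates config changes against existing annotations, so additive changes like a new choice are generally safe, while removing a label that’s already in use may be rejected.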

We have many other tutorials available on our blog, including several examples of how to use Label Studio with Large Language Models.

There’s also some advanced functionality you may want to incorporate into your labeling process.

Did you know that you can connect a model directly to Label Studio using our ML backend to easily pre-annotate your data or use your newly annotated data for retraining? This is a great way to streamline your process.
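To give you a flavor of what that looks like, here’s a minimal sketch of a pre-annotation backend using the `label-studio-ml` package. It assumes a labeling config with a `Choices` control named `sentiment` applied to a `Text` object named `text`, and `my_classifier` is a hypothetical stand-in for whatever model you actually want to call; check the ML backend documentation for the exact interface in the version you’re running:

```python
from label_studio_ml.model import LabelStudioMLBase

def my_classifier(text: str) -> str:
    """Hypothetical stand-in for your real model."""
    return "Positive" if "great" in text.lower() else "Negative"

class SentimentModel(LabelStudioMLBase):
    def predict(self, tasks, **kwargs):
        """Return one prediction per task in Label Studio's prediction format."""
        predictions = []
        for task in tasks:
            label = my_classifier(task["data"]["text"])  # assumes tasks have a "text" field
            predictions.append({
                "result": [{
                    "from_name": "sentiment",  # must match the control name in your config
                    "to_name": "text",         # must match the object name in your config
                    "type": "choices",
                    "value": {"choices": [label]},
                }],
                "score": 1.0,  # confidence displayed in the Label Studio UI
            })
        return predictions
```

Once you serve this class and point your project’s model settings at the backend’s URL, its predictions appear as pre-annotations that your annotators only need to review and correct.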

There are also more tools in the enterprise version of our product that can further improve your labeling experience. Inter-annotator agreement, a set of metrics that measure how consistently your annotators label the same tasks, can help build confidence in your labels and identify places where your schema may need more refinement.
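To make the idea concrete, here’s a small illustration of agreement between two annotators using Cohen’s kappa from scikit-learn. This is just a sketch of the underlying concept, not how Label Studio Enterprise computes its metrics, and the labels below are made up:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same five tasks
annotator_a = ["Positive", "Negative", "Positive", "Neutral", "Positive"]
annotator_b = ["Positive", "Negative", "Neutral", "Neutral", "Positive"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Kappa corrects raw percent agreement for the agreement you’d expect by chance: values near 1 mean your annotators are highly consistent, while low values point to ambiguous tasks or instructions that need tightening.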