
Object Detection With Ultralytics YOLOv8


This is part of our series highlighting Label Studio’s integration with Ultralytics YOLO.

One of the most popular tasks for YOLO models is bounding box detection, better known as "object detection." Label Studio supports this with the RectangleLabels control tag, and YOLO OBB (oriented bounding box) models are supported as well. For a quick overview of how this integration works in Label Studio, check out this video:

To get started with this YOLO task, first you’ll need to install the ML Backend and connect your YOLO model. Here’s a quick start guide to help you if you need it.
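One way to run the backend locally is with Docker. The repository URL and directory layout below are assumptions based on the label-studio-ml-backend project; consult the quick start guide for the current paths and required environment variables:

```shell
# Assumed layout of the label-studio-ml-backend repository;
# verify paths against the quick start guide.
git clone https://github.com/HumanSignal/label-studio-ml-backend.git
cd label-studio-ml-backend/label_studio_ml/examples/yolo

# Set LABEL_STUDIO_URL and LABEL_STUDIO_API_KEY in docker-compose.yml
# so the backend can reach your Label Studio instance, then start it:
docker-compose up
```

Once the container is running, connect it to your project under Settings > Model in Label Studio.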

Once your model is installed, you’ll want to create a new Project in Label Studio with the following labeling config:

<View>
  <Image name="image" value="$image"/>
  <RectangleLabels name="label" toName="image" model_score_threshold="0.25" opacity="0.1">
    <Label value="Person" background="red"/>
    <Label value="Car" background="blue"/>
  </RectangleLabels>
</View>

You can use the following parameters in the labeling config to customize your labeling experience:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model_score_threshold | float | 0.5 | Sets the minimum confidence threshold for detections. Objects detected with confidence below this threshold are disregarded. Raising this value can help reduce false positives. |
| model_path | string | None | Path to a custom YOLO model. See the section "Your own custom YOLO models". |
| model_obb | bool | False | Enables Oriented Bounding Boxes (OBB) mode, typically used with *-obb.pt YOLO models. |

For example:

<RectangleLabels name="label" toName="image" model_score_threshold="0.25" model_path="my_model.pt">
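To see how these parameters fit together: the backend filters detections by score and converts pixel-space boxes into Label Studio's percent-based rectanglelabels result format. Here is a minimal sketch of that conversion; the function name and the plain-tuple detection format are illustrative, not the backend's actual internals:

```python
def to_ls_predictions(detections, img_w, img_h, score_threshold=0.5):
    """Convert (x1, y1, x2, y2, score, label) pixel-space detections
    into Label Studio rectanglelabels results (percent coordinates)."""
    results = []
    for x1, y1, x2, y2, score, label in detections:
        if score < score_threshold:  # mirrors model_score_threshold
            continue
        results.append({
            "from_name": "label",   # matches the RectangleLabels name
            "to_name": "image",     # matches the Image name
            "type": "rectanglelabels",
            "score": score,
            "value": {
                "x": 100 * x1 / img_w,   # top-left corner, in percent
                "y": 100 * y1 / img_h,
                "width": 100 * (x2 - x1) / img_w,
                "height": 100 * (y2 - y1) / img_h,
                "rotation": 0,
                "rectanglelabels": [label],
            },
        })
    return results

# One confident "Person" box and one "Car" box below the threshold:
dets = [(64, 32, 192, 224, 0.91, "Person"), (10, 10, 50, 50, 0.12, "Car")]
preds = to_ls_predictions(dets, img_w=640, img_h=480, score_threshold=0.25)
```

With a threshold of 0.25, the low-confidence "Car" detection is dropped and only the "Person" box is returned as a prediction.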

And that’s it! You should now be able to open unlabeled images in the Data Manager, and your YOLO model will automatically label them for you to accept, reject, or modify, depending on your use case. Stay tuned as we continue to go through the other tasks enabled by YOLOv8 in Label Studio.
