Open Source Data Labeling Platform
The most flexible data labeling platform to fine-tune LLMs, prepare training data or validate AI models.
# Install the package
# into a Python virtual environment
pip install -U label-studio
# Launch it!
label-studio
# Install the cask
brew install humansignal/tap/label-studio
# Launch it!
label-studio
# clone repo
git clone https://github.com/HumanSignal/label-studio.git
# install dependencies
cd label-studio
pip install poetry
poetry install
# apply db migrations
poetry run python label_studio/manage.py migrate
# collect static files
poetry run python label_studio/manage.py collectstatic
# launch
poetry run python label_studio/manage.py runserver
# Run latest Docker version
docker run -it -p 8080:8080 -v `pwd`/mydata:/label-studio/data heartexlabs/label-studio:latest
# Now visit http://localhost:8080/
Label every data type.
GenAI
LLM Fine-Tuning
Label data for supervised fine-tuning, or refine models using RLHF
LLM Evaluations
Response moderation, grading, and side-by-side comparison
RAG Evaluation
Use Ragas scores and human feedback
Quick Start
Computer Vision
Image Classification
Put images into categories
Object Detection
Detect objects in images; bounding boxes, polygons, circles, and keypoints are supported
Semantic Segmentation
Partition an image into multiple segments. Use ML models to pre-label and speed up the process
Quick Start
Audio & Speech Applications
Classification
Put audio into categories
Speaker Diarization
Partition an input audio stream into homogeneous segments according to speaker identity
Emotion Recognition
Tag and identify emotions in audio
Audio Transcription
Transcribe spoken audio into written text
Quick Start
NLP, Documents, Chatbots, Transcripts
Classification
Classify a document into one or more categories. Use taxonomies of up to 10,000 classes
Named Entity
Extract relevant pieces of information and assign them to predefined categories
Question Answering
Answer questions based on context
Sentiment Analysis
Determine whether a document is positive, negative or neutral
Quick Start
Robots, Sensors, IoT Devices
Classification
Put time series into categories
Segmentation
Identify regions relevant to the activity type you're building your ML algorithm for
Event Recognition
Label single events on plots of time series data
Quick Start
Multi-Domain Applications
Dialogue Processing
Call center recordings can be simultaneously transcribed and processed as text
Optical Character Recognition
Display an image and its text side by side
Time Series with Reference
Use video or audio streams to segment time series data more easily
Quick Start
Video
Classification
Put videos into categories
Object Tracking
Label and track multiple objects frame-by-frame
Assisted Labeling
Add keyframes and automatically interpolate bounding boxes between keyframes
Quick Start
Flexible and configurable
Configurable layouts and templates adapt to your dataset and workflow.
Integrate with your ML/AI pipeline
Webhooks, the Python SDK, and the API allow you to authenticate, create projects, import tasks, manage model predictions, and more.
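For illustration, here is a minimal sketch using the label-studio-sdk Python package, assuming its Client interface; the URL, API key, project title, labeling config, and task URLs are placeholders:

# pip install label-studio-sdk
from label_studio_sdk import Client

# Connect to a running Label Studio instance
ls = Client(url="http://localhost:8080", api_key="YOUR_API_KEY")
ls.check_connection()

# Create a project with a simple image classification template
project = ls.start_project(
    title="Image Classification Demo",
    label_config="""
<View>
  <Image name="image" value="$image"/>
  <Choices name="label" toName="image">
    <Choice value="Cat"/>
    <Choice value="Dog"/>
  </Choices>
</View>
""",
)

# Import tasks that point at the data to label
project.import_tasks([
    {"image": "https://example.com/cat.jpg"},
    {"image": "https://example.com/dog.jpg"},
])

The label_config XML passed here is the same kind of configurable template described above.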
ML-assisted labeling
Save time by using predictions to assist your labeling process with ML backend integration.
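As a hedged example, a custom prediction backend can be sketched with the label-studio-ml package, assuming its LabelStudioMLBase interface; the label names, result structure, and score below are illustrative:

from label_studio_ml.model import LabelStudioMLBase

class MyModel(LabelStudioMLBase):
    # Return one pre-annotation per task so annotators only review and correct it
    def predict(self, tasks, **kwargs):
        predictions = []
        for task in tasks:
            # Replace with a real model call; this sketch always suggests "Cat"
            predictions.append({
                "result": [{
                    "from_name": "label",   # must match names in the labeling config
                    "to_name": "image",
                    "type": "choices",
                    "value": {"choices": ["Cat"]},
                }],
                "score": 0.50,
            })
        return predictions

Once the backend runs as its own service, it is connected to a project from the project's Machine Learning settings so its predictions appear as pre-labels.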
Connect your cloud storage
Connect to cloud object storage, such as S3 and GCP, and label data there directly.
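A sketch of attaching an S3 bucket as import storage through the Python SDK, assuming the Project class exposes connect_s3_import_storage; the project id, bucket, prefix, and credentials are placeholders:

from label_studio_sdk import Client

ls = Client(url="http://localhost:8080", api_key="YOUR_API_KEY")
project = ls.get_project(1)  # id of an existing project

# Register the bucket as source storage so its objects can be synced in as tasks
project.connect_s3_import_storage(
    bucket="my-labeling-bucket",
    prefix="images/",
    use_blob_urls=True,  # treat each object as one task
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_SECRET",
    region_name="us-east-1",
)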
Explore & understand your data
Prepare and manage your dataset in our Data Manager using advanced filters.
Multiple projects and users
Support multiple projects, use cases and data types in one platform.
From the Blog
View All Articles
Who Watches the Watchdogs? Evaluating LLM-as-a-Judge
LLM-as-a-Judge promises speed, scale, and lower costs—but at what price? This post explores where automated evaluation works, where it fails, and how to make it trustworthy. From subtle biases to hybrid evaluation strategies, we break down how to evaluate the evaluator itself.
Micaela Kaplan
May 21, 2025
How to Use Krippendorff’s Alpha to Measure Annotation Agreement
Krippendorff’s alpha is one of the most flexible ways to measure inter-annotator agreement. Unlike other metrics, it works with incomplete data and supports various label types. This guide walks through a real example—showing you how to calculate observed and expected agreement step by step, and what to do when scores are low.
Micaela Kaplan
May 19, 2025
Dark Mode is Here in Label Studio 1.18
The 1.18.0 release includes the much anticipated Dark Mode setting, the ability to export polygons in COCO format, and a bunch of usability improvements for annotators.
Label Studio Team
May 14, 2025
Trusted by companies large and small
Global Community
Join the largest community of Data Scientists working on enhancing their models.