How Does Encord Handle Annotation Metadata vs. Comments?

Metadata and comments serve distinct purposes in annotation, and conflating them creates problems. Metadata is structured information about a task or annotation (data type, source, creation time, confidence score, or custom fields) that machines and dashboards consume. Comments are human-to-human communication about the annotation itself.

Using comments to capture information that should be metadata, or encoding reviewer feedback as a field value no annotator will interpret correctly, creates cleanup debt that compounds as projects scale. Encord handles both, but understanding the separation matters before building workflows around them.
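To make the failure mode concrete, here is a minimal Python sketch (the schema and field names are hypothetical, not Encord's) contrasting the same fact recorded as a structured metadata field versus buried in a comment. Filtering on the field is one exact expression; recovering it from free text requires brittle pattern matching that misses paraphrases.

```python
import re

# Two annotations recording the same fact two different ways
# (hypothetical schema for illustration only).
annotations = [
    {"id": "ann-1",
     "metadata": {"occlusion": "heavy"},          # structured: queryable
     "comments": ["Looks fine to me."]},
    {"id": "ann-2",
     "metadata": {},
     "comments": ["btw this one is heavily occluded"]},  # buried in prose
]

# Querying structured metadata is trivial and exact.
occluded = [a["id"] for a in annotations
            if a["metadata"].get("occlusion") == "heavy"]

# Recovering the same fact from comments needs fragile text matching
# that silently misses paraphrases ("can't see the object", etc.).
maybe_occluded = [a["id"] for a in annotations
                 if any(re.search(r"occlu", c, re.I) for c in a["comments"])]

print(occluded)        # ['ann-1']
print(maybe_occluded)  # ['ann-2']
```

The two queries disagree on which annotations are occluded, which is exactly the cleanup debt the paragraph above describes: at scale, no regex catalog keeps up with how humans phrase things in comment threads.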

TL;DR

  • Encord automatically captures file-level and annotation-level metadata; custom fields can be added to encode project-specific structured information.
  • The Comments and Issues system (2025 beta) allows threaded reviewer-annotator communication attached to tasks, frames, or canvas locations.
  • Conflating comments with metadata produces data that machines cannot query, which becomes a liability at scale.
  • Label Studio's data model supports more configurable task-level metadata fields and can populate them programmatically via ML backend integrations.

What metadata means in Encord

In Encord, metadata refers to structured information attached to data assets and annotations. When data is uploaded or synced, file-level metadata, including filename, source, upload date, and cloud storage path, is captured automatically. Custom metadata fields can be added to datasets to encode project-specific information such as capture conditions, annotator ID, confidence tier, or any other structured attribute the downstream pipeline needs.

Annotation-level metadata includes the label structure itself: the ontology, attribute values, and nested classification responses, as well as automatically generated fields like creation timestamp and annotator ID.

This metadata is exportable in JSON format and queryable through the SDK. Encord Active's data curation features, including embedding search, outlier detection, and quality analysis, operate on metadata fields attached to assets and annotations.
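To illustrate what "exportable and queryable" buys a downstream pipeline, here is a hedged sketch of filtering an exported label payload by custom metadata fields. The JSON shape and field names below are illustrative stand-ins, not Encord's exact export schema; inspect a real export before writing production pipeline code.

```python
import json

# Illustrative export payload; the real Encord export schema differs in
# detail, so treat these keys and field names as stand-ins.
export_json = """
[
  {"data_title": "frame_001.jpg",
   "client_metadata": {"capture_condition": "night", "confidence_tier": "high"}},
  {"data_title": "frame_002.jpg",
   "client_metadata": {"capture_condition": "day", "confidence_tier": "low"}}
]
"""

rows = json.loads(export_json)

# Downstream filter: keep only high-confidence night captures.
selected = [r["data_title"] for r in rows
            if r["client_metadata"].get("capture_condition") == "night"
            and r["client_metadata"].get("confidence_tier") == "high"]

print(selected)  # ['frame_001.jpg']
```

Because the fields are structured, the same filter works identically in a dashboard, a curation query, or a training-data selection script; none of that is possible if "night capture" lives in a comment thread.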

Comments and Issues: Encord's collaboration layer

Encord added a Comments and Issues system in beta in 2025. This is the human communication layer distinct from structured metadata.

Issues can be attached to an entire task, a specific frame in a video, or a precise location on the annotation canvas. They support threaded comment conversations, making back-and-forth between annotator and reviewer possible within the task rather than in external channels like Slack or email.

When a reviewer rejects a task, rejection issues provide structured feedback. Annotators receive specific reasons rather than a binary fail status. A notification badge tracks unresolved issues, and issues can be filtered by resolved or unresolved status.

This closes a meaningful gap that previously required taking communication outside the platform entirely.

When the distinction matters operationally

The metadata vs. comment distinction becomes important in a few scenarios: when the downstream pipeline needs to filter or query annotations by criteria that annotators or reviewers are responsible for capturing; when reviewer feedback needs to be structured enough to drive systematic improvements rather than ad-hoc corrections; and when annotation provenance must be tracked for regulated use cases where audit trails need to capture both decisions and reasoning.

Limitations of the current system

The Comments and Issues system is still in beta as of early 2026. Beta features in enterprise annotation platforms can carry stability risks and incomplete documentation. Teams building production workflows around this feature should validate its behavior before committing.

Custom metadata field flexibility has limits. The depth of schema customization available through Encord's metadata system is less than what teams can achieve on platforms with more open data models.

How Label Studio handles metadata and collaboration

Label Studio's data model allows highly configurable task-level metadata fields. Teams can define custom structured fields that map directly to downstream pipeline requirements without adapting to a vendor-imposed schema.

Annotation-level metadata, including per-annotator confidence scores, reviewer decisions, and custom attribute values, is exportable in multiple formats. ML backend integration means metadata fields can be populated programmatically by connected models as part of pre-annotation workflows.
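A sketch of what task-level metadata looks like in practice: a Label Studio task is a JSON object whose `data` dictionary carries the media reference alongside arbitrary custom fields. The specific field names here are examples, and the `predictions` entry sketches how a connected ML backend can populate pre-annotations programmatically; consult the Label Studio task format documentation for the full schema.

```python
import json

# A Label Studio import task: the "data" dict holds the media reference plus
# any custom structured fields the pipeline needs (field names are examples).
task = {
    "data": {
        "image": "https://example.com/frames/frame_001.jpg",
        "capture_condition": "night",   # custom task-level metadata
        "annotator_pool": "senior",
    },
    # Pre-annotations from a connected model land here programmatically.
    "predictions": [{
        "model_version": "detector-v3",
        "result": [{
            "from_name": "label",
            "to_name": "image",
            "type": "rectanglelabels",
            "value": {"x": 10, "y": 20, "width": 30, "height": 40,
                      "rotation": 0, "rectanglelabels": ["car"]},
        }],
    }],
}

# Tasks are imported as a JSON list (via the UI, the API, or the SDK).
payload = json.dumps([task], indent=2)
```

Because custom fields live directly in `data`, they are available for filtering in the data manager and survive into every export format without a separate metadata system.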

For collaboration, Label Studio Enterprise supports reviewer feedback workflows with configurable review interfaces, not just pass-fail, but custom review schemas that match the complexity of the annotation task.

You can check out our in-depth comparison of Label Studio and Encord here, or talk to an expert at HumanSignal about metadata and workflow configuration.

Frequently Asked Questions

What types of metadata does Encord capture automatically?

Encord automatically captures file-level metadata including filename, source, upload date, and cloud storage path when data is uploaded or synced. Annotation-level metadata includes ontology structure, attribute values, creation timestamps, and annotator ID.

Can custom metadata fields be added in Encord?

Yes. Custom metadata fields can be added to datasets to capture project-specific structured information. These fields are exportable and queryable through the SDK, which makes them useful for downstream filtering and pipeline logic.

How does Encord's Comments and Issues system work?

Issues can be attached to a task, a specific video frame, or a precise canvas location. They support threaded conversations between annotators and reviewers. When a reviewer rejects a task, rejection issues provide structured feedback with specific reasons. A notification badge tracks unresolved issues.

Is the Comments and Issues feature in Encord production-ready?

The feature is in beta as of early 2026. Teams building production workflows that depend on this feature should validate its stability before committing. Beta features may have incomplete documentation and can change as they mature.

What is the risk of conflating metadata and comments in annotation workflows?

Using comments to capture information that should be structured metadata creates data that is human-readable but not machine-queryable. This becomes a problem at scale when pipelines need to filter or report on annotation attributes that were recorded informally in comment threads.

How does Label Studio's metadata system compare to Encord's?

Label Studio's data model supports highly configurable task-level metadata fields that teams define directly, rather than adapting to a vendor-imposed schema. ML backend integration also allows metadata fields to be populated programmatically during pre-annotation workflows.

Related Content