Checklist Of Things That Might Go Wrong While Labeling Data And How To Fix Them

Labelling data for NLP training is no simple task; it demands a wide range of skills, expertise, and effort. Training an algorithm requires high-quality annotated data, which helps the model recognize the entities it is meant to identify.

However, while labelling various forms of data, businesses run into a variety of issues that make labelling jobs more time-consuming and less effective. Understanding these issues is the first step towards making data labelling more efficient and productive.

In this blog, we’ll discuss four data-labelling issues that arise with entity annotations for NLP, along with some solutions offered by Cogito Tech LLC.

1. Nested Annotations

Nested Annotations” is a crucial topic of contention. The phrase “President of the United States Joe Biden” could, for example, be labelled in a variety of ways. The reason for this issue is simple: language is hierarchical, not linear, and hence linear annotations like highlighted spans don’t always match correctly.

A straightforward solution is to allow the annotator to construct nested annotations, as in Brat, or to annotate tree structures. While these methods are user-friendly, they require downstream models that can handle these complicated, non-linear structures in both their input and output.

Outside of the linguistic community, we haven’t seen widespread adoption of structured annotations among customers, primarily owing to the increased model and engineering complexity that working with them requires. It is common for annotation initiatives to instruct their teams to annotate at the highest resolution feasible and then apply post-processing to recover the inherent structure at a later stage.
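As an illustration of that post-processing step, nested spans can be stored as a small tree and flattened later. This is a minimal sketch with hypothetical labels and character offsets, not the format of Brat or any particular tool:

```python
text = "President of the United States Joe Biden"

# Each annotation records a label, character offsets, and optional children,
# so the hierarchy ("the United States" inside the title) is preserved.
annotation = {
    "label": "TITLE",
    "start": 0,
    "end": 30,  # "President of the United States"
    "children": [
        {"label": "GPE", "start": 13, "end": 30, "children": []},  # "the United States"
    ],
}
person = {"label": "PERSON", "start": 31, "end": 40, "children": []}  # "Joe Biden"

def flatten(ann):
    """Post-process a nested annotation into flat (label, start, end) spans."""
    spans = [(ann["label"], ann["start"], ann["end"])]
    for child in ann.get("children", []):
        spans.extend(flatten(child))
    return spans
```

Annotating at the highest resolution and flattening afterwards lets a simple linear model consume the data while the tree is still recoverable.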

2. Adding New Entity Types in the Middle of a Project

In the early stages of an annotation project, you’ll frequently discover that you require entity types that you hadn’t anticipated. For example, a pizza chatbot’s collection of tags might begin with “Size,” “Topping,” and “Drink” before someone learns that Garlic Bread and Chicken Wings also require a “Side Dish” tag.

Adding these tags and continuing to work on documents that haven’t been labelled yet puts the project in jeopardy. All documents annotated before the new tags were applied will be missing the new tags. As a result, your test set will be incorrect for those tags, and your training data will be missing the new tags, resulting in a model that will not capture them.

The pedantic solution is to start anew and ensure that every tag is captured. Starting over every time you need a new tag, however, is a tremendously inefficient use of resources. A suitable middle ground is to start anew but use the previous annotations as “pre-annotations” that are displayed to the annotator.
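That middle ground can be sketched as follows; the task schema and the pizza-chatbot tag names are illustrative assumptions, not any specific tool’s API:

```python
OLD_TAGS = {"Size", "Topping", "Drink"}
NEW_TAGS = OLD_TAGS | {"Side Dish"}  # tag added mid-project

def requeue_with_preannotations(labelled_docs):
    """Turn already-finished documents back into tasks whose old spans
    become pre-annotations the annotator only reviews and extends,
    rather than re-labelling everything from scratch."""
    tasks = []
    for doc in labelled_docs:
        tasks.append({
            "text": doc["text"],
            "preannotations": doc["annotations"],            # shown, not final
            "needs_review_for": sorted(NEW_TAGS - OLD_TAGS),  # the missing tags
        })
    return tasks
```

Annotators then only hunt for the new “Side Dish” spans, keeping the old work while ensuring the test set is complete for every tag.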

3. Tags in Long Lists

Forcing your annotators to work through extensive lists of tags is a sure way to increase project expenses and reduce data quality. ImageNet is famous for having 20,000 different categories, such as Strawberry, Hot Air Balloon, and Dog.

Increasing the number of choices an annotator must make slows them down and leads to poorer data quality. The order of the tags in the annotation UI will also affect the distribution of annotations, due to availability bias: concepts sitting right in front of us are far easier to notice and select.
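Two common mitigations, sketched here with made-up tag names, are grouping a long flat list into a coarse-to-fine hierarchy so the annotator never faces one huge menu, and randomizing display order per task to blunt availability bias:

```python
import random

# Hypothetical two-level hierarchy: the annotator first picks a coarse
# group, then chooses from the short fine-grained list inside it.
TAG_HIERARCHY = {
    "Food": ["Strawberry", "Pizza", "Garlic Bread"],
    "Vehicle": ["Hot Air Balloon", "Bicycle"],
    "Animal": ["Dog", "Cat"],
}

def fine_tags(coarse_group):
    """Return the short list shown only after the coarse choice is made."""
    return TAG_HIERARCHY.get(coarse_group, [])

def randomized_tag_order(tags, seed=None):
    """Shuffle tag display order per task so UI position doesn't skew
    the annotation distribution toward whatever is listed first."""
    rng = random.Random(seed)
    tags = list(tags)
    rng.shuffle(tags)
    return tags
```

Even a shallow hierarchy cuts each decision from thousands of options to a handful, which is where the speed and quality gains come from.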

4. Leading and Trailing Whitespace

The inconsistent handling of leading and trailing whitespace and punctuation is perhaps the most prevalent source of annotator disagreement.

When assessing annotator agreement or settling on a golden source of annotation, these conflicts result in lower agreement scores and uncertainty in the golden set. These mistakes are especially aggravating because the annotation is essentially right, and a person would not notice or care about the difference.

In this case, the answer is straightforward: your annotation tool should clearly signal to annotators when they have captured leading or trailing whitespace, allowing them to decide whether this is correct according to the guidelines you’ve established.
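A tool-side check along these lines can catch the problem automatically. This is a minimal sketch, assuming character-offset spans:

```python
def normalize_span(text, start, end):
    """Detect leading/trailing whitespace in a labelled span and return
    trimmed offsets plus a flag the annotation UI can surface for review."""
    raw = text[start:end]
    if raw.strip() == raw:
        return start, end, False          # span is already clean
    new_start = start + (len(raw) - len(raw.lstrip()))
    new_end = end - (len(raw) - len(raw.rstrip()))
    return new_start, new_end, True       # stray whitespace was captured
```

Running this on every submitted span either auto-trims or prompts the annotator, so the golden set stays consistent without manual nitpicking.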

Conclusion

Data labelling must be done quickly and accurately at scale, without either goal jeopardizing the other. Anticipating and adapting to typical difficulties is the first step in developing a high-quality annotation workflow. This post discussed four of the most prevalent problems in text-annotation projects, along with how annotation companies like Cogito Tech LLC or Anolytics may help fix them.

Roger Max
Hi, my name is Roger Max. I am a technology writer who specializes in understanding and processing training data requirements for businesses in a variety of industries and sectors that use Machine Learning, AI, or NLP. At Cogito, I manage highly motivated teams of data annotators, labellers, and content moderators processing numerous data sets by day, and by night I write about my experiences and offer ideas and solutions.
