
Meta Introduces FACET To Evaluate Computer Vision Models 

FACET’s dataset is made up of 32,000 images containing 50,000 people



Meta on Thursday announced that DINOv2, its computer vision model trained through self-supervised learning to produce universal features, is now available under the Apache 2.0 license.

The company added that it is also releasing a collection of DINOv2-based dense prediction models for semantic image segmentation and monocular depth estimation, giving developers and researchers greater flexibility to explore its capabilities on downstream tasks.
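For developers who want to try the newly open-sourced backbone, the pretrained checkpoints are exposed through PyTorch Hub. The snippet below is a minimal sketch, assuming a local image file ("photo.jpg" is a placeholder) and standard ImageNet preprocessing; it loads the ViT-S/14 variant and extracts a global image embedding.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Load the DINOv2 ViT-S/14 backbone from the official repo via PyTorch Hub
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Standard ImageNet normalization; 224 px is divisible by the 14 px patch size
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    embedding = model(img)  # (1, 384) global feature vector for ViT-S/14

print(embedding.shape)
```

These frozen features can then be paired with lightweight heads, which is how the dense prediction models for segmentation and depth estimation build on the same backbone.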

Alongside DINOv2’s announcement, Meta introduced FACET (FAirness in Computer Vision EvaluaTion), a new comprehensive benchmark for evaluating the fairness of computer vision models across classification, detection, instance segmentation, and visual grounding tasks. 

https://twitter.com/MetaAI/status/1697233910135148562

Meta said this move is in response to the challenging nature of benchmarking fairness in computer vision, which has often been hampered by potential mislabeling and demographic biases.

In its blog post, Meta said the FACET dataset is made up of 32,000 images containing 50,000 people, labeled by expert human annotators for demographic attributes. Additionally, FACET contains person, hair, and clothing labels for 69,000 masks from SA-1B.

Meta evaluated DINOv2 using FACET, which revealed nuances in its performance, particularly in gender-biased classes.

Meta said it hopes that FACET can become a standard fairness evaluation benchmark for computer vision models and help researchers evaluate fairness and robustness across a more inclusive set of demographic attributes. To that end, Meta has released the FACET dataset and a dataset explorer.

To make FACET effective, Meta said it hired expert reviewers to manually annotate person-related demographic attributes such as perceived gender presentation and perceived age group, as well as related visual features like perceived skin tone, hair type, and accessories.

Additionally, the dataset includes labels for person-related classes like “basketball player” and “doctor,” as well as attributes related to clothing and accessories.
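To see how such annotations support fairness benchmarking, consider a disaggregated metric: for a single person class, compare how often a model recovers that class across annotated demographic groups. The sketch below is illustrative only; the record layout and field names (e.g. `perceived_age_group`) are hypothetical stand-ins, not the actual FACET schema.

```python
from collections import defaultdict

def recall_by_group(records, target_class, attribute="perceived_age_group"):
    """Per-group recall for one person class (e.g. 'doctor').

    `records` is a hypothetical list of dicts, one per annotated person:
        {'label': 'doctor', 'predicted': 'doctor', 'perceived_age_group': 'middle', ...}
    Field names are illustrative, not the real FACET schema.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r["label"] != target_class:
            continue  # only consider people annotated with the target class
        group = r[attribute]
        totals[group] += 1
        if r["predicted"] == target_class:
            hits[group] += 1
    # Recall per group; large gaps between groups signal a potential disparity
    return {g: hits[g] / totals[g] for g in totals}
```

A large spread in per-group recall for a class like "doctor" is exactly the kind of disparity such a benchmark is meant to surface.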

“By releasing FACET, our goal is to enable researchers and practitioners to perform similar benchmarking to better understand the disparities present in their own models and monitor the impact of mitigations put in place to address fairness concerns,” Meta wrote in the blog post.


Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.