What is the Datasaurus Dozen, and what is its relevance in data science?


Summary statistics are useful because they condense a huge number of observations into a single figure that is simple to understand and share. This property explains why averages and correlations are so widely used, from introductory statistics courses to newspaper stories to scholarly papers. The caveat is that they are frequently insufficient to describe the entire picture, as exemplified by the “Datasaurus Dozen,” a collection of datasets that share identical summary statistics yet look nothing alike when plotted.

There’s a reason data scientists spend so much time using visualisations to explore data. It’s risky to rely solely on data summaries like means, variances, and correlations, because vastly different datasets can produce identical summaries. This notion has been demonstrated in statistics lectures for decades with Anscombe’s Quartet: four scatterplots that share the same means, variances, and correlation, yet are qualitatively distinct. (You can verify this in R by loading the dataset with data(anscombe).) What you might not realise is that bivariate data with a given mean, median, and correlation can be generated in any shape you want – even a dinosaur.
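You can check Anscombe’s numbers yourself without R. The snippet below hard-codes the quartet’s published values and computes the shared statistics in plain Python; the `pearson` helper is defined here purely for illustration:

```python
# Anscombe's Quartet: four (x, y) datasets with near-identical summary statistics.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]   # x values for datasets I-III
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]          # x values for dataset IV
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]
xs = [x123, x123, x123, x4]

def mean(v):
    return sum(v) / len(v)

def pearson(x, y):
    # Pearson's correlation coefficient, computed from first principles.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

for x, y in zip(xs, ys):
    print(round(mean(x), 2), round(mean(y), 2), round(pearson(x, y), 2))
# prints "9.0 7.5 0.82" for every dataset, despite the four
# scatterplots looking completely different when drawn.
```

Plotting the four pairs is the only way to see that one is linear, one is curved, one has a single outlier on a line, and one is a vertical column with an outlier.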

What is the Datasaurus Dozen?

Alberto Cairo created the original Datasaurus as a toy example to highlight the necessity of plotting data. The dataset has only two variables (x and y), and its summary statistics aren’t particularly noteworthy.


Justin Matejka and George Fitzmaurice, in their research paper “Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing”, analyse 13 datasets (the Datasaurus and 12 others) that all share the same summary statistics (x/y mean, x/y standard deviation, and Pearson’s correlation) to two decimal places, yet look vastly different. The paper is significant because it explains the method the researchers used to produce these datasets, and shows how to generate others like them.






Methodology 

The key idea underlying the authors’ method is that, while it’s difficult to create a dataset with specific statistical properties from scratch, it’s extremely simple to take an existing dataset, tweak it a little, and keep those statistical properties. The researchers do this by picking a random point, shifting it slightly, and then confirming that the set’s statistical properties haven’t wandered outside acceptable bounds (in this case, that the means, standard deviations, and correlation remain the same to two decimal places).
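The perturb-and-check loop is straightforward to sketch. The Python below is a minimal illustration of the idea, not the authors’ actual code: `stats` and `perturb` are hypothetical names, and a real run would repeat the step many thousands of times.

```python
import random

def stats(points):
    # Summary statistics rounded to two decimal places:
    # x/y means, x/y (sample) standard deviations, Pearson's correlation.
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sx = (sum((x - mx) ** 2 for x in xs) / (n - 1)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / (n - 1)) ** 0.5
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return tuple(round(v, 2) for v in (mx, my, sx, sy, cov / (sx * sy)))

def perturb(points, scale=0.1):
    """Shift one random point slightly; keep the move only if the rounded
    summary statistics are unchanged, otherwise reject it."""
    target = stats(points)
    new = [list(p) for p in points]
    i = random.randrange(len(new))
    new[i][0] += random.gauss(0, scale)
    new[i][1] += random.gauss(0, scale)
    if stats(new) == target:
        return [tuple(p) for p in new]
    return points  # statistics drifted: reject the move
```

Because every accepted move preserves the rounded statistics, any number of iterations yields a dataset with the same summaries as the starting one, no matter how far the points have wandered.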

A completely distinct dataset emerges when this tiny “perturbation” process is repeated enough times. However, as noted above, these datasets must be visually distinct to be useful tools for emphasising the necessity of plotting your data. This is accomplished by biasing the random point movements toward a specific target shape.
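The biasing step can be sketched as a simulated-annealing move: prefer perturbations that bring a point closer to the target shape, but occasionally accept a worse move so the points don’t get stuck. This is an illustrative approximation, not the paper’s implementation — `dist_to_shape` and `biased_move` are hypothetical names, the target is reduced here to a set of vertices rather than line segments, and a full run would also re-check the rounded summary statistics before accepting each move, as described above.

```python
import math
import random

def dist_to_shape(p, shape):
    # Distance from point p to the nearest vertex of the target shape.
    # (The paper measures distance to line segments; vertices keep this short.)
    return min(math.dist(p, q) for q in shape)

def biased_move(points, shape, temperature):
    """One annealing step: nudge a random point and keep the move if it lands
    closer to the target shape, or occasionally even if it doesn't, with a
    probability that shrinks as the temperature cools toward zero."""
    i = random.randrange(len(points))
    old = points[i]
    new = (old[0] + random.gauss(0, 0.1), old[1] + random.gauss(0, 0.1))
    closer = dist_to_shape(new, shape) < dist_to_shape(old, shape)
    if closer or random.random() < temperature:
        points = points.copy()
        points[i] = new
    return points
```

Run with a temperature schedule that decays toward zero, the points drift into the target outline while the occasional uphill moves early on keep the search from freezing in a bad configuration.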

Source: https://www.autodesk.com/research/publications/same-stats-different-graphs 

How was the Datasaurus Dozen generated?

The researchers designed 12 target shapes to direct the dots toward, creating the Datasaurus Dozen. Each of the resulting charts, and indeed every intermediate frame, has the same summary statistics as the original Datasaurus. The strategy isn’t restricted to these particular shapes, of course: any collection of line segments can serve as a target. The researchers can thus watch the data points morph from one shape to another as the process iterates through the datasets consecutively, preserving the same summary statistics to two decimal places throughout.

