What is the Datasaurus Dozen and its relevance in data science


Summary statistics are useful because they condense a large number of observations into a few figures that are simple to understand and share. This property explains why averages and correlations are so widely used, from introductory statistics courses to newspaper stories to scholarly papers. The caveat is that they are frequently insufficient to describe the entire picture, as exemplified by the “Datasaurus Dozen,” a collection of datasets that share identical summary statistics yet look nothing alike.

There’s a reason data scientists spend so much time exploring data with visualisations. It’s risky to rely solely on summaries like means, variances, and correlations, because vastly different datasets can produce identical values. This idea has been demonstrated in statistics lectures for decades with Anscombe’s Quartet: four scatterplots with the same means, variances, and correlation, yet qualitatively distinct shapes. (You can verify this in R by loading the data with data(anscombe).) What you might not realise is that bivariate data with a given mean, standard deviation, and correlation can be generated in any shape you want – even a dinosaur.
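The claim about Anscombe’s Quartet is easy to check directly. The sketch below reproduces the quartet’s published values in plain Python (the pearson helper is written here for illustration; in practice you would use a library function):

```python
import statistics

# Anscombe's Quartet: four datasets with near-identical summary
# statistics but very different shapes (values from the published dataset).
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

quartet = [(x123, y1), (x123, y2), (x123, y3), (x4, y4)]
for xs, ys in quartet:
    print(round(statistics.mean(xs), 2),  # 9.0 for every set
          round(statistics.mean(ys), 2),  # ~7.5 for every set
          round(pearson(xs, ys), 2))      # ~0.82 for every set
```

Plotting the four pairs, rather than summarising them, is what reveals the line, the curve, the outlier, and the vertical cluster hiding behind those identical numbers.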

What is the Datasaurus Dozen?

Alberto Cairo created the original Datasaurus as a toy example to highlight the necessity of plotting data. The dataset has only two variables (x and y), and its summary statistics aren’t particularly noteworthy.

Justin Matejka and George Fitzmaurice, in their research paper “Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing”, analyse 13 datasets (the Datasaurus and 12 others) that all share the same summary statistics (x/y mean, x/y standard deviation, and Pearson’s correlation) to two decimal places, yet look vastly different. The paper is significant because it explains the method the authors used to produce the Datasaurus Dozen, and how others like it can be generated.

Methodology

The key idea underlying the authors’ method is that, while it is difficult to create a dataset with specific statistical properties from scratch, it is extremely simple to take an existing dataset, tweak it a little, and keep those statistical properties. The researchers do this by picking a random point, shifting it slightly, and then confirming that the set’s statistical properties haven’t wandered outside acceptable bounds (in this case, ensuring that the means, standard deviations, and correlations remain the same to two decimal places).
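The perturb-and-check step can be sketched as follows (a minimal illustration, not the paper’s implementation; the names stats2 and perturb are invented here, and the authors’ tool additionally biases the moves toward a target shape):

```python
import random
import statistics

def stats2(xs, ys):
    """Summary statistics (means, stdevs, correlation) rounded to 2 d.p."""
    n = len(xs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    corr = cov / (sx * sy)
    return tuple(round(v, 2) for v in (mx, my, sx, sy, corr))

def perturb(xs, ys, step=0.1):
    """Nudge one random point; keep the move only if the rounded
    summary statistics are unchanged, otherwise discard it."""
    target = stats2(xs, ys)
    i = random.randrange(len(xs))
    nx = xs[i] + random.gauss(0, step)
    ny = ys[i] + random.gauss(0, step)
    trial_x = xs[:i] + [nx] + xs[i + 1:]
    trial_y = ys[:i] + [ny] + ys[i + 1:]
    if stats2(trial_x, trial_y) == target:
        return trial_x, trial_y
    return xs, ys  # reject the move; statistics would have drifted
```

Because a move is only ever accepted when the rounded statistics match, any number of iterations leaves the reported summary values exactly where they started.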

A completely different dataset emerges when this tiny “perturbation” step is repeated enough times. However, to be useful tools for emphasising the necessity of visualising your data, these datasets must also be visually distinct from one another. This is accomplished by biasing the random point movements toward a specific target shape.
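The biasing works by measuring how close each point is to a target shape (a set of line segments) and preferring moves that reduce that distance. A hedged sketch of the two ingredients, a point-to-segment distance and a simulated-annealing acceptance rule (the paper’s actual temperature schedule and parameters differ):

```python
import math
import random

def dist_to_segment(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy or 1e-12  # guard against zero-length segments
    # Project the point onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def accept_move(old_dist, new_dist, temperature):
    """Simulated-annealing rule: always accept a move toward the target
    shape; accept a worse move with a probability set by the temperature."""
    if new_dist < old_dist:
        return True
    return random.random() < temperature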

How was the Datasaurus Dozen generated?

Researchers designed 12 target shapes to direct the points toward, creating the Datasaurus Dozen. Each of the resulting charts, and indeed all of the intermediate frames, has the same summary statistics as the original Datasaurus. Of course, the strategy isn’t restricted to these shapes; any collection of line segments can serve as a target. Iterating through the datasets consecutively, the researchers can watch the points morph from one shape to another while preserving the same summary statistics to two decimal places throughout.
