
Zillow’s Great Data Science Disaster

In November, Zillow stopped buying new houses, decided to sell off its entire inventory, and laid off 25 per cent of its employees.

Initially starting its journey as a media company, making money by selling ads on its websites, American company Zillow Group Inc., or simply Zillow, later turned into an online real estate company. 

Founded in 2006 by ex-Microsoft executives and Expedia founders Rich Barton and Lloyd Frink, Hotwire.com co-founder Spencer Rascoff, David Beitel, and Kristin Acker, Zillow started purchasing houses in 2018, claiming to leverage data to make house flipping profitable at scale. In 2019, the company generated $2.7 billion in revenue.

However, automating everything does not always make sense. In November this year, CEO Barton announced that Zillow would stop purchasing homes, at a time when it already owned 7,000 houses. The real estate company also decided to sell its entire inventory and lay off about 25 per cent of its 8,000 employees. Its house-buying and selling arm, Zillow Offers, lost $420 million in the third quarter of 2021.


Today, we dive deep to explore what exactly went wrong with Zillow’s tech. 

History says…

Flipping houses involves buying a property at a lower value, spending on improvements and renovations, and then selling it at a higher price. What is tricky here is analysing and predicting the potential price of the house or property. Zillow wanted to eliminate the whole bidding and closing process involved in buying and selling houses.

When it first started operations, Zillow built Zestimate, a tool that used multiple data sources to estimate the value of properties. In 2006 itself, Zillow had a database of approximately 43 million homes. Using this, the real estate company was able to predict housing prices with a median absolute percentage error of 14 per cent. As it went on to acquire data on about 110 million homes, the error rate dropped to five per cent.
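The metric cited above, median absolute percentage error, can be computed as follows. This is a minimal sketch with made-up home prices, not Zillow's actual evaluation code:

```python
# Median absolute percentage error (MdAPE): the median of
# |predicted - actual| / actual across all sales, as a percentage.
# All prices below are hypothetical illustration values.

def median_absolute_percent_error(predicted, actual):
    errors = sorted(abs(p - a) / a for p, a in zip(predicted, actual))
    n = len(errors)
    mid = n // 2
    if n % 2:
        return errors[mid] * 100
    return (errors[mid - 1] + errors[mid]) / 2 * 100

predicted_prices = [210_000, 480_000, 95_000, 330_000]
actual_sale_prices = [200_000, 500_000, 100_000, 300_000]

print(round(median_absolute_percent_error(predicted_prices,
                                          actual_sale_prices), 2))  # → 5.0
```

Because the median is used rather than the mean, a handful of wildly mispriced homes does not dominate the headline accuracy figure, which is one reason a model can look accurate on average while still losing money on individual flips.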

While automated valuation tools or methods were not new in the market, Zillow was able to do this on a large scale, and that disrupted the real estate market. 

The great fall and reason behind it 

Things started going south once Zillow’s prediction model started degrading. This resulted in the company buying properties at a much higher price than it was able to sell them for. In November this year, the company stopped buying houses, citing a ‘labour-and-supply-constrained economy’ as the reason.

However, Zillow’s downfall can be termed a data science failure. We evaluate what went wrong with its price prediction model, or, as the company calls it, Zestimate.

Machine learning (ML) models perform well only when they are trained on quality data; feed an algorithm substandard data and its predictions will be substandard too. In all likelihood, this is where Zillow’s price prediction model went wrong. The model was fed either publicly available data or data made available by its own users.

For instance, for a property ‘X’ listed for sale, Zestimate might predict the buying price of X as $50,000. Since the model is not 100 per cent accurate, a 10 per cent error would mean the actual price of X being $45,000. The company has already lost $5,000 there. On top of that, it would still spend on X’s repairs and improvements before selling it.
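The arithmetic above can be sketched as follows. The $50,000 figure is from the text; the renovation budget is a hypothetical number added for illustration:

```python
# Hypothetical flip economics: the model overvalues a home, so the
# company overpays relative to what the home is actually worth.

def flip_profit(predicted_value, error_rate, renovation_cost):
    """Profit if the company buys at the predicted value but the
    home is really worth predicted_value * (1 - error_rate)."""
    purchase_price = predicted_value
    actual_value = predicted_value * (1 - error_rate)
    sale_price = actual_value  # optimistic: assume it sells at true value
    return sale_price - purchase_price - renovation_cost

# The $50,000 example from the text, a 10 per cent overvaluation,
# and a hypothetical $8,000 renovation spend.
print(flip_profit(50_000, 0.10, 8_000))  # → -13000.0
```

Even under the optimistic assumption that the home sells at its true value, the overvaluation plus renovation costs turn the flip into a loss; in a cooling market the sale price can land below true value, widening the loss further.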

Additionally, incorrect data, such as the wrong number of rooms in X, the wrong size of the property, or inaccurate distances from schools, hospitals and markets, will all skew the valuation of X. Thus, the company should have put greater focus on the quality of the data being used to train the ML model.
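One basic defence against such errors is to validate records before they reach the training pipeline. Below is a minimal sketch; the field names and plausibility ranges are hypothetical, not Zillow's actual schema:

```python
# Sketch of input-data validation before training: drop listings with
# missing or implausible fields. Field names and bounds are made up.

def is_valid(record):
    return (
        record.get("bedrooms") is not None and 1 <= record["bedrooms"] <= 20
        and record.get("sqft") is not None and 100 <= record["sqft"] <= 50_000
        and record.get("sale_price") is not None and record["sale_price"] > 0
    )

listings = [
    {"bedrooms": 3, "sqft": 1_500, "sale_price": 250_000},
    {"bedrooms": 0, "sqft": 1_200, "sale_price": 180_000},  # implausible rooms
    {"bedrooms": 4, "sqft": None, "sale_price": 400_000},   # missing size
]

clean = [r for r in listings if is_valid(r)]
print(len(clean))  # → 1
```

Filtering like this does not fix systematically wrong user-supplied values, but it keeps obviously broken records from silently distorting the model.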

Secondly, while algorithms are great at deriving helpful insights, they should not be relied on 100 per cent, especially in cases where uncertainty is high. The housing market is volatile and involves huge monetary stakes; a 10 per cent error can make a large difference. Therefore, when solving problems that come with uncertainty, it is essential to test changes before relying on an algorithm to predict the outcome. Companies solving data science problems with high-risk impacts should always have a team overseeing the model’s outputs.
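One simple way to keep humans in the loop, sketched below with made-up thresholds, is to act automatically only on predictions whose uncertainty band is narrow, and route everything else to manual review:

```python
# A minimal human-in-the-loop guardrail: auto-approve a purchase only
# when the model's uncertainty band around the predicted price is
# narrow enough. The threshold and prices are hypothetical.

def triage(predicted_price, low_estimate, high_estimate, max_spread=0.10):
    """Return the action for a prediction with a [low, high] estimate band."""
    spread = (high_estimate - low_estimate) / predicted_price
    if spread <= max_spread:
        return "auto-approve"
    return "human-review"

print(triage(300_000, 290_000, 310_000))  # tight band → auto-approve
print(triage(300_000, 250_000, 340_000))  # wide band  → human-review
```

A rule like this would not have made Zestimate more accurate, but it would have limited how much money was committed automatically on the predictions the model itself was least sure about.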

Summing up 

Automation and data science models are extremely helpful in analysing data and providing insights. However, they can also fail, as in Zillow’s case.

Earlier, in an interview with media company ZDNet, Zillow’s Chief Analytics Officer Stan Humphries himself said that on any given day, half of all the homes the company transacted were above the Zestimate value, and half were below.

Zillow’s failure, however, does not point to the difficulty of buying and selling houses at a profit so much as to how AI and ML can go wrong when applied to real-world problems.

Debolina Biswas
After diving deep into the Indian startup ecosystem, Debolina is now a Technology Journalist. When not writing, she is found reading or playing with paint brushes and palette knives. She can be reached at debolina.biswas@analyticsindiamag.com