Housing.com has been investing in AI and ML to help its users in every step of their home buying or home renting journey. “As an organisation, we believe in AI and its impact on consumer experience on the platform,” said Vipin Kumar Singh, head of technology, Housing.com, PropTiger.com and Makaan.com.
AIM: How does Housing.com leverage AI?
Vipin Kumar Singh: Here are some of the major use cases where we harness the power of AI:
- Boost Efficiency and Productivity with Digital Automation: We audit property images uploaded by users, detecting competitors’ logos, blurred images, unwanted text and NSFW content.
- Automated Valuation Model: Predicts a property’s resale and rental price. The predictions feed various product workflows, helping sellers list their properties at the right price and property seekers avoid overpaying.
- Audio Processing: Processes audio recordings of conversations between consumers and our pre-sales and sales teams.
- Personalised Customer Experience: Our recommendation engine powers both search and discovery and self-serve package subscriptions for paying customers.
- Text Processing and NLP: For keyword extraction from user-generated property descriptions.
- Data Insights: We process data from various sources in near real time via a well-architected data platform to provide business insights. This aids competitive analysis, helps determine customer churn and improves renewal rates.
- AI-Powered Chatbots: Customer interactions for property search and customer support run on easily customised decision-tree workflows.
- Marketing Efficiency Improvement: Multi-channel marketing achieves higher ROI through marketing intelligence such as Market Mix Modelling.
- Fintech Fraud Detection: AI detects fraud with much better accuracy, and the underlying ML models can evolve with changing patterns of fraudulent activity.
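As a rough illustration of the Automated Valuation Model idea above, a comparables-based estimator can price a property from its nearest neighbours by area and bedroom count. The features, sample data and nearest-neighbour approach here are illustrative assumptions, not Housing.com’s actual model:

```python
# Hypothetical sketch of an automated valuation model (AVM):
# estimate a price from the k most comparable listings.
# Feature choice and scaling are illustrative assumptions.

def estimate_price(listings, area_sqft, bedrooms, k=3):
    """Estimate price as the mean of the k most similar listings.

    listings: list of (area_sqft, bedrooms, price) tuples.
    """
    def distance(listing):
        # Scale area so it doesn't dominate the bedroom difference.
        return abs(listing[0] - area_sqft) / 100 + abs(listing[1] - bedrooms)

    comparables = sorted(listings, key=distance)[:k]
    return sum(l[2] for l in comparables) / len(comparables)

# Toy comparable listings: (area in sqft, bedrooms, price in INR).
comps = [
    (900, 2, 6_000_000),
    (950, 2, 6_300_000),
    (1100, 3, 7_500_000),
    (1500, 3, 9_800_000),
    (600, 1, 4_200_000),
]
print(estimate_price(comps, 1000, 2))
```

A production AVM would of course use many more features (location, age, amenities, market trends) and a learned model rather than a hand-tuned distance, but the comparables idea is the same.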
AIM: Could you elaborate on Housing.com’s AI governance framework?
Vipin Kumar Singh: From an approach POV, we use the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology to ensure each data science problem is solved holistically and meets business expectations and standards.
We have implemented a data governance policy that ensures consistent and transparent data availability for AI projects via data platforms. Moreover, we periodically track key performance metrics such as recall, precision and F1 score on production deployments. This helps us identify data drift early and recalibrate or retrain models as and when required.
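The metric-tracking step can be sketched as follows; the baseline threshold and sample labels are illustrative assumptions, not Housing.com’s actual monitoring stack:

```python
# Illustrative model-monitoring check: recompute precision, recall
# and F1 on a labelled sample of recent production predictions, and
# flag a retrain when F1 drops below a baseline by some tolerance.

def prf1(y_true, y_pred):
    """Precision, recall and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def needs_retraining(y_true, y_pred, baseline_f1, tolerance=0.05):
    """True when current F1 has drifted below the baseline."""
    _, _, f1 = prf1(y_true, y_pred)
    return f1 < baseline_f1 - tolerance

# Toy labelled sample from production.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(prf1(y_true, y_pred))
print(needs_retraining(y_true, y_pred, baseline_f1=0.90))
```

In practice such a check would run on a schedule against freshly labelled data, with the alert wired into the team’s retraining workflow.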
Our fraud detection models are prone to shifts in data patterns as potential fraudsters try to game the system. Putting the right frameworks and governance methods around performance tracking has helped us ensure optimal standards.
For us, ethical issues like data privacy, security and transparency in AI development are of the highest priority, as they are part of the organisation’s risk framework. Our core values empower our data scientists and ML engineers with an open culture where failures are appreciated as part of growth. Sustainable and correct methods are always given priority over faster results.
AIM: What explains the growing conversation around AI ethics, responsibility, and fairness?
Vipin Kumar Singh: Ethical AI guarantees that an organisation’s AI systems respect human dignity and do not harm individuals in any way. This covers a wide range of issues, including justice, non-weaponisation and responsibility, as in the question of liability when a self-driving car causes an accident.
Creating awareness and literacy around AI ethics is increasingly the need of the hour, as the field is rapidly moving from being primarily a research area to a key business function that supports day-to-day business processes.
As AI-driven processes are not always explainable, as in the case of neural networks, it becomes very important to ensure the right data and methods are used to build AI capabilities. Poorly designed projects, built on faulty, inadequate or biased data, can have unintended and potentially harmful consequences.
AIM: How do you mitigate biases in AI systems?
Vipin Kumar Singh: We live in a world of biases, which often get reflected in the data sets used to train AI models. In a recent example, Twitter’s image-cropping algorithm favoured white people’s faces over Black people’s, which raised many eyebrows. We do mindful data analysis and preparation to eliminate bias introduced by the data. There is also the possibility of the ML algorithm itself being biased in some way, so the requisite checks are made before picking any algorithm. On top of that, there is always the possibility of human bias based on who is working on the problem; this is minimised through multiple rounds of solution, code and data reviews by different members of the team. As a final step, we deploy all our models with a feedback-gathering framework so that any biases affecting users get highlighted and corrected proactively.
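One simple check of the kind described is comparing a model’s positive-prediction rate across user groups, a demographic-parity style measure. The group labels, sample predictions and flagging threshold below are hypothetical, not Housing.com’s actual process:

```python
# Hypothetical bias check: compare a model's positive-prediction rate
# across groups and flag a large gap for human review.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def parity_gap(preds_by_group):
    """Largest difference in positive rate between any two groups."""
    rates = {g: positive_rate(p) for g, p in preds_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy binary predictions, split by an assumed group attribute.
preds_by_group = {
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 1, 0],
}
gap, rates = parity_gap(preds_by_group)
print(rates)
print(gap > 0.2)  # flag for review when the gap is large
```

Which fairness metric is appropriate (parity of rates, of error rates, or something else) depends on the use case, so a check like this is a trigger for review rather than a verdict.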
AIM: What processes do you have in place to protect consumer data?
Vipin Kumar Singh: Here are the ways we ensure data privacy:
- Anonymising and encrypting data at the source, at rest and in transit.
- Defining and implementing a data privacy and security policy, followed by periodic internal and third-party audits to identify improvement areas.
- Staying vigilant about customer feedback on privacy around our business, as well as trends across the industry, and applying privacy-by-design principles in our system design processes.
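The anonymisation step above could be sketched as salted-hash pseudonymisation at the source, so downstream analytics never sees raw personal data. The field names and salt handling are illustrative assumptions, not Housing.com’s actual pipeline:

```python
# Illustrative anonymisation at the source: replace direct
# identifiers with salted SHA-256 tokens before the record
# enters analytics pipelines. Fields and salt are assumptions.

import hashlib

SALT = b"example-rotating-salt"  # in practice, held in a secret store

def anonymise(record, pii_fields=("name", "phone", "email")):
    """Return a copy of the record with PII fields pseudonymised."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(SALT + out[field].encode()).hexdigest()
            out[field] = digest[:16]  # truncated pseudonymous token
    return out

lead = {"name": "A. Kumar", "phone": "9876543210", "city": "Mumbai"}
print(anonymise(lead))
```

Because the same salt yields the same token for the same value, joins and de-duplication still work on the pseudonymised data; rotating the salt breaks linkability when required.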
We’ve used open models for use cases like object detection. Our data science leadership does a thorough review of the literature around models before picking them up. We test these models internally to look out for issues around our AI development principles before taking them into production.