Founded in March 2018, Turing.com is an AI-powered deep jobs platform. It currently covers 100 technologies and 15 job titles, ranging from entry-level roles to engineering directors and CTOs. “Our mission is to unleash the world’s untapped human potential. Companies all over the world are going remote and are on the lookout for top-notch engineering talent,” said Vijay Krishnan, co-founder and CTO at Turing.com.
In an exclusive interview with Analytics India Magazine, Vijay spoke about how their AI-powered platform enables companies to hire and manage remote developers at the push of a button.
AIM: What impact did “the great resignation” have on the recruitment market?
Vijay Krishnan: In the wake of the pandemic, we’re seeing a transition in how software developers are switching jobs for better compensation and work-life balance. Job openings have hit a new record level and more and more companies are switching to an all-remote or hybrid work setup. As a result, companies are offering competitive salaries to attract and retain top talent.
This offers tremendous opportunities for companies hiring for new positions but of course, creates challenges with retention. It also makes it clear that the recruiting industry has to be able to take developer preferences into account, not merely hiring company preferences.
Remote work is a great way to satisfy both sets of preferences: candidates get a large pool of companies and jobs to choose from, and companies get a large pool of candidates, which in turn maximises the chance of companies hiring ideal candidates and vice versa.
AIM: What’s your game plan for the Indian market? What are the opportunities and challenges you see?
Vijay Krishnan: India is a key focus market for us. We have seen Indian developers bring significant value to our clients, and Indian developers, in turn, welcome the opportunity to work with top US companies, the flexibility to work from anywhere, and very attractive salaries. We have started India-focused community events, marketing activities and developer upskilling workshops to deepen our penetration and focus here.
AIM: How does Turing.com leverage AI?
Vijay Krishnan: We are an AI-backed Intelligent Talent Cloud that helps customers source, vet, match, and manage the world’s best developers remotely. We are a 100% data-driven organisation that leverages, analyses, and exploits various data sources. We build models to serve our customers and developers worldwide.
Our AI team includes various sub-teams that solve the analytics and ML problems for needs classified into supply, vetting, growth, demand, operations and matching. We use AI to solve challenging problems, including demand forecasting, pricing optimisation, automated vetting, and adaptive search/ranking, to name a few.
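The matching problem mentioned above can be illustrated with a toy sketch — not Turing's actual system — that ranks developers against a job by cosine similarity over skill vectors. All skill names and weights below are hypothetical.

```python
import math

def cosine_similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse skill vectors keyed by skill name."""
    shared = set(a) & set(b)
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_developers(job: dict[str, float], developers: dict[str, dict[str, float]]):
    """Rank candidate developers by skill-vector similarity to a job profile."""
    scores = {name: cosine_similarity(job, skills) for name, skills in developers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical job requirement and candidate pool.
job = {"python": 1.0, "ml": 0.8, "sql": 0.5}
devs = {
    "dev_a": {"python": 0.9, "ml": 0.7, "react": 0.4},
    "dev_b": {"java": 0.9, "sql": 0.6},
}
ranking = rank_developers(job, devs)  # dev_a ranks first on skill overlap
```

A production matcher would of course learn these weights from vetting signals rather than hand-assigning them, but the ranking structure is the same.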
Turing gets around 25,000 new developer registrations in a week. These developers upload their CVs and share data about their experiences on our portal. They also take various tests on the portal, including skill-specific MCQs and coding challenges.
Tracking developers through each vetting stage, predicting who can be fast-tracked, and extracting maximum signal from a developer’s profile to match them against upcoming customer job requirements is key. This process involves multiple projects handling different steps of the journey, each with its own analytics techniques and ML models. Ensuring that we build a rich trove of this data continuously and collaboratively is an engineering and governance challenge.
Turing’s solution to this challenge includes:
a. Building a robust data engineering pipeline and lake with a library of common analytical views and components.
b. Leveraging a hybrid data store comprising cloud storage, databases, and feature stores.
c. A continuously evolving shared feature store with versioned developer features and updates.
d. Versioned ML models and techniques, with the ability to quickly apply them to historical data and measure the impact on key business metrics.
e. Use of frameworks like Mode and JupyterHub for easy and seamless collaboration.
f. A/B testing and feature-flag-driven product development to quickly test hypotheses.
g. Fast and easy logging infrastructure integrated into every component.
h. Heavy collaboration between product engineering and AI/ML teams, where the data needs of every feature being built are vetted by the latter.
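Item (f) above can be sketched with a minimal, hypothetical example of deterministic experiment bucketing. The hashing scheme and identifiers are illustrative assumptions, not Turing's implementation; the point is that assignment stays stable across sessions without storing any state.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant by hashing
    the (experiment, user) pair, so the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: gate a hypothetical new vetting flow behind a flag, split 50/50.
variant = assign_variant("dev_12345", "new-vetting-flow")
```

Hashing the experiment name together with the user ID keeps bucketing independent across experiments, so a user's assignment in one test does not bias their assignment in another.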
AIM: Tell us about your tech stack.
Vijay Krishnan: AWS, GCP, BigQuery, Node, React, Python. We build our services with tech stacks that are scalable and reliable. We design with a distributed application framework and a microservices architecture, which allow us to easily scale any service component on the platform while continuing to support high-throughput, low-latency service delivery.
This empowers Turing’s engineering teams to keep building functionality into our product with fast iteration and better quality control. We run both service stacks and AI workloads on multiple clouds, including AWS and GCP, to leverage the best cloud technologies for our product offerings. Our tech stacks continue to evolve as the business grows. Today, we run Node.js with React, and we have taken a step further by looking into solutions like gRPC and GraphQL to prepare for the next phase of business expansion.
AIM: What explains the growing conversation around Responsible AI?
Vijay Krishnan: While AI today (especially deep learning) does its own feature selection and modelling, these steps work towards goals set by humans, using data collected for human purposes. If the data is biased, the model can amplify injustice even by accident.
Discrimination against a sub-population can be created unintentionally, and that’s the fairness issue at the core of AI ethics. The responsibility for ensuring this does not happen lies with the teams, who must test any model for such bias before deploying it.
Before answering how one can handle this issue at scale, it is crucial to understand how the problems originate and manifest.
- Many tools and libraries make creating and applying the models very easy. The business pressures of quickly launching features can make the developers overlook the possibility of biases.
- Models often get linked into a chain where the output of one is used by another. In this case, bias propagates and amplifies, and developers working downstream in the chain may not have visibility into the bias of their input data.
- Biases occur at every layer: from presentation to data, data to model, and model to user interaction via the algorithm. So this is not just a data cleaning/governance issue.
Handling this issue at scale requires:
- Creating awareness among developers about bias
- Catching possible biases during exploratory data analysis
- Validating models on various data sets and comparing their behaviour
- Using models that are explainable, auditable, and transparent.
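As one illustration of catching bias during analysis, here is a minimal, hypothetical disparate-impact check that compares positive-prediction rates across groups. The data, group labels, and the four-fifths threshold below are illustrative, not drawn from Turing's practice.

```python
def positive_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact(predictions: list[int], groups: list[str],
                     reference: str) -> dict[str, float]:
    """Ratio of each group's positive-prediction rate to the reference group's.
    A common rule of thumb (the "four-fifths rule") flags ratios below 0.8."""
    by_group: dict[str, list[int]] = {}
    for pred, grp in zip(predictions, groups):
        by_group.setdefault(grp, []).append(pred)
    ref_rate = positive_rate(by_group[reference])
    return {g: positive_rate(v) / ref_rate for g, v in by_group.items()}

# Toy predictions over two hypothetical sub-populations.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratios = disparate_impact(preds, groups, reference="a")
# group "a" passes at 0.75, group "b" at 0.25 → ratio 1/3 flags a disparity
```

Checks like this are cheap enough to run in exploratory analysis and in CI before any model is promoted, which is exactly where the list above suggests they belong.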
AIM: How do you ensure your data science and AI/ML teams are aligned with the company’s AI governance policies and best practices?
Vijay Krishnan: At Turing, each of our teams is led by very experienced senior AI/ML professionals whose primary job is to ask the right questions of the teams working with data. They ensure that testing is adequate, and they collaborate to discuss what the models have uncovered rather than focusing just on their outputs.
We are constantly trying to understand what the data is telling us: which features are correlated, why the behaviour does not match the hypothesis, and so on. Spotting and questioning any trend helps us identify biases. We also allow ample time to experiment before finalising and deploying things into engineering. A/B testing helps us track unwanted outcomes even if one gets accidentally introduced despite all this care. No deviations are allowed from this iron-fisted process.
Since our data is proprietary rather than third-party, our data privacy issues are simpler. We follow best practices for data protection, securing both database and cloud access.