Applying any machine learning technique requires huge amounts of data. With human-machine interaction on the rise in speech technologies, chatbots and photo gallery organisers, among others, there is a pressing need to protect the privacy of data owners. In this age of data protection, a deep tech startup called Eder AI is working to solve the problem.
Analytics India Magazine got in touch with Eder AI’s CTO, Utkarsh Saxena, to know more about their work in the area.
How Did It All Begin
The idea behind Eder AI is to decentralise artificial intelligence. Today, data owners must relinquish control over their data to whichever organisation's servers they upload it to, and that organisation can potentially do whatever it wants with the data. This is not just a problem with the centralised nature of AI; it is a fundamental limitation of AI itself. Many industries, organisations and even individuals generate valuable data that could help cure diseases, improve governance, distribute resources more fairly and advance research in general. But because this data can be highly sensitive, its owners are sceptical about sharing it with third parties. That is the fundamental problem Eder AI is trying to solve.
The team currently has seven people working on the core product, of whom four are AI and cryptography engineers. They also work with research associates from IISc, CMU, Cornell and Stanford, who guide the research aspect of the technology, while the core team figures out its application and engineering. The team's expertise lies in converting research papers into optimised solutions, even if that means deploying a deep neural network directly on hardware.
Over the past year and more, they have extensively interviewed professionals to validate the user journey and build a pipeline of potential customers. Based on this, they have built various modules, are testing part of their stack with real-world customers, and are revenue positive.
Eder's first offering is Fluid Protocol. It defines a set of working rules that let an organisation collaborate with an AI service provider, making use of the organisation's data without revealing it to the AI vendor. The team believes it will enable the AI ecosystem to unlock the massive potential hidden in untapped data. The implementation is based on new architectural frameworks for secure computation, combining an architectural proof of privacy with a mathematical proof of privacy to secure AI computations while protecting everyone's IP.
Saxena said that seven out of ten business conversations between enterprises and AI startups fail to convert into business: both parties want to work together, but data policy and IP concerns come in between, and neither can easily tweak its operational processes. Eight out of ten AI startups and service providers don't have an on-premise operational model; they customise one only if asked.
Fluid is a secure AI computation framework that enables enterprises and AI service providers to engage in AI operations. It uses a distributed architecture to share the computation and the data (model plus training data, or model plus data for prediction) between the two parties, with a mathematical proof of privacy.
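The article does not describe Fluid's internals, but the general idea of computing jointly over data that neither party reveals can be illustrated with additive secret sharing, one of the standard building blocks of secure multi-party computation. The sketch below is purely illustrative (the field modulus and function names are assumptions, not Eder AI's implementation): each party holds only a random-looking share of a value, yet the parties can still compute a sum without either seeing the other's raw input.

```python
import random

PRIME = 2**61 - 1  # field modulus; an illustrative choice, not Eder's


def share(secret, n=2):
    """Split an integer secret into n additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def reconstruct(shares):
    """Recombine shares; any single share alone is just random noise."""
    return sum(shares) % PRIME


# Two private values, each split into two shares:
a_shares = share(42)
b_shares = share(100)

# Each party adds its local shares of a and b; only the combined
# result is reconstructed, never the individual inputs.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```

Addition is the simplest case; real MPC protocols extend this with techniques for multiplication and comparisons, which is what makes training full models on shared data possible.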
In the future, the Fluid protocol plans to enable collaborations between multiple organisations, which can all use their data to train a much better AI without revealing it to each other. In industries like healthcare and finance, it is important to leverage all the data available across enterprises and organisations. Other industries, like pharmaceuticals, airlines and nuclear power, haven't yet leveraged AI at all.
Uniqueness Of Eder AI
Conventional AI practices require data to be moved away from its owner, who must relinquish control of it to the service provider; as a result, a large pool of data owned by research institutes and organisations remains inaccessible. At Eder AI, the team is building an AI framework that lets enterprises and AI service providers engage in secure training and deployment of AI models, significantly augmenting the value created from private data. The aim is to build a secure computation protocol that ensures a mathematical and cryptographic proof of privacy for computation between two or more entities.
Overcoming The Hurdles
Like any other startup, the Eder AI team has faced its share of challenges. Talking about these difficulties, Saxena said, “The first challenge that we experienced was to distil key problems to solve that would be relevant to a majority of the stakeholders in extended deep learning / AI domain.” Following this, they had to rapidly become a research-to-product team and test various hypotheses in the industry.
They have also occasionally encountered concerns about team members not coming from a particular academic or achievement pedigree, which only encouraged them to work harder at building market-relevant products.
With 2019 considered the year of privacy tech, Eder AI is looking forward to interviewing hundreds of AI startups and enterprises to solve for secure AI. Its long-term goal is to build technologies and frameworks for all aspects of privacy-preserving AI, spanning techniques like differential privacy, federated learning, multi-party computation and homomorphic encryption. There will also be new kinds of data to protect, such as sleep patterns, reading patterns and consumption patterns.
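Of the techniques named above, differential privacy is the easiest to demonstrate in a few lines. The sketch below shows the classic Laplace mechanism on a counting query; the function name and dataset are illustrative assumptions, not anything Eder AI has published. Because a count changes by at most one when any single person's record is added or removed (sensitivity 1), adding Laplace noise with scale 1/epsilon makes the released number epsilon-differentially private.

```python
import math
import random


def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    (Illustrative sketch; not Eder AI's implementation.)
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) by inverse-CDF from Uniform(-0.5, 0.5):
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise


# Hypothetical sensitive dataset: ages of individuals.
ages = [23, 35, 41, 29, 52, 61, 18]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
# The noisy answer stays useful in aggregate, while masking
# whether any one individual is in the dataset.
```

A smaller epsilon means more noise and stronger privacy; choosing that trade-off per query is the hard part in practice.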
With the decentralised internet on its way, the team aims to make all of this data accessible for research and innovation while its owners retain absolute control. Eventually, the team wants to work on technologies of the future, like brain-computer interfaces, which are only possible in a secure, decentralised future.