OncoCoin AG is a Swiss company and a fully owned subsidiary of Innoplexus AG, an international company specialising in AI- and blockchain-based drug discovery and development. OncoCoin AG is best known for initiating the world’s largest pancreatic cancer biomarker study (the PALAS study), led by pancreatic cancer experts in Europe.
OncoCoin AG also runs CURIA, a free mobile app that helps cancer patients identify people with similar symptoms, reach out to doctors for second opinions and find clinical trials. Furthermore, OncoCoin AG introduced one of the world’s largest decentralised, GDPR-compliant Real World Data (RWD) exchanges, powered by the AMRIT token, to help patients pay for services such as independent second opinions and access to digital therapeutics. This utility token enables patients to reclaim ownership of their data and benefit from the revenues derived from it.
In an exclusive interview with Analytics India Magazine, Ashwinkumar Rathod, Co-CEO & Co-Founder, OncoCoin, sheds light on how they embed ethics into their products.
AIM: How does OncoCoin (Innoplexus) leverage AI?
Ashwinkumar Rathod: Innoplexus leverages AI in a three-fold way:
- Translating unstructured public and proprietary data into a knowledge graph. By publicly available data, we mean everything that is crawlable on the web. Our proprietary data covers clinical data, that is, patient files extracted from clinical sites and patient data uploaded by users of our apps CURIA and NEURIA. CURIA addresses the needs of patients with oncological diseases, while NEURIA supports patients with currently incurable neurological diseases such as Parkinson’s disease, Alzheimer’s disease and multiple sclerosis.
- AI is another value driver in our drug discovery pipelines. Our corporate search platform Ontosight™ accelerates biomarker identification by orders of magnitude and helps pharma companies compile shorter, more accurate lists of potential drug candidates.
- Finally, we combine our AI capabilities with publicly available data to predict the future success of ongoing clinical trials. We can not only forecast the outcome of clinical trials but also recommend new trial designs that improve the odds of a positive outcome. The AI-driven insights of our three product families establish the fundamental value of AMRIT: a utility token issued by OncoCoin AG that enables patients to reclaim ownership of their data and benefit from the revenues derived from it.

AIM: Elaborate on some of the AI governance methods, techniques, or frameworks used within your organisation to ensure that your products/solutions provide the best possible experience to the users.
Ashwinkumar Rathod: The backbone of our AI governance is the constant validation and monitoring of AI models by a team of medical experts across all stages of the model life cycle. Furthermore, continuous feedback from stakeholders and users drives innovation and development. For instance, our award-winning products Ontosight™ and Curia™ are enabled by this open atmosphere of rethinking AI solutions from the user’s perspective. Every model used in our product pipelines is validated with a sample large enough to represent the underlying data population, which ensures that all cases, including minority data segments, are thoroughly validated at a high confidence level.
AIM: What explains the growing conversations around AI ethics, responsibility, and fairness of late? Why is it important?
Ashwinkumar Rathod: Rising accessibility of and demand for data, especially from private companies with varying agendas, lead to growing concerns about data privacy. Establishing the authenticity and reproducibility of published results is still a great challenge, which makes it a serious concern for AI ethics and fairness. Ensuring fair and responsible data analysis therefore needs careful regulation or innovative solutions. Ultimately, we believe that trust in AI services requires consumers to reclaim ownership of their private data, as exemplified in our blockchain solution. The need of the hour is to bring stakeholders together to dissolve data silos through a federated ecosystem where everyone can securely contribute their data. In addition, a data lineage recording how models are trained and tested could pave the way for future root-cause analysis (RCA) and allow bias introduced in the data pipeline to be corrected precisely.
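The data-lineage idea mentioned above can be sketched in a few lines: each training run records a content hash of its data alongside the model version and metrics, so a later root-cause analysis can trace which data fed which model. This is an illustrative sketch using the Python standard library; the function and field names are hypothetical, not OncoCoin's actual implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(dataset_rows, model_version, metrics):
    """Build a lineage entry: a content hash of the training data plus
    the model version and evaluation metrics, so a later root-cause
    analysis can pinpoint which data fed which model."""
    # Hash a canonical JSON serialisation so the same rows always
    # produce the same fingerprint, regardless of key order.
    payload = json.dumps(dataset_rows, sort_keys=True).encode("utf-8")
    return {
        "data_sha256": hashlib.sha256(payload).hexdigest(),
        "model_version": model_version,
        "metrics": metrics,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = lineage_record(
    [{"id": 1, "label": "responder"}, {"id": 2, "label": "non-responder"}],
    model_version="tagger-2.3.1",  # hypothetical version string
    metrics={"f1": 0.91},
)
```

Because the hash is deterministic, any silent change to the training data produces a different fingerprint, which is exactly what makes the record useful for RCA.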
AIM: How does your company ensure adherence to AI governance policies?
Ashwinkumar Rathod: The company is committed to agile methodology. We break each project into several phases, each involving constant collaboration with stakeholders and continuous improvement. Once a project starts, the data science team cycles through planning, execution and evaluation. Continuous collaboration among team members, project stakeholders and product owners is vital. Improvement is documented and monitored through version control of both the software we write and the data we curate.
For example, we spent more than 15 months developing a dashboard for clinical practitioners. Every incremental improvement, from an unresponsive automated script for anonymised data extraction to an interactive dashboard that lets physicians review their patients, similar cases and patient journeys, is well documented in our project plans and version control systems. Automated periodic training and testing cycles establish confidence in the published results and in the incremental improvements we make as we collate more diverse datasets over time.
AIM: How do you mitigate biases in your AI algorithms?
Ashwinkumar Rathod: Our AI algorithms are constantly evaluated against objective performance metrics at all stages of their life cycle. Basing downstream decisions on the best-performing algorithms consequently removes human judgment and its biases from the process. Randomising the train and test sets for each segment of the data establishes confidence in the results and surfaces deviations that hint at potential biases. Each final product feature goes through well-documented user acceptance testing (UAT) involving multiple types of stakeholders, ensuring that the final output is as free from bias as possible.
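Randomising train and test sets per data segment, as described above, amounts to a stratified split: shuffle within each segment and cut off a fixed fraction, so minority segments are guaranteed representation in both sets. Below is a minimal standard-library sketch of that idea; the `segment` field and the 90/10 toy data are illustrative assumptions, not OncoCoin's actual pipeline.

```python
import random
from collections import defaultdict

def stratified_split(records, segment_key, test_fraction=0.2, seed=42):
    """Randomly split records into train/test within each segment, so
    every segment (including minority ones) appears in both sets at
    roughly the same ratio."""
    rng = random.Random(seed)
    by_segment = defaultdict(list)
    for rec in records:
        by_segment[rec[segment_key]].append(rec)

    train, test = [], []
    for segment_records in by_segment.values():
        shuffled = segment_records[:]
        rng.shuffle(shuffled)
        # Reserve at least one record per segment for the test set.
        cut = max(1, int(len(shuffled) * test_fraction))
        test.extend(shuffled[:cut])
        train.extend(shuffled[cut:])
    return train, test

# Hypothetical toy data: a 90-record majority and a 10-record minority segment.
records = (
    [{"segment": "majority", "x": i} for i in range(90)]
    + [{"segment": "minority", "x": i} for i in range(10)]
)
train, test = stratified_split(records, "segment")
```

A fixed seed makes the split reproducible, which is what lets repeated evaluation runs expose genuine metric deviations rather than sampling noise.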
AIM: Do you have a due diligence process to make sure the data is collected ethically?
Ashwinkumar Rathod: We scrape public data across many countries, irrespective of variations in the original data’s purpose and format. We always strive to extend the scope of our data to provide an accurate and balanced reflection of the state of the art in medical knowledge and diagnostics. This is especially important when enriching client data, which may be more limited in scope. Models also follow versioning and lineage so that discrepancies can be traced back and their root causes resolved. As a GDPR-compliant organisation, we respect the privacy of individuals and organisations when collecting data. Due diligence therefore happens not just when procuring data but throughout the pipeline, covering the ageing, storage, updating and disposal of data.
AIM: How do you systematically feed ethical principles related to AI and AI applications into your platform?
Ashwinkumar Rathod: Our public data crawlers and proprietary data tagger retrieve information only on biomedical entities and no personal data, which is additionally anonymised in the case of proprietary data, so racial biases or gender imbalances cannot arise in our training data sets. The AI solutions trained on them therefore do not suffer from the shortfalls seen in other AI applications, such as credit rating or automated candidate selection. Furthermore, we host one of the largest curated ontologies of biomedical entities, which gives the AI near-complete coverage of the factual domain. The sorting and filtering of data points are also kept highly flexible, so end-users can always tweak the parameters for their respective use cases.
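The principle of extracting only ontology terms, so that personal data never enters the training set, can be illustrated with a toy dictionary tagger. The three-entry vocabulary below is a hypothetical stand-in for a curated biomedical ontology; a real system would match against millions of terms.

```python
import re

# Hypothetical miniature ontology; a production system would load a
# curated biomedical vocabulary (genes, diseases, compounds) instead.
ONTOLOGY = {
    "pancreatic cancer": "disease",
    "kras": "gene",
    "gemcitabine": "compound",
}

def tag_entities(text):
    """Return only ontology matches; anything outside the biomedical
    vocabulary (names, addresses, other personal data) is ignored."""
    found = []
    lowered = text.lower()
    for term, entity_type in ONTOLOGY.items():
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append((term, entity_type))
    return sorted(found)

tags = tag_entities("Patient John Doe has pancreatic cancer with a KRAS mutation.")
```

Note that the patient name in the input never appears in the output: by construction, only vocabulary terms survive extraction, which is the property the answer above relies on.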
AIM: How does your company ensure the protection of consumer data privacy?
Ashwinkumar Rathod: We are a GDPR-compliant and ISO 27001-certified organisation. Consumer privacy and data security are of prime importance at every step of handling consumer data. Our blockchain platform enables patients to retain ownership of their own data. In general, neither we nor our customers store or process private data without explicit permission from the data owners. Furthermore, as elaborated above, our AI solutions leverage factual information independently of the personal context of the data owner. Consumers’ private data is encrypted with AES-256, which is also approved by the likes of the US National Security Agency (NSA).
AIM: What are your efforts in helping brands foster a trusted, transparent relationship with consumers?
Ashwinkumar Rathod: Transparency about how a model works is often a crucial element in building trust with our customers. We believe that explainability of model predictions goes a long way in helping a non-technical audience understand the model’s internal workings. We provide transparency through various means, starting with explaining how data is segmented across the train, test and prediction sets. We make sure these sets have minimal to zero overlap, so that training bias doesn’t affect the prediction results. As a result, the accuracy figures we publish earn greater confidence from our customers and, consequently, their customers downstream.
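The overlap guarantee described above is straightforward to verify mechanically: compare record identifiers across the three sets and require empty intersections. A minimal sketch, assuming each record carries a unique ID (the function name and toy IDs are illustrative, not OncoCoin's tooling):

```python
def split_overlap(train_ids, test_ids, predict_ids):
    """Report pairwise overlaps between the train, test and prediction
    sets; a leakage-free pipeline should see three empty intersections."""
    train, test, predict = set(train_ids), set(test_ids), set(predict_ids)
    return {
        "train_test": train & test,
        "train_predict": train & predict,
        "test_predict": test & predict,
    }

clean = split_overlap([1, 2, 3], [4, 5], [6, 7])   # no leakage
leaky = split_overlap([1, 2, 3], [3, 4], [5])      # record 3 leaks into test
```

Running such a check as a pipeline gate is one simple way to back published accuracy figures with evidence that no training record influenced its own evaluation.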