
Data Was The Reason Why An Enterprise-Centric Economy Shifted To A User-Centric One


There was a time when companies struggled to make disparate systems talk to each other; communication protocols had to be created from scratch. Developing new systems was a close-knit affair, as almost all data systems were centred around hardware and its efficient usage. In such a scenario, companies were barely integrating data, let alone bringing it all together and extracting value from it.

Though scientists had been trying to solve pattern puzzles in natural phenomena since the early ‘50s, human psychiatry was still considered little more than medical jargon. The proliferation of marketing and the rise of consumerism in the ‘80s pushed scientists to explore combining these two separate but complementary ideas into one, and to make money out of it. The idea was about to stir up human consumerism and the way we think of our world. After 40-odd years of magnificent growth in every aspect of life, spanning the physical sciences, medical sciences, computer science, marketing and internet giants, we have reached a point where the EU had to introduce a new privacy regulation, the GDPR, to limit and streamline what companies do with consumers’ data.

What pushed the former enterprise-centric world to the current user-centric one is the organic shift from hardware-centric to software-centric (or logic-centric) computing. More than a decade ago, most Fortune 500 companies realised that a new kind of oxygen was emanating from all around the world, and that realisation kicked off their data integration efforts. Data, however small or minuscule, made sense to integrate.

Technology Behind Data Has Changed

In the past decade, the technology behind data has changed dramatically, and more than that, a user mindset demanding instant gratification has taken hold. Big data was the hero of the last decade. New hardware was invented to process semi-structured and unstructured formats of data, including online clickstreams, hosted on premises, while the cloud provided low-cost, highly scalable, distributed and fault-tolerant computing. This enabled not only big businesses but also small and mid-sized organisations to implement cloud-based analytical solutions, which made ever more sense as data generation kept increasing. Artificial intelligence had been present in the scientific community since the early ‘50s, but the hardware needed to carry out such computationally heavy workloads was not built until the latter part of the last decade, which opened a whole new perspective and uncovered the vast, largely unexplored space of machine learning and deep learning models.

The Internet of Things, the front-runner on the data-generation bandwagon, is generating exabytes of useful but unused data waiting to be explored. This has also pushed data mining and technologies like NLP to flourish.

With these new data sources, a whole new congregation of open-source software is being adopted, with Hadoop for storage and cluster-computing frameworks like Apache Spark and Apache Storm for processing, and it is cheaper than a data warehouse for similar volumes of data. Many organisations today employ Hadoop-based data lakes to store different types of data in their original, unstructured formats until they need to be structured and analysed.
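
A minimal sketch of this schema-on-read pattern, assuming PySpark is available and a Hadoop-backed lake is mounted; the path "/datalake/raw/clickstream/" and the field names are hypothetical, for illustration only.

```python
# Schema-on-read: raw JSON clickstream events sit in the lake untouched;
# structure is imposed only when an analysis actually needs it.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# Read the raw, unstructured dump (hypothetical path).
raw_events = spark.read.json("/datalake/raw/clickstream/")

# Project and structure only the fields this analysis needs.
daily_clicks = (
    raw_events
    .select("user_id", F.to_date("event_time").alias("day"))
    .groupBy("day")
    .count()
)

daily_clicks.show()
```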

Again, this unstructured big data required data scientists to invent new ways of making it structured and ready for statistical analysis, with new and old scripting languages like Pig, Hive, R and Python. This shift towards acquiring and using open-source software is a major change in itself for established enterprises that were once afraid of even talking about open source.
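
As a small illustration of that structuring step, here is a sketch in Python with pandas (one of the languages named above); the log format, field names and sample lines are hypothetical.

```python
# Turn unstructured log lines into analysis-ready rows.
import re
import pandas as pd

raw_lines = [
    "2023-04-01 10:12:03 user=alice action=login",
    "2023-04-01 10:12:45 user=bob action=purchase",
    "2023-04-01 10:13:10 user=alice action=logout",
]

pattern = re.compile(
    r"(?P<timestamp>\S+ \S+) user=(?P<user>\w+) action=(?P<action>\w+)"
)

# Structure is imposed here, at analysis time, not at collection time.
records = [m.groupdict() for m in map(pattern.match, raw_lines) if m]
df = pd.DataFrame(records)

print(df.groupby("action").size())
```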

All these technological advances are “asynchronous”; enterprises today, however, are integrating analytics with live, synchronous applications. They might fetch data from CRM systems to evaluate the lifetime value of a customer, or optimise pricing based on what supply-chain systems report about available inventory. Integration with these systems is made possible by advances in the component-based, or “microservices”, approach to analytical technology: small snippets of code are embedded into the system to deliver narrowly scoped analytical results, and open-source software has assisted this trend.
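
A minimal sketch of such an embedded analytics microservice, assuming Flask (the article names no framework); the endpoint, payload fields and lifetime-value formula are hypothetical, for illustration only.

```python
# One small service, one narrowly scoped analytical result.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/customer-ltv", methods=["POST"])
def customer_ltv():
    # A CRM-facing application posts a small JSON payload.
    data = request.get_json()
    avg_order_value = float(data["avg_order_value"])
    purchases_per_year = float(data["purchases_per_year"])
    expected_years = float(data["expected_years"])

    # Illustrative formula: average spend per year times expected tenure.
    ltv = avg_order_value * purchases_per_year * expected_years
    return jsonify({"lifetime_value": round(ltv, 2)})

if __name__ == "__main__":
    app.run(port=5000)
```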

This embedded approach has opened up new analytical approaches such as “edge analytics”, which uses automated algorithms to process data from sensors and other devices at the point of collection, and “streaming analytics”, which provides real-time analytics and insights at the very end of the data-generation chain, such as data from the onboard computer of a car or even a bicycle.
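
A minimal streaming-analytics sketch in plain Python, since no specific engine is named here; the simulated speed readings, the 10-reading window and the alert threshold are all hypothetical, and a real deployment would sit on a framework such as Apache Storm or Spark Structured Streaming.

```python
# Process readings as they arrive, at the point of collection.
import random
from collections import deque

def sensor_stream(n_readings=50):
    """Simulate an onboard computer emitting speed readings in km/h."""
    for _ in range(n_readings):
        yield random.uniform(0, 120)

window = deque(maxlen=10)  # sliding window of the 10 most recent readings

for reading in sensor_stream():
    window.append(reading)
    rolling_avg = sum(window) / len(window)
    # The insight is produced immediately, not in a warehouse hours later.
    if rolling_avg > 100:
        print(f"Alert: rolling average speed {rolling_avg:.1f} km/h exceeds the limit")
```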

Conclusion

The future also seems very promising, from near-term technologies like blockchain and AI to our age-old dream of flying cars. As flag bearers of information technology, we are in a transient phase where mainframes are still humming away in the basements of many lower-Manhattan financial firms. Firms still use statistical packages, Excel sheets, data warehouses and marts, visual analytics and BI tools. In spite of these age-old technologies, it is very heartening to see companies exploring ideas in analytics and data science that ultimately impact the human race, and for good.

Mark Weiser, Chief Technologist at Xerox PARC in the 1990s, articulated a far more powerful role for technology when he said, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”

This should be the ultimate purpose of technology: to be a manifestation of oneself and even to create peace and contentment. Technology can and should extend beyond the black box to become something far more powerful, an advocate that empowers you to be smarter, better, more capable and more accomplished.

PS: The story was written using a keyboard.

Ashish Panchal

Ashish is a part of the AIM Writers Programme. He is a Digital Marketing Analyst and former BI Architect at a noted IT firm. With over 10 years of extensive experience in delivering data engineering and cloud-based BI solutions, he has helped clients across domains enable their digital platforms and extract insights from their business intelligence suites, thereby triggering and tapping new business opportunities.