We spoke to Ness Digital Engineering CTO Moshe Kranc, who has set the technical vision for the New Jersey-headquartered company and is leading innovation through digital transformation, big data analytics, cloud technologies and machine learning. Kranc has 30 years of experience in high-tech and holds five patents in areas related to pay television and computer security. He is also leading the charge of digital transformation at Ness, a company known for its engineering excellence and acknowledged in the industry for its distributed agile development.
Kranc, a well-known figure in the technical community, is also the author of The Hasidic Masters’ Guide to Management, which serves as a guide to novice and experienced managers. In this interview, Kranc talks about how RPA is not the magic bullet organizations seek, the challenges of rolling out an AI-based initiative, how DevOps enables rapid software deployment, and why we need to champion the Open Data Movement. Interestingly, Kranc was one of the voices back in 2015 who talked about the decline in popularity of Hadoop, and today we see the effects: the Hadoop ecosystem has been upstaged by more modern technologies. Read the full interview to find out more.
1) Can you share a few big data analytics use cases from the retail and financial sectors?
In the financial sector, Know Your Customer (KYC) is a ubiquitous regulatory requirement that requires analyzing data from all parts of the business in order to reconcile them into a single consistent picture of the customer. The challenge here is data wrangling and de-duplication on massive amounts of data. Fortunately, AI provides the tools needed to automate large portions of this task with humans handling the “grey areas.” On the horizon are techniques that can help eliminate the need for human intervention from the data cleansing process.
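The data-wrangling and de-duplication step described above can be illustrated with a toy sketch using only Python’s standard library; the records, fields and similarity threshold are all made-up assumptions, and a real KYC pipeline would use far richer matching rules:

```python
from difflib import SequenceMatcher

# Toy customer records from two lines of business: (name, city).
records = [
    ("Jonathan Smith", "New York"),
    ("Jon Smith", "New York"),
    ("Maria Garcia", "Boston"),
]

def similarity(a, b):
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def deduplicate(records, threshold=0.7):
    """Greedily cluster records whose names are similar and cities match.
    Borderline scores (the "grey areas") would go to a human reviewer."""
    clusters = []
    for name, city in records:
        for cluster in clusters:
            ref_name, ref_city = cluster[0]
            if city == ref_city and similarity(name, ref_name) >= threshold:
                cluster.append((name, city))
                break
        else:
            clusters.append([(name, city)])
    return clusters

clusters = deduplicate(records)
print(len(clusters))  # prints 2: the two "Smith" variants collapse into one
```

The greedy pass keeps the sketch short; production matchers typically score many fields and calibrate thresholds against labeled duplicates.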
For retail, Big Data Analytics can answer questions that improve the retailer’s ability to manage the business. For example, a cold snap is coming – what items will see an increase in demand and which will decrease? Did the items we put on sale last week generate an overall increase or decrease in revenue? What is the optimal combination of items to put on sale next week? How should inventory be optimally distributed so we don’t have too much of one item and not enough of another item at a given store? What are the early warnings that an online shopper is about to abandon their shopping cart, and what can be done to prevent it? Any one of these insights is worth money to retailers, and they are all available via Business Intelligence based on Big Data.
2) Earlier this year, Ness announced the Personalization Accelerator solution that applies ML to get better insights. Can you tell us how this solution helps businesses achieve a better customer experience?
For retail, the use case with the most visible ROI is personalization – it has consistently provided a double-digit lift in click-through rates and revenue (noted by various sources) when implemented correctly.
For an online retailer, this could mean adjusting the landing page or search results page so that the customer sees items that are directly relevant to his/her interests. For a bricks and mortar retailer, this could include personalized coupons and email campaigns. The goal is to match up the metadata about a specific item (e.g., a wristwatch’s price category can be one of: budget, mid-market, luxury) with each user’s specific taste (e.g., this user has in the past almost always looked for information about luxury watches). Information about the user’s taste can be learned from sources such as the user’s on-site behavior and loyalty programs that entice users to register and provide personal information. Another common form of personalization is collaborative filtering: customers like you also bought the following items and customers who bought the item you just purchased also purchased the following items.
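The “customers who bought X also bought Y” form of collaborative filtering mentioned above can be sketched with simple co-occurrence counting; the basket data and item names below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy purchase history: one set of item ids per customer.
baskets = [
    {"watch", "strap", "polish"},
    {"watch", "strap"},
    {"watch", "polish"},
    {"strap", "charger"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def also_bought(item, top_n=3):
    """Items most frequently co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(sorted(also_bought("watch")))  # ['polish', 'strap']
```

Real recommenders replace the raw counts with similarity measures and matrix factorization, but the co-occurrence intuition is the same.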
The hard part of personalization is not the algorithms – these are available from a variety of Open Source projects and Cloud providers. The challenge is putting the pieces together and tuning them so they give results that make sense to the customer. To overcome this challenge, Ness has developed a Personalization Accelerator that we use to help customers quickly visualize and tune their personalization. This enables retailers to get to market that much faster with their personalization campaigns.
3) Can you talk about the adoption of DevOps in the company and how it is important for driving digital disruption?
DevOps is a culture, a mindset and a process whose purpose is to enable rapid software deployment. DevOps achieves this by breaking traditional silos between development and operations and by providing tools and processes that automate the integration and deployment pipelines. When combined with Agile development and a microservices-based architecture, DevOps can reduce a company’s time to innovate (i.e., the time it takes from an idea’s conception to launch) from months to days. That’s why it is a key enabler for digital disruption. Your customers expect you to respond to their needs and roll out new features at the speed of Facebook, and your competitors are rolling out new digital services every day. To keep up you must adopt DevOps in combination with Agile and microservices.
4) Many enterprises are undertaking Robotic Process Automation initiatives in 2018. What is the attraction? What are the challenges in deploying an RPA solution?
RPA took off in 2017 because it promises short-term ROI for a very modest initial cost. Consider a Business Process Management (BPM) project, which requires a large initial investment to analyze and automate an entire process end-to-end, e.g., W fills in a form, submits it to X for validation, then it gets sent to Y for approval, and then to Z for implementation. Such a project has high risk and high reward. By contrast, an RPA project has much more modest goals (e.g., automating W’s filling in of the form) with a much more modest but low-risk ROI.
But, RPA is harder than it looks:
- It often requires more time than anticipated to understand the business process, because it requires deep domain knowledge that human employees may be hesitant to share for fear of “bots” taking over some of their responsibilities.
- Sometimes human intervention is required because the task requires human intelligence to make decisions or to ensure compliance with industry regulations.
- RPA must not only work, it must also be enterprise-grade, providing capabilities such as orchestration, high availability, disaster recovery, monitoring, auditing and governance.
- There is a temptation to modify the process rather than just automate it. This creates a moving target that may never converge.
- Automated processes can “melt down” if circuit breakers have not been built into the process. Think of automated stock trading programs that got into a vicious cycle and caused a stock market meltdown in 2010.
The key to success is to look at RPA not as a magic bullet, but rather as just another part of your enterprise platform that requires planning, can incur technical debt, and needs governance.
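The circuit-breaker safeguard mentioned in the list above can be sketched as a small guard class; this is a minimal illustration, not a production pattern, and the failure threshold is an arbitrary assumption:

```python
class CircuitBreaker:
    """Halts an automated process after repeated failures instead of
    letting it retry (or trade) itself into a meltdown."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # an "open" breaker blocks further calls

    def call(self, action):
        if self.open:
            raise RuntimeError("circuit open: escalate to a human operator")
        try:
            result = action()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # stop the bot; a human must reset it
            raise
        self.failures = 0  # a success resets the streak
        return result
```

Each bot step would be wrapped in `call()`, so a run of consecutive failures trips the breaker and forces human review rather than runaway automation.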
5) Artificial Intelligence seems to be at the peak of the hype curve. How will it affect the enterprise in 2018? What challenges will enterprises face in rolling out AI-based initiatives?
AI is not just hype – it’s real, and it has matured to the point where it can bring tangible benefit to the enterprise in 2018. Fifteen years ago, it was hard to program collaborative filtering – you had to understand the math well enough to implement it. Nowadays you don’t have to understand much – there are several, excellent publicly-available implementations that require minimal tuning to produce excellent results. The barrier to entry has been eliminated.
How can the enterprise benefit from AI? I would reverse the question – It’s hard to think of a use case where AI would not provide benefit. For example:
- Data cleansing: Tools like Paxata and Tamr use AI to automatically cleanse Big Data with humans required only to handle “gray areas.”
- Robotic Process Automation: AI can “listen in” to human activity in an enterprise and determine which processes can be automated. For some of those processes, AI can mimic complex human behavior that requires making judgment calls.
- Customer interaction: Chatbots can be deployed that engage in a dialogue with the customer and provide good answers to most questions, and defer to humans to answer more complex questions. The interface to these chatbots can be via typed text or via bi-directional speech.
- Churn analysis: AI can identify behavior patterns that are leading indicators of customer churn and provide a retention plan that is personalized to each wavering customer.
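As a rough illustration of the churn-analysis idea above, the score below combines a few common leading indicators; the features, weights and threshold are invented, and a real system would learn them from labeled history:

```python
# Hypothetical per-customer behavior features (illustrative only):
# days since last activity, recent support tickets, spend trend.
customers = {
    "alice": {"days_inactive": 2,  "tickets": 0, "spend_trend": 0.10},
    "bob":   {"days_inactive": 45, "tickets": 3, "spend_trend": -0.40},
}

def churn_risk(c):
    """Crude weighted score over assumed leading indicators of churn."""
    return (0.02 * c["days_inactive"]
            + 0.15 * c["tickets"]
            - 1.0 * c["spend_trend"])

def retention_plan(name, features, threshold=0.5):
    """Personalize the intervention to each at-risk customer."""
    if churn_risk(features) < threshold:
        return None  # not wavering; no intervention needed
    if features["tickets"] > 0:
        return f"priority support outreach for {name}"
    return f"win-back discount for {name}"

for name, features in customers.items():
    print(name, retention_plan(name, features))
```

The point is the shape of the pipeline: score the leading indicators, then branch to a retention action matched to why the customer appears to be leaving.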
There are several challenges to implementing AI as well:
- Finding skilled manpower that understands the AI tool stack: There is a real shortage in the market, and companies are fighting over scarce talent. At Ness, we hire expertise from the market, but augment that with internal training to grow our own AI experts to meet market demand.
- Using immature AI-based technologies and products: The brochure always overstates the actual capability, and it does not tell you those cases where the AI falls flat on its face. There is no substitute for experience – find a partner who has done this before.
- Making the AI enterprise-grade: Your AI algorithm may work well on an engineer’s desktop – that’s a great start. But, you still need to productize the AI, so that it can be deployed reliably in a scalable microservices architecture, e.g., a parallelized TensorFlow algorithm running in multiple Docker containers that is automatically deployed as part of your CI/CD pipeline. This requires expertise in the intersection among AI, microservices, DevOps and Cloud architectures. The best advice is to find a partner who has done this before.
6) You have often said that Hadoop would fade, and it has come to pass: the hype has faded. Can you tell us what advantages Spark and Cassandra have over Hadoop and why major tech companies are adopting Apache Spark?
In its early days, Spark had a killer advantage over Hadoop’s Map-Reduce framework – the ability to pass results from one stage of the processing pipeline to the next in memory rather than via disk. Hadoop subsequently caught up thanks to Tez, but Spark continues to have other significant advantages over Hadoop-based data processing engines.
- Ability to define arbitrarily complex processing flows.
- Ability to handle both real time and batch processing using the same interface.
- Ability to run in a variety of file systems and resource management environments other than Hadoop.
- Multiple programming models, including Java, SQL and Data Frames.
- A mature code base and a vibrant Open Source community.
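Spark’s original in-memory advantage can be illustrated, without Spark itself, by chaining pipeline stages as Python generators so that nothing is materialized between stages (classic MapReduce wrote each stage’s output to disk before the next job could start):

```python
# Three toy pipeline stages, composed lazily in memory.

def extract(records):
    # Stage 1: normalize raw input.
    for r in records:
        yield r.strip().lower()

def transform(records):
    # Stage 2: derive a feature from each record.
    for r in records:
        yield len(r)

def load(records):
    # Stage 3: aggregate the results.
    return sum(records)

raw = ["  Alpha ", "Beta", "  Gamma  "]
# Stages stream record-by-record; no intermediate dataset is written out.
total = load(transform(extract(raw)))
print(total)  # 5 + 4 + 5 = 14
```

Spark generalizes this idea to distributed datasets: a whole DAG of stages is planned and executed with intermediate results kept in memory where possible.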
If there is any threat to Spark on the horizon, I believe it will come from serverless databases like Amazon’s Athena, which can drastically reduce the cost of queries by spinning up servers on demand. Cassandra and Hadoop’s HBase are both reasonable choices for use cases that require NoSQL databases. But, for customers who already have a Hadoop cluster, I usually recommend HBase because the advantages of Cassandra do not outweigh the cost of installing yet another Big Data cluster to run Cassandra. In fact, I believe serverless databases, which deploy computing resources on demand, will enter the mainstream, because they can be an order of magnitude less expensive than traditional, dedicated resource databases in the Cloud.
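A back-of-envelope calculation shows why the pay-per-query model can be an order of magnitude cheaper for light workloads; all prices and volumes below are illustrative assumptions, not actual quotes:

```python
# Compare a pay-per-query (serverless) model against an always-on cluster.
# All numbers are illustrative assumptions for a light query workload.

queries_per_month = 100
tb_scanned_per_query = 0.05      # 50 GB scanned per query (assumed)
price_per_tb_scanned = 5.00      # serverless: pay per TB scanned (assumed)
cluster_price_per_hour = 2.00    # dedicated cluster rate (assumed)
hours_per_month = 730            # cluster billed around the clock

serverless = queries_per_month * tb_scanned_per_query * price_per_tb_scanned
dedicated = cluster_price_per_hour * hours_per_month

print(f"serverless: ${serverless:.2f}/month")  # $25.00
print(f"dedicated:  ${dedicated:.2f}/month")   # $1460.00
```

The gap narrows as query volume grows, since the dedicated cluster’s cost is flat while the serverless bill scales with data scanned.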
7) You have also been a champion for the Open Data Movement. How can a company gain any financial benefit from releasing their data?
I do believe in the “network effect” for data – the more data that is publicly available, the more valuable each individual enterprise’s data becomes. For example, a retail store whose business was down last weekend might want to look for a correlation to general business trends in the store’s immediate neighborhood in order to better plan for future fluctuations. That data is only available if every store in the neighborhood contributes data (with privacy and anonymity assured), and every store in the neighborhood then reaps the benefits.
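The neighborhood-correlation idea can be sketched with a plain Pearson correlation over hypothetical weekly sales figures:

```python
from math import sqrt

# Hypothetical weekly sales: one store vs the neighborhood aggregate.
store = [120, 135, 90, 150, 80]
neighborhood = [1000, 1100, 800, 1200, 700]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(store, neighborhood)
print(round(r, 2))  # close to 1.0: the store's dip tracked the whole area
```

A correlation near 1 suggests the weak weekend reflected a neighborhood-wide trend rather than a store-specific problem, which is exactly the signal that shared (anonymized) data makes visible.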
Sometimes the benefit may be less direct, e.g., good PR for the company. For example, Uber shared its drivers’ GPS data with the world in order to help urban planners improve their highway planning. This helps portray Uber as a “good citizen.”