DevOps is a strategic practice that combines software development with IT operations in order to streamline the delivery process and provide continuous integration and continuous delivery (CI/CD). Most cloud providers offer dedicated services around DevOps so that end customers get seamless integration and delivery. Common providers include Amazon Web Services, Microsoft Azure and Google Cloud. These platforms also support machine learning, image processing, GPU computing and high-volume data analysis.
Typical DevOps Structure
Advantages of adopting DevOps include:
- Better operational execution
- Increased deployment flexibility
- More effective collaboration
- Cost-effective maintenance
- Lower capital expenditure
- A streamlined development & deployment process
It also comes with trade-offs:
- It requires a cultural change in an organization
- Cross-skilling expenditure
- Outsourcing becomes more difficult
The Need for DevOps in Data Science
The data analytics industry has evolved rapidly over the last five years, so cost-effective, easy-to-manage development practices have become a key concern. With more teams collaborating across the globe, it is essential for an organization to have a structured development process for delivering to end users.
From a data science perspective, more independent freelancers, consultants and remote teams are working on a wide variety of problems and challenges. There has to be a structured path from development through building the code, testing and deployment to the final stage.
Data science solutions are rarely just a piece of code. For the end user to consume a model, it has to work with a front-end application as well as a backend mechanism. That means three different development teams must integrate at a single point to run the business and deliver value to the customer.
For example, suppose we want to build an image recognition application that recognizes objects in an uploaded image and shows users the predictions. For simplicity, we will keep the user interface and the backend lightweight and use pre-trained models such as VGG16/VGG19.
The following is the UI functionality:
- Upload Image screen
- Display Predictions
- Save the scores
- Open and review previous runs
The Image recognition requirements are:
- Pre-Trained Models
- Scoring process
- Train & Test metrics capturing
Backend requirements are as follows:
- Storing the user details
- Prediction scores saved to a database
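The scoring process above can be sketched end to end: the pre-trained model emits raw class scores (logits), which are converted to probabilities, mapped to labels, and saved as a record for the backend. A minimal, framework-free sketch of that flow (the labels, logits and record format here are illustrative; in practice the logits would come from VGG16/VGG19):

```python
import math
import json

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_predictions(logits, labels, k=3):
    """Pair each label with its probability and return the k best."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

def score_record(image_id, logits, labels):
    """Build the record the backend would persist for one run."""
    return {
        "image_id": image_id,
        "predictions": [
            {"label": lbl, "probability": round(p, 4)}
            for lbl, p in top_predictions(logits, labels)
        ],
    }

# Hypothetical logits for three classes
labels = ["cat", "dog", "car"]
record = score_record("img_001.jpg", [2.0, 1.0, 0.1], labels)
print(json.dumps(record))
```

Each team can then work against this record shape: the UI displays `predictions`, while the backend stores the whole record against the user's run history.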
Overall, three different teams will work collaboratively toward a single goal, so it is essential for a product owner or project manager to define a process for building and operating the product. The old approach was to have a separate operations team handle this complexity, but there are many issues beyond simply taking the code and putting it onto the servers.
Some of the common issues faced are:
- Version mismatch of the libraries
- Multiple builds for a single application
- Effort burn-out when integrating code from multiple teams
- Customers facing issues during deployment
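The version-mismatch issue in particular is usually addressed by pinning exact dependency versions in a shared file that every team and every build server installs from, so all environments resolve to the same libraries (the package versions below are purely illustrative):

```text
# requirements.txt — pinned versions shared by all teams and CI builds
tensorflow==2.12.0
flask==2.3.2
boto3==1.26.150
```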
Data science applications are meant to improve the customer experience, not ship as faulty applications that fail their end purpose. To tackle these practical challenges, cloud providers have introduced services that let all teams work seamlessly.
AWS is one of the leading providers, with a dominant list of services. The same solution can work better with the following strategies:
- Teams integrate their IDE with Git, AWS CodeCommit or any third-party repository
- For machine learning models, AWS offers the SageMaker service
- AWS CodePipeline, together with CodeBuild & CodeDeploy, simplifies the build and release flow
- Build tools such as Jenkins, along with Docker, make the solution scalable, efficient & portable (not strictly part of the requirements, but worth taking advantage of)
- Storage can use AWS services such as DynamoDB for records and S3 for files
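For the CodePipeline/CodeBuild step, the build is typically driven by a buildspec file in the repository root. A minimal sketch (the phase commands and runtime version here are illustrative, not a prescribed setup):

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.9
    commands:
      - pip install -r requirements.txt   # install pinned dependencies
  build:
    commands:
      - python -m pytest tests/           # run the model/app test suite
artifacts:
  files:
    - '**/*'                              # package everything for CodeDeploy
```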
Beyond the advantages leveraged from the cloud above, we also get efficient ways to enable logging, manage costs, build dashboards and derive insights. Some services that can be used on top of the existing requirements are:
- CloudWatch – captures logs of application runs
- IAM – security & user management
- QuickSight – visualization of scores & metrics
- Cost Management – keeps budgets & spending under control
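As a sketch of how scores and metrics could reach CloudWatch for those dashboards, a pure function can build the metric payload and a thin wrapper publish it via boto3's `put_metric_data` (the namespace, metric name and dimension names here are assumptions for illustration, and actually publishing requires AWS credentials):

```python
import datetime

def build_score_metric(model_name, accuracy):
    """Build one CloudWatch MetricData entry for a model's score.
    Metric and dimension names are illustrative, not a fixed convention."""
    return {
        "MetricName": "PredictionAccuracy",
        "Dimensions": [{"Name": "Model", "Value": model_name}],
        "Timestamp": datetime.datetime.utcnow(),
        "Value": accuracy,
        "Unit": "Percent",
    }

def publish_score(model_name, accuracy, namespace="ImageRecognition/Scores"):
    """Send the metric to CloudWatch (needs boto3 and AWS credentials)."""
    import boto3  # imported lazily so the payload builder stays standalone
    boto3.client("cloudwatch").put_metric_data(
        Namespace=namespace,
        MetricData=[build_score_metric(model_name, accuracy)],
    )
```

Separating payload construction from the API call keeps the metric shape easy to unit-test without touching AWS.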
Cost Management in Data Science Cloud Solutions
Most advanced algorithms, such as CNNs and GANs, are compute-intensive and need a lot of memory. On regular infrastructure this becomes a constraint, making it difficult for developers to run their workloads. In one of my previous projects, where we built generative adversarial networks to produce artificial sample images, it was very difficult to run them in our computing environment.
The advent of the cloud has enabled us to use more powerful machines with GPU support that can handle large volumes of data processing. Applications that depend on high-resolution images, audio and video can be processed faster, and building the required architecture, design and execution becomes easier. Purchasing such powerful infrastructure is not cost-effective unless it is used regularly enough to deliver value, which is why most startups, SMEs and mid-sized organizations rely heavily on cloud solutions.
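The rent-versus-buy trade-off can be made concrete with a quick break-even calculation (the prices below are purely hypothetical, for illustration only):

```python
# Hypothetical prices: renting an on-demand GPU instance vs. buying hardware
hourly_rate = 3.00         # USD per hour for a cloud GPU instance (assumed)
purchase_cost = 10_000.00  # USD for a comparable on-premise machine (assumed)

# Hours of GPU usage at which renting has cost as much as buying outright
break_even_hours = purchase_cost / hourly_rate
print(f"Break-even at ~{break_even_hours:.0f} GPU hours")
```

A team that trains only a few hours per week stays well below that break-even point for years, which is why occasional or bursty workloads favour the cloud over owned hardware.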