The Association of Data Scientists (AdaSci), the premier global professional body of data science and ML practitioners, has announced a hands-on workshop on deep learning model deployment on Saturday, February 6.
Over the last few years, the applications of deep learning models have grown exponentially, with use cases ranging from automated driving and fraud detection to healthcare, voice assistants, machine translation, and text generation.
Typically, when data scientists begin machine learning model development, they focus mostly on which algorithms to use, the feature engineering process, and the hyperparameters that make the model more accurate. However, model deployment is the most critical step in the machine learning pipeline: models benefit a business only if they are deployed and managed correctly. Even so, model deployment and management remain among the least discussed topics.
In this workshop, attendees will learn about the ML lifecycle, from gathering data to deploying models. Researchers and data scientists will build a pipeline to log and deploy machine learning models. They will also learn about the challenges of running machine learning models in production and work with different toolkits to track and monitor models once deployed.
The full-day workshop on deep learning model deployment and management will cover topics such as: the differences between traditional software development and machine learning; data management, including collection, preprocessing, augmentation, and analysis; model and hyperparameter selection; methods for model verification; and model deployment, including integration, monitoring, updating, and more.
With the rise of deep learning applications across industries, there is growing demand for professionals who understand how these models are deployed and managed across various use cases. To that end, the workshop provides hands-on experience with machine learning tracking tools such as MLflow (mlflow.org), Neptune, Comet.ml, and Weights & Biases (wandb.ai), along with a brief overview of how deep learning models are deployed in server and serverless frameworks.
The workshop requires candidates to have a basic to moderate understanding of Python, as well as basic knowledge of Pandas, NumPy, scikit-learn, machine learning, and artificial intelligence. Participants are also expected to have a passing familiarity with Google Colab and GPU environments, and an elementary understanding of object storage, databases, and networking. Attendees will need an editor to run Python programs, preferably Google Colab notebooks, and should install Pandas, NumPy, scikit-learn, TensorFlow, PyTorch, and Keras. A high-speed internet connection is also required. The tools and techniques used during the workshop include Flask, Streamlit, MLflow, Neptune, Weights & Biases (wandb.ai), and CircleCI for CI/CD.
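To give a sense of the kind of workflow these prerequisites support, the serialize-then-reload step at the heart of most model deployment pipelines can be sketched in a few lines of Python. This is an illustrative example using scikit-learn and the standard-library pickle module, not material from the workshop itself:

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a small model as a stand-in for any deployable estimator.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Serialize the fitted model, as a deployment pipeline would
# before shipping it to a server or serverless function.
blob = pickle.dumps(model)

# At serving time, deserialize and predict on incoming data.
restored = pickle.loads(blob)
preds = restored.predict(X_test)
accuracy = (preds == y_test).mean()
```

In practice, tracking tools such as MLflow wrap exactly this pattern, storing the serialized artifact alongside the parameters and metrics of the run that produced it.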
Upon completing the workshop, attendees will gain hands-on experience with some of the most widely used tools for model tracking and deployment, and learn about the Flask and Streamlit frameworks. They will also gain proficiency in deploying AI/ML models on AWS in batch, streaming, and real-time modes. Attendees will receive a certificate in hands-on deep learning model deployment and management.
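As a taste of the serving side, a minimal Flask endpoint can wrap a model behind an HTTP route. The route name and placeholder scoring logic below are illustrative assumptions, not the workshop's actual code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # In a real service this would call model.predict(); a
    # placeholder sum stands in for the model's score here.
    payload = request.get_json()
    return jsonify({"score": sum(payload["features"])})

# Flask's built-in test client exercises the route without
# starting a server.
client = app.test_client()
resp = client.post("/predict", json={"features": [1.0, 2.0, 3.0]})
```

Streamlit serves a similar purpose for interactive dashboards, trading the explicit HTTP API for a script-driven UI.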
Details of the workshop:
Date: Saturday, February 6, 2021
Timings (Full day): 10:00 am to 5:00 pm (IST)
Pricing: $12.99 (the workshop is free for AdaSci members)