On March 7, Microsoft held Azure Open Source Day to showcase its dedication to open source and emphasize the potential of open source tools in creating intelligent applications with greater speed and flexibility. The tech giant released several updates, ranging from cloud-native apps to machine learning foundation models.
The event kickstarted with a group of experts from GitHub, HashiCorp, Microsoft, and Redis participating in a panel discussion. They discussed the development of open source in software, how it impacts the software supply chain and security, and how new AI capabilities may affect the future of open source.
The panelists included Brendan Burns, corporate vice president of Azure OSS Cloud Native; Stormy Peters, GitHub’s vice president of communities; and Sarah Novotny, director of Microsoft’s Open Source Strategy and Ecosystem.
Key Takeaways from the Event
Microsoft launched a cloud-native application that helps people reunite with their lost pets using fine-tuned machine learning. The app rids you of the hassle of printing posters by using an advanced machine learning image classification model, fine-tuned on images from your camera roll. The model has been trained to match the photo of a pet instantly, so you can connect with the owner as soon as you snap a picture of the found animal.
The app’s frontend is a .NET Blazor application, while the backend services are written in Python. To simplify connectivity between the microservices, the distributed application runtime (Dapr) is employed, which also provides helpful application programming interfaces (APIs).
For the backend, a pre-built vision model from Hugging Face is utilized, which is fine-tuned directly through Azure Machine Learning to enable model training and prediction. The entire app is deployed through Bicep templates and operates on Azure Kubernetes Service. Kubernetes Event-Driven Autoscaling (KEDA) is employed to facilitate autoscaling based on the volume of messages transmitted through Dapr.
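To make the architecture concrete, here is a minimal sketch of how a Python backend service might publish an event through its Dapr sidecar. It assumes the official `dapr` Python SDK is installed, a sidecar is running alongside the service, and a pub/sub component named "pubsub" is configured; the topic name and payload fields are purely illustrative and are not taken from Microsoft’s sample app.

```python
# Minimal sketch: the Python backend publishes a "pet sighting" event through
# its Dapr sidecar. Assumes a running sidecar and a pub/sub component named
# "pubsub"; the topic name and payload fields are illustrative.
import json
from dapr.clients import DaprClient

def publish_pet_sighting(image_url: str, location: str) -> None:
    with DaprClient() as client:
        client.publish_event(
            pubsub_name="pubsub",        # name of the configured pub/sub component
            topic_name="pet-sightings",  # illustrative topic name
            data=json.dumps({"image_url": image_url, "location": location}),
            data_content_type="application/json",
        )

if __name__ == "__main__":
    publish_pet_sighting("https://example.com/found-dog.jpg", "Central Park")
```

Because each sighting arrives as a message on a topic like this, KEDA can scale the consuming service up or down with the backlog, which is the event-driven autoscaling described above.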
Microsoft has also released a public preview of the Florence foundation model for their computer vision service. This model has undergone training with billions of text-image pairs and has been integrated into Azure Cognitive Service for Vision, enabling it to operate as a cost-effective, ready-for-production computer vision service.
The new features of Vision, which rely on the Florence model, can be tested by users through Vision Studio.
Developers can leverage the upgraded vision services to develop advanced, production-ready, ethical computer vision applications for diverse industries. The enhancement lets customers conveniently analyze their image and video content and connect it to natural language interactions, extracting valuable insights. This, in turn, facilitates accessibility, enhances search engine optimization (SEO), shields users from harmful content, improves security measures, and optimizes incident response times.
The vision service offers various features such as generating detailed captions, accessible alt-text, SEO, and intelligent photo curation for digital content. Additionally, it includes video summarisation, background replacement, and other features.
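For a sense of how the Florence-backed service is consumed, the following is a hedged sketch of requesting an image caption from Azure Cognitive Service for Vision over REST. The endpoint path, API version, and response fields are assumptions based on the Image Analysis preview and should be checked against the current documentation; the resource name and key are placeholders.

```python
# Hedged sketch of calling the Florence-powered caption feature through the
# Azure Cognitive Service for Vision REST API. The endpoint path, API version,
# and response fields are assumptions about the public preview; verify them
# against the current Image Analysis documentation before relying on them.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder resource
KEY = "<your-key>"                                                # placeholder key

def caption_image(image_url: str) -> str:
    response = requests.post(
        f"{ENDPOINT}/computervision/imageanalysis:analyze",
        params={"features": "caption", "api-version": "2023-02-01-preview"},
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["captionResult"]["text"]  # assumed response shape

print(caption_image("https://example.com/photo.jpg"))
```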
Focus on Machine Learning
The technology behemoth has introduced the public preview of foundation models within Azure Machine Learning. The service’s built-in capabilities empower users to build and run open-source foundation models at scale.
Azure Machine Learning users can start their data science projects effortlessly, fine-tuning and deploying foundation models obtained from various open-source repositories, starting with Hugging Face, via Azure Machine Learning components and pipelines. The service furnishes a comprehensive collection of popular open-source models through the built-in Azure Machine Learning registry, catering to tasks such as natural language processing, vision, and multi-modality.
Users can employ these pre-trained models directly for deployment and inferencing, while also being able to fine-tune them for supported machine learning tasks with their own data, or import other models directly from the open-source repository.
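As a rough illustration of that workflow, the sketch below uses the azure-ai-ml SDK to look up a pre-trained model in a built-in registry. The registry name "HuggingFace" and the model name are assumptions made for illustration; substitute the registry and model you actually intend to fine-tune or deploy.

```python
# Hedged sketch of pulling a pre-trained model from a built-in Azure Machine
# Learning registry with the azure-ai-ml SDK. The registry and model names
# are assumptions for illustration only.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Client scoped to a model registry rather than a workspace
registry_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="HuggingFace",  # assumed registry name
)

# Fetch the latest version of a pre-trained model to deploy or fine-tune
model = registry_client.models.get(name="bert-base-uncased", label="latest")  # illustrative model
print(model.name, model.version)
```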
Here are a few more highlights from the event showcasing recent innovations and Microsoft’s contribution to open source:
- This month, Microsoft will be introducing a new feature in the Azure Container for PyTorch (ACPT) – Nebula – that enables data scientists to save checkpoints faster than existing solutions for distributed large-scale model training jobs with PyTorch. In testing, Nebula achieved a 96.9% reduction in single checkpointing time when saving medium-sized Hugging Face GPT2-XL checkpoints. It can significantly reduce checkpoint times, potentially by 95 percent to 99.9 percent, thereby reducing end-to-end training time in large-scale training jobs. (A hedged sketch of where such a checkpoint call sits in a training loop follows this list.)
- Introduced new integrations with Azure Database for MySQL – Flexible Server and the Microsoft Power Platform, which simplify the development process and allow users to analyze data, automate processes, and build apps using low-code tools.
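Below is a hedged sketch of how asynchronous checkpointing slots into an ordinary PyTorch training loop. The `nebulaml` module name, the `init()` parameters, and the `Checkpoint` API are assumptions about Nebula’s preview interface in ACPT rather than details from the event material; the synchronous `torch.save` call it would replace is shown in a comment for comparison.

```python
# Hedged sketch: periodic checkpointing inside a plain PyTorch training loop.
# The nebulaml import, init() parameters, and Checkpoint API below are assumed
# (preview-era names); verify them against the ACPT/Nebula documentation.
import torch
import nebulaml as nm  # assumed module name shipped with ACPT

# Assumed initialisation: where to persist checkpoints and how often to flush
nm.init(persistent_storage_path="/outputs/nebula", persistent_time_interval=2)

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    loss = model(torch.randn(4, 10)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 10 == 0:
        # Synchronous baseline this would replace:
        # torch.save(model.state_dict(), f"ckpt_{step}.pt")
        ckpt = nm.Checkpoint()            # assumed API: fast, tiered async save
        ckpt.save(f"ckpt_{step}", model)
```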