IBM announced that it would host Meta’s Llama 2-chat 70 billion parameter model in the watsonx.ai studio, with early access available to select clients and partners. The move builds on IBM’s collaboration with Meta on open innovation for AI, which includes work on open source projects developed by Meta, such as the PyTorch machine learning framework and the Presto query engine used in watsonx.data.
Through this partnership, IBM aims to bolster its offering of both third-party and proprietary AI models. Clients utilizing watsonx.ai will soon gain access to the powerful Llama 2 model, further enriching their AI-driven applications.
Following the launch of Meta’s open source AI model, the enterprise software provider announced plans to introduce a range of supplementary software offerings, including AI tuning studios, fact sheets, and additional generative AI models.
Notably, the integration of Meta’s model aligns with IBM’s strategy of providing comprehensive support for natural language processing (NLP) tasks, including question answering, content generation, summarization, and text classification.
Through this collaborative approach, watsonx.ai users can already harness AI models from IBM and the Hugging Face community. These pre-trained models cater to diverse NLP needs and empower AI builders to deliver more effective and sophisticated applications.
With watsonx, IBM offers customers an AI development studio with access to IBM-curated and trained foundation and open source models, along with a data store for gathering and cleansing training and tuning data, and a toolkit for data and AI governance.
Llama 2, an advanced, commercially licensed iteration of Meta’s open source AI language model unveiled in July, is also available through Microsoft’s Azure cloud services. This strategic move positions Llama 2 as a rival to OpenAI’s ChatGPT and Google’s Bard in the emerging generative AI market.