Meta has rolled out a new software update for its AI-powered smart glasses, improving audio clarity in noisy environments through noise filtering and adding a multimodal AI music experience with Spotify.
The v21 update introduces “Conversation Focus,” a feature that amplifies a speaker’s voice in noisy environments. First announced at Meta Connect earlier this year, the feature uses open-ear speakers built into Ray-Ban Meta and Oakley Meta HSTN glasses to enhance speech clarity. The system selectively amplifies the voice of the person a user is speaking with, helping distinguish conversation from background noise in places such as busy restaurants, trains, or crowded events. Users can adjust amplification levels directly from the glasses or through device settings, depending on their surroundings.
The update also introduces Meta’s first multimodal AI music experience, built in partnership with Spotify. By combining on-device vision with Spotify’s personalisation engine, users can ask Meta AI to play music that matches what they are looking at, blending visual context with individual listening preferences to create moment-specific soundtracks.
In a move that strengthens its India-focused AI strategy, Meta has added Telugu and Kannada language support to Ray-Ban Meta and Oakley Meta HSTN glasses. The rollout enables fully hands-free interaction with Meta AI in two additional regional languages, making the devices more accessible and natural to use for millions of users across the country.
With this addition, Meta AI’s multilingual footprint in India now extends beyond English and Hindi, reflecting a broader push to localise AI-powered wearables for diverse linguistic communities.
The new features are rolling out gradually, starting with users enrolled in Meta’s Early Access Programme, with wider availability expected over time.