
Steg.AI’s Unique Watermarking Approach Overshadows Tech Giants

While Meta, Microsoft, Stability AI, Midjourney and others make their own efforts, Steg.AI has raced ahead with its sustained work on robust watermarking


In May 2023, the world was left shell-shocked by images of the Pentagon shrouded in smoke. Several news channels reported the incident based on these images, and even the stock market reacted, dropping briefly. The images later turned out to be AI-generated fakes.

Such instances highlight the challenge of identifying AI-generated content in various contexts. They also revive the larger discussion on deepfakes and fake images, which has been amplified by generative AI tools and their ability to produce hyperrealistic images from nothing more than a prompt.

All of this has governments across the globe scrambling for solutions and a way to safely regulate artificial intelligence. Prominent AI firms, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, have voluntarily committed to safety measures such as watermarking AI-generated content, but no solid implementation has materialised yet.

Current methods, such as naively encoding data into image or audio files, can be easily bypassed. What is needed is a robust, invisible watermark that is easy to apply and detect yet resistant to transformations, especially since studies suggest that humans struggle to differentiate between human-made and AI-generated content. With online IP theft rampant, the ability to prove where a piece of content originated is increasingly essential.
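To see why the naive approach is fragile, consider least-significant-bit (LSB) embedding, one of the simplest ways to hide data in an image (used here purely as an illustration, not as any particular vendor's method). A quick sketch in Python, assuming numpy and Pillow are available:

```python
# Illustrative only: naive least-significant-bit (LSB) embedding, a classic
# "encode data into the image" scheme, and how ordinary JPEG re-compression defeats it.
# The cover image and message below are synthetic stand-ins.
import io
import numpy as np
from PIL import Image

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first len(bits) pixel values."""
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    return pixels.flatten()[:n_bits] & 1

def roundtrip(image_array: np.ndarray, fmt: str, **save_kwargs) -> np.ndarray:
    """Save and re-load the image in the given format, as sharing a file would."""
    buf = io.BytesIO()
    Image.fromarray(image_array).save(buf, format=fmt, **save_kwargs)
    buf.seek(0)
    return np.asarray(Image.open(buf))

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)  # stand-in cover image
message = rng.integers(0, 2, size=256, dtype=np.uint8)            # 256-bit payload
marked = embed_lsb(cover, message)

# Lossless round trip (PNG): every payload bit survives.
png_bits = extract_lsb(roundtrip(marked, "PNG"), message.size)
print("PNG bit accuracy:", (png_bits == message).mean())          # 1.0

# Ordinary JPEG compression scrambles the LSB plane; recovery drops to roughly chance.
jpg_bits = extract_lsb(roundtrip(marked, "JPEG", quality=85), message.size)
print("JPEG bit accuracy:", (jpg_bits == message).mean())         # ~0.5
```

A robust watermark has to survive exactly this kind of re-encoding, which is the gap learned approaches like Steg.AI's aim to close.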

Meanwhile, Steg.AI, a California-based startup, has developed a deep learning-based solution that embeds nearly imperceptible watermarks into digital content. Even if the images are altered, compressed, or otherwise manipulated, the Steg.AI watermark remains intact. Remarkably resilient, these watermarks can even be captured with an iPhone camera when the content is displayed on a screen or printed.

Steg.AI’s watermarking solution finds applications in diverse scenarios, such as stock photography services, content sharing on platforms like Instagram, pre-release copies of films, and safeguarding confidential documents. Early iterations of their product faced challenges, leading to a shift in focus towards robustness, a standout feature that resonated with customers.

How it Works

Steg.AI’s core concept involves seamlessly integrating watermarks into AI-generated images before distribution. While the specifics of the process remain proprietary, the basic idea revolves around a pair of machine-learning models: one customises the watermark’s placement within the image so that it stays imperceptible to the human eye, while the other acts as a decoder that detects and reads the watermark back out.
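Steg.AI has not published its models, but learned watermarking schemes in the academic literature typically pair an encoder network that hides a bit string in an image with a decoder network that recovers it. The PyTorch sketch below shows only that general structure; every layer size, scale factor, and name is invented for illustration and is not Steg.AI’s design:

```python
# Hypothetical sketch of a learned watermark encoder/decoder pair; the architecture,
# sizes, and perturbation scale are invented for illustration, not Steg.AI's.
import torch
import torch.nn as nn

class WatermarkEncoder(nn.Module):
    """Takes an image and a message, returns a visually similar watermarked image."""
    def __init__(self, msg_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_bits, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor, message: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        # Broadcast the message to every pixel location so the conv stack can decide
        # where in the image each bit is least visible.
        msg_map = message.view(b, -1, 1, 1).expand(b, message.shape[1], h, w)
        residual = self.net(torch.cat([image, msg_map], dim=1))
        return image + 0.01 * residual  # small perturbation keeps the mark imperceptible

class WatermarkDecoder(nn.Module):
    """Recovers the hidden message from a (possibly transformed) image."""
    def __init__(self, msg_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_bits),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)  # logits; > 0 is read as bit 1

# Untrained here; in practice both networks are trained jointly.
encoder, decoder = WatermarkEncoder(), WatermarkDecoder()
image = torch.rand(1, 3, 256, 256)
message = torch.randint(0, 2, (1, 64)).float()
recovered_bits = (decoder(encoder(image, message)) > 0).float()
```

In schemes like this, simulated distortions (compression, cropping, screen capture) are applied between the encoder and decoder during training, which is what lets the recovered bits survive real-world transformations.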

Analogous to an invisible, mostly unchangeable QR code, this method can potentially hold kilobytes of data: enough for URLs, hashes, and plaintext information. Each page of a multi-page document, or each video frame, could carry a distinct code, multiplying the total capacity.
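As a rough sense of scale, with illustrative numbers only (the roughly 1 KB per-frame payload below is an assumption based on the “kilobytes of data” description, not a published figure):

```python
# Back-of-the-envelope capacity check; all figures are assumptions for illustration.
import hashlib

payload_bytes = 1024                                    # assumed recoverable payload per image/frame
url = "https://example.com/assets/abc123?licensee=42"   # hypothetical provenance URL
digest = hashlib.sha256(b"original asset bytes").digest()

print(len(url.encode("utf-8")), "bytes for the URL")    # ~45 bytes
print(len(digest), "bytes for a SHA-256 hash")          # 32 bytes
print(payload_bytes // (len(url.encode("utf-8")) + len(digest)), "such records fit per frame")

frames = 24 * 60 * 2                                    # two minutes of 24 fps video
print(frames * payload_bytes // 1024, "KB of distinct codes across the clip")
```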

The company’s work can be traced back to a 2019 CVPR paper and has been backed by Phase I and II SBIR government grants. Co-founders Eric Wengrowski and Kristin Dana, both previously involved in academic research, have spent years refining their approach.

Steg.AI’s progress has been supported by NSF grants and angel investments totaling $1.2 million. Recently, the company announced a significant milestone with a $5 million seed funding round led by Paladin Capital Group, accompanied by participation from Washington Square Angels, NYU Innovation Venture Fund, and individual angel investors.

What Big Techs are Doing

Major tech companies have taken steps to incorporate watermarking into their content. Microsoft announced at its annual Build conference that it is adding new media provenance features to Bing Image Creator and Designer, enabling users to verify whether images and videos were AI-generated. The approach uses cryptographic methods to mark and sign content with metadata indicating its origin. For this to work, websites must adopt the Coalition for Content Provenance and Authenticity (C2PA) specification, developed together with Adobe, Arm, Intel, Microsoft, and Truepic.
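C2PA defines its own manifest format and certificate requirements, but the underlying idea is to hash the content and cryptographically sign provenance metadata bound to that hash. The sketch below illustrates that idea generically with the Python `cryptography` package; the manifest fields, key handling, and file contents are simplified placeholders, not the actual C2PA data structures:

```python
# Generic hash-and-sign provenance illustration; NOT the real C2PA manifest format.
# Requires the 'cryptography' package; fields and values are made up for the example.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()   # C2PA would use a certificate-backed key

image_bytes = b"placeholder image bytes"             # in practice, the raw file contents of the generated image
manifest = {
    "claim_generator": "ExampleImageCreator/1.0",    # hypothetical tool that produced the asset
    "assertions": {"ai_generated": True},            # provenance assertion
    "content_hash": hashlib.sha256(image_bytes).hexdigest(),
}
payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
signature = signing_key.sign(payload)                # binds the metadata to the exact content

# A verifier with the matching public key recomputes the hash and checks the signature;
# any edit to the image or the metadata makes verification raise InvalidSignature.
signing_key.public_key().verify(signature, payload)
```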

However, the impact of Microsoft’s efforts depends on broader adoption of media provenance standards, with companies like Stability AI and Google also exploring similar approaches. Shutterstock and Midjourney have adopted guidelines to embed markers indicating that content was created with generative AI.

On the other hand, collaborative research involving Meta AI, the Centre Inria de l’Université de Rennes, and Sorbonne University has produced a technique that incorporates watermarking into the image generation process itself while preserving the model architecture. The method modifies pre-trained generative models so that the images they produce already carry a watermark, improving security and computational efficiency. It also lets model providers distribute versions of their models with distinct watermarks for different user groups, making it easier to monitor ethical usage.

The technique is valuable for media organisations looking to identify computer-generated images. Working with Latent Diffusion Models (LDMs), the researchers integrated watermarks with minimal adjustments to the generative model. The process involves fine-tuning the LDM decoder using a perceptual image loss together with a hidden-message loss from a streamlined deep watermarking method called HiDDeN. The resulting watermark withstands image edits, including heavy cropping, while preserving the original model’s utility across various LDM-based tasks.
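Conceptually, only the LDM decoder is fine-tuned: a frozen HiDDeN-style message extractor pushes the decoder’s outputs to carry a fixed bit string, while a perceptual term keeps those outputs close to what the original decoder would have produced. The PyTorch sketch below shows that combined objective with placeholder modules standing in for the real pre-trained decoder and extractor, and with illustrative loss weights rather than the paper’s values:

```python
# Sketch of the fine-tuning objective described above; 'ldm_decoder' and
# 'message_extractor' are toy placeholders for the real pre-trained networks.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

msg_bits = 48
target_message = torch.randint(0, 2, (1, msg_bits)).float()    # the user-group-specific watermark

# Placeholder stand-ins: a "decoder" mapping latents to images and a frozen bit extractor.
ldm_decoder = nn.Sequential(nn.ConvTranspose2d(4, 3, 8, stride=8))   # latent (4,32,32) -> image (3,256,256)
original_decoder = copy.deepcopy(ldm_decoder).eval()                 # frozen reference for the image loss
message_extractor = nn.Sequential(                                   # frozen HiDDeN-style extractor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_bits),
).eval()
for p in list(original_decoder.parameters()) + list(message_extractor.parameters()):
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(ldm_decoder.parameters(), lr=1e-4)

for step in range(100):                      # toy loop; real training iterates over a dataset of latents
    latents = torch.randn(1, 4, 32, 32)
    watermarked = ldm_decoder(latents)
    with torch.no_grad():
        reference = original_decoder(latents)

    # Hidden-message loss: the frozen extractor should read the target bits from the output.
    msg_logits = message_extractor(watermarked)
    message_loss = F.binary_cross_entropy_with_logits(msg_logits, target_message)

    # Perceptual/image loss: stay close to what the original decoder would have produced
    # (the paper uses a learned perceptual metric; plain MSE stands in here).
    image_loss = F.mse_loss(watermarked, reference)

    loss = message_loss + 0.1 * image_loss   # illustrative weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because each user group can be assigned a different target message, a provider can later tell which distributed copy of the model produced a given image.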

The voluntary commitments made by the seven big tech firms late last month support the Biden administration’s push to regulate the booming AI technology. The US Congress is also reviewing a bill that would mandate the disclosure of AI involvement in creating political ads.


Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.