Adobe Teams Up with Ethical Hackers for AI Tool Security

Adobe Teams Up with Ethical Hackers for AI Tool Security

As we continue to integrate generative AI into our daily lives, it’s important to understand and mitigate the potential risks that can arise from its use. Our ongoing commitment to advancing safe, secure, and trustworthy AI includes transparency about the capabilities and limitations of large language models (LLMs).

Adobe has long focused on establishing a strong foundation of cybersecurity, built on a culture of collaboration, enabled by talented professionals, strong partnerships, leading edge capabilities, and deep engineering prowess. We prioritize research and collaborating with the broader industry on preventing risks by responsibly developing and deploying AI.

We have been actively engaged with partners, standards organizations, and security researchers for many years to collectively enhance the security of our products. We receive reports directly and through our presence on the HackerOne platform and are continually looking at ways to further engage with the community and open feedback to enhance our products and innovate responsibly.

Commitment to responsible AI innovation
Today, we’re announcing the expansion of the Adobe bug bounty program to reward security researchers for discovering and responsibly disclosing bugs specific to our implementation of Content Credentials and Adobe Firefly. By fostering an open dialogue, we aim to encourage fresh ideas and perspectives while providing transparency and building trust.

Content Credentials are built on the C2PA open standard and serve as tamper-evident metadata that can be attached to digital content to provide transparency about their creation and editing process. Content Credentials are currently integrated across popular Adobe applications such as Adobe Firefly, Photoshop, Lightroom and more. We are crowdsourcing security testing efforts for Content Credentials to reinforce the resilience of Adobe’s implementation against traditional risks and unique considerations that come with the provenance tool, such as the potential for intentional abuse of Content Credentials by incorrectly attaching them to the wrong asset.

Adobe Firefly is a family of creative generative AI models available as a standalone web application at as well as through features powered by Firefly in Adobe flagship applications. We encourage security researchers to review the OWASP Top 10 for Large Language Models, such as prompt injection, sensitive information disclosure, or training data poisoning to help focus their research efforts to pinpoint weaknesses in these AI-powered solutions.

𝐒𝐭𝐚𝐲 𝐢𝐧𝐟𝐨𝐫𝐦𝐞𝐝 𝐰𝐢𝐭𝐡 𝐨𝐮𝐫 𝐥𝐚𝐭𝐞𝐬𝐭 𝐮𝐩𝐝𝐚𝐭𝐞𝐬 𝐛𝐲 𝐣𝐨𝐢𝐧𝐢𝐧𝐠 𝐭𝐡𝐞 WhatsApp Channel now! 👈📲

𝑭𝒐𝒍𝒍𝒐𝒘 𝑶𝒖𝒓 𝑺𝒐𝒄𝒊𝒂𝒍 𝑴𝒆𝒅𝒊𝒂 𝑷𝒂𝒈𝒆𝐬 👉 FacebookLinkedInTwitterInstagram

Related Stories

No stories found.