Adobe Previews New Gen AI Tools for Crafting and Editing Custom Audio

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI tool for music generation and editing, Project Music GenAI Control allows creators to generate music from text prompts and then edit that audio with fine-grained control to meet their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.

Adobe has a decade-long legacy of AI innovation, and Firefly, Adobe’s family of generative AI models, has become the most popular AI image generation model designed for safe commercial use, reaching that milestone in record time globally. Firefly has been used to generate over 6 billion images to date. Adobe is committed to ensuring its technology is developed in line with its AI ethics principles of accountability, responsibility, and transparency. All content generated with Firefly automatically includes Content Credentials, which act as “nutrition labels” for digital content and remain associated with that content wherever it is used, published, or stored.

The new tools begin with a text prompt fed into a generative AI model, a method that Adobe already uses in Firefly. A user inputs a text prompt, like “powerful rock,” “happy dance,” or “sad jazz,” to generate music. Once the tools generate music, fine-grained editing is integrated directly into the workflow.
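Adobe has not published an API for Project Music GenAI Control, so the sketch below illustrates the general text-prompt-to-music workflow using a different, openly available model: Meta’s MusicGen, accessed through the Hugging Face transformers library. The model checkpoint, prompts, token count, and output file name are illustrative choices, not details of Adobe’s tool.

```python
# Illustrative only: Project Music GenAI Control's API is not public.
# This uses the open MusicGen model to show text-to-music generation.
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Text prompts like those in the article ("powerful rock", "sad jazz").
inputs = processor(
    text=["powerful rock", "sad jazz"],
    padding=True,
    return_tensors="pt",
)

# ~256 audio tokens correspond to roughly five seconds of audio.
audio_values = model.generate(**inputs, do_sample=True, max_new_tokens=256)

# Write the first generated clip to disk as a WAV file.
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write(
    "powerful_rock.wav",
    rate=sampling_rate,
    data=audio_values[0, 0].numpy(),
)
```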

With a simple user interface, users could transform their generated audio based on a reference melody; adjust the tempo, structure, and repeating patterns of a piece of music; choose when to increase and decrease the audio’s intensity; extend the length of a clip; re-mix a section; or generate a seamlessly repeatable loop.
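Adobe hasn’t detailed how these controls are implemented, but one of them, generating a seamlessly repeatable loop, can be approximated with conventional audio tooling. Below is a rough sketch using the pydub library and a standard crossfade-loop trick; the input file and crossfade length are assumptions for illustration.

```python
# Assumed input: a WAV clip such as the one generated above.
from pydub import AudioSegment

clip = AudioSegment.from_wav("powerful_rock.wav")

fade_ms = 500  # crossfade length in milliseconds; tune to taste

# Crossfade the clip's tail into its own head: the loop then ends on a
# blend that leads straight back into the material at the loop's start,
# so repeated playback has no audible seam.
loop = clip[fade_ms:].append(clip[:fade_ms], crossfade=fade_ms)

# Preview three repetitions to check that the seams are inaudible.
preview = loop * 3
preview.export("rock_loop_x3.wav", format="wav")
```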

Instead of manually cutting existing music to make intros, outros, and background audio, Project Music GenAI Control could help users create exactly the pieces they need—solving workflow pain points end-to-end.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music,” explains Bryan.

Project Music GenAI Control is being developed in collaboration with colleagues at the University of California, San Diego, including Zachary Novack, Julian McAuley, and Taylor Berg-Kirkpatrick, and colleagues at the School of Computer Science, Carnegie Mellon University, including Shih-Lun Wu, Chris Donahue, and Shinji Watanabe.
