Meta’s latest suite of generative AI tools lets users turn text prompts into music and audio compositions, rivaling a similar tool Google released earlier this year.
Meta, the parent company of Facebook and Instagram, launched a suite of generative artificial intelligence (AI) models on Aug. 2 called AudioCraft for music creation from various inputs, according to a blog post.
Included in the suite are MusicGen and AudioGen, which generate new audio from text-based prompts, along with a third model called EnCodec that “allows for higher quality music generation with fewer artifacts.”
In the announcement, Meta mentioned that its MusicGen model was trained with music it owns or “specifically licensed.”
This comes amid major controversy surrounding training AI with copyrighted work across many artistic fields, including a lawsuit against Meta for copyright infringement during AI training.
Meta has made MusicGen and AudioGen available in several sizes to the “research community” and developers. It said that as it develops more advanced controls, it envisions the models becoming useful to both amateurs and professionals in the music industry.
In a recent interview with Cointelegraph, Recording Academy CEO Harvey Mason Jr. likened the emergence of AI-generated music to the early days of synthesizers entering the music scene.
Meta’s release of its generative AI music tools comes shortly after Google launched a similar text-to-music tool called MusicLM.
In May, Google announced that it was accepting “early testers” for the product via its AI Test Kitchen platform.
Meta has been actively releasing new AI tools alongside many other tech giants, including Google and Microsoft, in a race to develop and deploy the most powerful models.
On Aug. 1, Meta announced the launch of new AI chatbots with personalities, which users on its platforms can use as search helpers and as a “fun product to play with.”