Artificial Intelligence (AI) Archives - Digital Music News https://www.digitalmusicnews.com/category/artificial-intelligence-ai/ The authority for music industry professionals.

GEMA Unveils ‘AI Charter’ Amid Continued Regulatory Push — 10 ‘Ethical and Legal Principles for Dealing With Generative Artificial Intelligence’ https://www.digitalmusicnews.com/2024/11/05/gema-ai-principles/ (Wed, 06 Nov 2024)

GEMA has officially published a collection of 10 AI principles. Photo Credit: Markus Spiske

About two weeks after disclosing new details about its licensing framework for generative AI, GEMA has now unveiled an official “AI charter.”

The Berlin-based society reached out with that 10-principle charter today. Previously, September saw GEMA outline an ambitious royalties framework for music created via generative AI – complete with derivative-track compensation for professionals whose works trained the underlying models.

That set the stage for the initially mentioned late-October details as well as the just-published AI charter. Importantly, said charter isn’t an in-depth collection of detail-oriented policy proposals. By GEMA’s own description, the concise resource “shall serve as food for thought and provide guidelines for a responsible use of generative AI.”

Running with the point, at least a few of the principles reiterate general ideas. “Generative AI is obligated to the well-being of people,” the first principle reads, with the second underscoring that “intellectual property rights are protected” notwithstanding the unprecedented technology at hand.

But the third principle explores in relative detail what’s perhaps the most significant element of GEMA’s generative AI compensation model. Not limiting its vision to once-off training payments, the entity believes creators and rightsholders should receive a piece of derivative tracks’ royalties and long-term revenue to boot.

“On the contrary,” this section reads in part, “the economic advantages must be considered which arise through AI content being generated (e.g. income from subscriptions) and are achieved in the market through ensuing exploitation (e.g. as background music or AI generated music on music platforms on the internet).

“In addition, the competitive situation with the works created by people must be taken into consideration. After all, these very works made AI content possible in the first place. This also must apply in cases where synthetic data was used for training the AI. Synthetic data are, in turn, based on works created by people whose creative power continues in such content when generating AI music,” the relevant text proceeds.

The remaining principles touch on well-treaded (albeit meaningful) areas including the need for generative AI training transparency and NIL protections against unauthorized soundalike works. (In the States, many in the industry are advocating for the related NO FAKES Act, which arrived in Congress earlier in 2024 and would establish a federal right of publicity.)

Plus, bearing in mind the ongoing implementation of the sweeping AI Act and the European Union’s unique regulatory environment, any company offering “AI systems that will be rolled out in the EU or that affect people in the EU must stick to the EU regulations,” another principle emphasizes.

Lastly, in terms of brass-tacks takeaways, sizable AI players must engage in “collective negotiations” with rightsholders, per GEMA. “The large digital corporations must find their way back to respecting copyright,” the “negotiations at eye level” principle states.

While it perhaps goes without saying, outlining plans to secure rightsholder compensation from generative AI is only the first of several involved steps. But especially because multiple artificial intelligence companies are adamant that training on protected materials constitutes fair use, it’ll be worth closely monitoring the effort – besides different regulatory pushes and ongoing litigation.

Area4Labs’ Hearby Cares Initiative Highlights Relief Concerts Across America https://www.digitalmusicnews.com/2024/11/01/hearby-cares-relief-concerts-highlighted/ (Fri, 01 Nov 2024)

Photo Credit: Hearby Cares

Area4Labs’ Hearby Cares is a new AI-powered initiative to highlight live benefit shows providing relief for areas affected by Hurricanes Helene and Milton.

The team at Area4Labs says they created the initiative to provide a comprehensive list and map of benefit shows, maximizing awareness and attendance for these recovery-effort concerts. Hearby Cares is a team-wide, collaborative effort to collect event data and showcase more than 100 benefit shows across the United States.

The existing Hearby venue and event database helped the team fine-tune their data collection practices to identify benefit shows. Shows highlighted by Hearby Cares vary greatly in size, from small listening rooms to stadiums holding 70,000+ people. Some benefit shows have raised millions of dollars for hurricane relief, while others provide necessary supplies to help victims of these natural disasters get back on their feet. Area4Labs says they view this project as a way to filter and present event data for communities that deserve extra attention.

“Across the Hearby team, we are at our core a group of musicians and music fans with a deep love for local music scenes. Hearby Cares is an initiative born from our appreciation of the Appalachian music scene, from North Carolina through Tennessee and beyond, and our want to help that region after the devastation brought about by recent hurricanes and flooding,” the Area4Labs team told Digital Music News.

“Through Hearby Cares, we have been able to cover over 100 benefit shows (and counting) across the United States that all have a mission to help those in need. Looking ahead, we see tremendous value in Hearby Cares and our ability to quickly react to social and cultural movements, and to highlight the events surrounding them.”

Several high-profile artists have stepped up to help these regions struck by hurricanes. Morgan Wallen donated $500,000 to the Red Cross for relief, while Dolly Parton donated $1 million to recovery efforts. North Carolina natives Luke Combs and Eric Church recently led a benefit ‘Concert for Carolina’ that managed to raise $24.5 million for relief efforts—particularly around Asheville.

Glassnote Records Toes the Waters With ‘Ethically-Trained AI’ Platform Hook https://www.digitalmusicnews.com/2024/10/31/glassnote-records-toes-the-waters-with-ai-platform-hook/ (Thu, 31 Oct 2024)

Photo Credit: Glassnote Records (Daniel Glass, Founder & President)

AI-powered social music app Hook has announced a partnership with Glassnote Records to add tracks from emerging acts including Tors, bby, Hayes Warner, and Dylan Cartlidge to its library. More acts from Glassnote will be added soon.

Glassnote’s roster includes chart-topping acts like Mumford & Sons, Childish Gambino, Phoenix, Two Door Cinema Club, GROUPLOVE, Silvana Estrada, CHVRCHES, AURORA, Jade Bird, Hamilton Leithauser, GRACEY, The Teskey Brothers, and more. Hook describes itself as an ‘ethically-trained AI’ social mash-up app which launched on iOS last month.

Hook lets everyday music fans use cutting-edge, ethically trained AI to become collaborators in the future of music listening, expressing themselves by creating sped-up, slowed-down, mashed-up, or genre-swapped versions of songs. Sped-up and slowed-down remixes have exploded in popularity on TikTok, inspiring the creation of Hook.

“Hook also opens up a new revenue stream for artists and rights holders, enabling them to monetize the infinite derivative versions their fans create and consume on Hook and social platforms like TikTok and Instagram,” the announcement reads.

“Glassnote Records is proud to embrace progressive, open-minded, and forward-thinking ideas,” says Daniel Glass, Founder & President of Glassnote Music. “It has always been our goal to run toward what’s next, focusing on the potential of new technologies while recognizing artists’ creative integrity as the number one priority and something to be strongly protected.”

“We believe Hook does just that—providing a comprehensive solution to the use of remixed music across social platforms in a way that emphasizes artists’ control and compensation.”

As music consumption and discovery continue to shift to social media platforms, Hook provides artists and rights holders with granular controls over how fans can interact with their licensed music on social media. The app also provides data about how their remixes are performing globally, bringing a consumption-based model to social media while paying artists more as fans create remixes.

Hook Founder & CEO Gaurav Sharma previously served as Chief Operating Officer for JioSaavn, India’s largest music streaming platform. JioSaavn was also one of the first platforms to secure global streaming licenses with record labels. Sharma and his team grew JioSaavn to more than 100 million MAUs before he departed.

How to Train Your AI Chat Dragon: Hearby Uses Chat Technology to Help Fans Find Grassroots Music They’ll Love https://www.digitalmusicnews.com/2024/10/30/hearby-chat-technology-ai-find-music-concerts/ (Thu, 31 Oct 2024)

Photo Credit: AI

ChatGPT is impressive out-of-the-box but challenging to apply to real-world problems. Area4 Labs and Hearby are building with AI technology to create a data-driven live event concierge.

The following comes from Hearby, a fast-emerging player in concert discovery and a DMN partner. Enjoy!

Full disclosure: “Train” when applied to Chat technology is the same “train” that we might apply to cats. That is, we ask them to do things they were going to do anyway in a way that doesn’t displease them, and then we figure out how to be happy with what they did.

This has been our biggest lesson in creating “Ask Hearby,” our AI chatbot music concierge. In this article, I’ll bring you behind the scenes on our AI adventure.

At Hearby, we aim to use technology to find and uplift grassroots music and help people find the wonderful music hidden right in their neighborhoods.  Whether you’re looking for a night of clubbing, a free classical concert, or music to keep the kid out of your hair, it’s all out there.  You may not realize there’s a great music venue right in the industrial park next door, on the dockyard of Liverpool, or in a thrift shop in London.

We want to get people exploring and finding music that they’ll love, and to do this, we spent a lot of time investing in fast search technologies, data-driven filters, and map visualizations. Then we ran right into the wall of ‘Too Much Stuff.’

Enter the chatbot, which allows fans to fast-forward and say what they want without going through all the tedious steps of searching, filtering, and reviewing results. It’s a lot of work for something that should be fun.

However, my experience with chatbots has been a big meh, and we wanted to do something more intriguing.

Our main product requirement was “be useful and don’t be irritating”.  Yes, it took us a long time to get that, and we fell off the dragon a few too many times. But seeing how this ubiquitous technology works and encouraging more ideas and dreams with it has been interesting.  At its core, it’s a foray into using a Large Language Model (LLM), and I will breathlessly say the possibilities are unlimited.

It took a while, but after several tries, we finally have something useful and entertaining to use. So here’s the behind-the-scenes on what we tried that didn’t work — and what finally did.

    • Train, train, and train again.
    • Give me all the data!  More data!
    • Hybrids: Just how many technologies can we cram in here?
    • It’s a sandwich.

So first, a little more about Training when it comes to Machine Learning.

I need to bring up the topic of training, partly for my snazzy title but also because it’s at the bottom of everything you’re hearing about AI.

To train an ML model, we first choose a neural net architecture, then give it a vast set of data items labeled with the correct answers (for example, Cat/Dog, T-shirt/Skirt, Pedestrian/Bollard). This type of supervised learning is expensive in computing power, requiring a huge amount of ethically obtained, accurately labeled data. Training enables the ML architecture – the layers and feedback loops that make up the neural net – to adjust until it produces maximally accurate predictions. For example: “99% chance this image is a cat”.
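To make that concrete, here's a minimal, purely illustrative sketch of a supervised training loop in PyTorch; the synthetic data and two-class labels stand in for the cat/dog example and aren't anything Hearby actually trains.

```python
# A minimal supervised-learning sketch (illustrative only): a tiny neural net
# learns to predict a 0/1 label ("cat"/"dog") from made-up feature vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 4)                      # 200 labeled examples, 4 features each
y = (X[:, 0] + X[:, 1] > 0).long()           # synthetic "correct answers"

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):                     # training: adjust weights to improve predictions
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                          # the "feedback" that nudges the layers
    optimizer.step()

probs = torch.softmax(model(X[:1]), dim=1)   # e.g. "99% chance this image is a cat"
print(f"P(class 1) = {probs[0, 1].item():.2f}")
```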

Going beyond cat/dog to something actually relevant quickly gets expensive and time-consuming. It’s pretty much prohibitive on large data sets for all but the biggest players.  Enter LLMs, which come ready-trained on massive amounts of human text right out-of-the-box for anyone to use.

This is what powers our chat dragon: the ability to “understand” human language, figure out what is being asked, and create amazing responses in human language.  On the topic of whether there is any actual human-style understanding of concepts, I can start an argument in an empty room (so I won’t go there).  It doesn’t matter for our purposes as long as the output is accurate, useful, valuable, safe, and reliable.

This brings me to our challenge: how to make already trained chat technology do what we want.

For a small amount of money and a lot of delight, you can get a subscription to OpenAI’s ChatGPT, which will happily write you a letter to Grandma, your term paper, or a pretty decent novel – at least better than anything I can write. Whether soulless or best-selling is in the eye of the beholder, but I prefer to consider it a fantastic tool to help spur creativity.

But as impressive as this is, these out-of-the-box answers are standalone, and the type of chatbot we wanted to create is a conversation that builds as we go along, with context and informality, powered by accurate event, venue, and band data.  The challenge, then, is how to get a language-based model to incorporate this external data and use it in its responses and how to have the conversation build as it progresses (memory).

Data! Give me all the data!

The challenge is getting our data into ChatGPT to inform its responses.  In a “normal” program, this is a matter of, well, programming.  However, an LLM is different: rather than programming, information needs to be text-based to be taken on board.

It’s weird, but not so much when we remember this is a language model. This is precisely how we listen, take in new info, understand it, and use it to inform our actions.  In all fairness, the latest models also allow other forms of input, expanding beyond text input. But this is where it was when we started, so that’s where we began.

We started with text-to-SQL, in which we describe in words how to find the answers to questions using the tables in our database. So, essentially, telling the model – as you would tell a programmer – how to formulate database queries. This sounded so crazy and improbable that we thought it just might actually work.
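As a rough illustration of the idea (not Hearby's actual prompt, schema, or code), a text-to-SQL attempt with the OpenAI Python client might look something like this, with a made-up events/venues schema:

```python
# Illustrative text-to-SQL sketch: describe the schema in words and ask the
# model to write the query. Table and column names here are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA_DESCRIPTION = """
Table events(id, name, venue_id, genre, starts_at)
Table venues(id, name, neighborhood, city)
"""

question = "What jazz shows are on in London this weekend?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"Write a single SQL query for this schema:\n{SCHEMA_DESCRIPTION}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)  # hopefully valid SQL -- sometimes it sulks instead
```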

Sometimes, it did, but mostly, it sulked, made stuff up, or ignored us.  Or all of the above.  If you’re thinking cat again, I’m right there with you.

Bring in the hybrids.

So, we moved on to a hybrid approach: searching our own database while using ChatGPT for its language capabilities. Among the many challenges:

(1) Knowing what the fan is asking about – an event? A venue? A neighborhood? A genre? A person?

(2) Finding the data in our database with a fuzzy search – the whole point of chatting is that the fan doesn’t have to be specific.

(3) Getting the data into ChatGPT in words, which is all it understands.

(4) Receiving a human-ready answer from ChatGPT.

(5) Augmenting that answer with links and images.

We quickly realized we needed to confirm it was using our data and not going elsewhere, which in LLM terms means keeping the temperature setting (the dial for how much the model improvises) turned way down. Or, in human terms, don’t make stuff up!

It’s a sandwich

After a number of tries, we ended up with a workable sandwich of technologies: BERT NER to understand what the fan is asking about; specialized models to detect essential but idiosyncratic info like informal dates (“in 3 weeks”); a vector database to translate a fuzzy human question into something specific we can ask our existing search capability; a layer to feed the search answer to ChatGPT in words, and then a method to receive the ChatGPT response in human language. And, finally, a layer to augment it with images and links.
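To make the sandwich a little more concrete, here is a heavily simplified, hypothetical sketch of that pipeline. It assumes the Hugging Face transformers and OpenAI Python libraries; the in-memory event list and fuzzy_search helper stand in for the real vector database and search layer, and none of it is Hearby's production code.

```python
# A simplified sketch of the "sandwich": NER -> fuzzy search -> LLM answer.
# The event data and fuzzy_search() stand in for the real vector database.
from transformers import pipeline
from openai import OpenAI

ner = pipeline("ner", aggregation_strategy="simple")  # layer 1: what is the fan asking about?
client = OpenAI()

EVENTS = [
    {"name": "Dockyard Jazz Brunch", "city": "Liverpool", "genre": "jazz"},
    {"name": "Thrift Shop Sessions", "city": "London", "genre": "folk"},
]

def fuzzy_search(text: str) -> list[dict]:
    """Layer 2 stand-in: match events on any word in the question or entities."""
    terms = text.lower().split()
    return [e for e in EVENTS if any(t in (e["city"] + " " + e["genre"]).lower() for t in terms)]

def ask_hearby(question: str) -> str:
    entities = " ".join(e["word"] for e in ner(question))   # layer 1: entities (places, genres, names)
    hits = fuzzy_search(question + " " + entities)          # layer 2: find the matching data
    context = "\n".join(str(e) for e in hits)               # layer 3: hand it to the LLM as words
    response = client.chat.completions.create(              # layers 4-5: get a human-ready answer back
        model="gpt-4o-mini",
        temperature=0,  # keep it on our data -- "don't make stuff up"
        messages=[
            {"role": "system", "content": "Answer only from the event data provided."},
            {"role": "user", "content": f"Events:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content  # a final layer would add links and images

print(ask_hearby("Where can I take Aunt Nelly for a jazz brunch?"))
```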

Voila!  If this all sounds like a bit much, I get you.  But we were delighted to see that a fan can ask a reasonable question, “What’s on in London tonight?” or “Where can I take Aunt Nelly for a jazz brunch?” and get a believable answer that makes sense.

More interesting is that a fan can ask an unreasonable question and get an answer about music events or venues, and an explanation as to why, or, if it’s too far a stretch, simply a reasonable on-topic answer.  And, to put your mind at ease somewhat, some questions bring in the guard rails: “I cannot assist you with that”.

Onward!

In addition to our chatbot launching later this year, we are working on several other AI efforts, mainly in Machine Learning and classification. These are focused on highlighting the music scene for fans and encouraging them to explore and find new music and venues.  Off their sofas and into venues!

The chatbot has been a very interesting excursion for us into LLMs, which have enormous potential to change how we live with software. So, I hope this has shone a little light on this powerful technology for you.

We’re focused on music and using these incredible tools to uplift grassroots music. Still, I hope this gave you some ideas on how this kind of technology might help in your part of the music world – places where you want people to be able to get to the point faster, have informal access to better information, or be able to explore and expand on an idea on the fly.

IFPI, IMPALA, GESAC, 20+ European Orgs Call for ‘Meaningful Implementation of the AI Act’ https://www.digitalmusicnews.com/2024/10/30/gesac-ifpi-impala-statement-on-ai-act-eu/ (Wed, 30 Oct 2024)

Photo Credit: Possessed Photography

More than two dozen organizations representing creators and rights holders have signed a joint statement seeking ‘meaningful implementation of the AI Act.’

Organizations including GESAC, IMPALA, and IFPI want creators and rights holders to be able to enforce their rights when AI models ingest and copy copyright-protected works for training. The open letter is addressed to the European Parliament in support of the AI Act.

“The AI Act is a pioneering model of ethical and responsible AI regulation that sets the basis for best practice at global level,” the joint statement begins. “If implemented and applied effectively it will foster an environment in the EU where AI innovation can develop in an ethical and accountable way alongside flourishing cultural and creative industries across the EU.”

The EU Artificial Intelligence Act aims to set a global standard for AI regulation, much the same way the GDPR did for consumer data privacy. An artificial intelligence system is defined in the AI Act as any machine-based system that operates autonomously to generate outputs such as predictions, recommendations, or decisions affecting physical or virtual environments. The broad definition, inspired by OECD guidelines, is designed to be future-proof and cover a wide range of technologies, from generative AI and deep learning to more conventional data analysis techniques.

Creator societies and rights holder organizations say they’re dealing with a seriously detrimental situation of generative AI companies taking content without authorization “on an industrial scale.” This open letter states, “Their actions result in illegal commercial gains and unfair competitive advantages for their AI models, services, and products—in violation of European copyright laws.”

“The implementation and application of the EU’s new AI Act provides a crucial opportunity to address such malpractices and ensure accountability in the AI industry. It should aim at achieving a healthy and sustainable licensing market that encourages responsible AI innovation and complies with core principles of fair market competition and remuneration for creators and rights holders, while effectively preventing unauthorized uses of their works.”

“To achieve this, the rules provided in the AI Act—from the obligation for general purpose AI model providers to make publicly available a sufficiently detailed summary of the content used for training of their models to the obligation for such providers to demonstrate that they have put in place policies to respect EU copyright law—must be made meaningful.”

“As is made clear under the AI Act, these measures should enable creators and rights holders to exercise and enforce their rights when it comes to ingesting and copying copyright-protected works for training by AI models. This is not only essential for safeguarding the value of Europe’s world-renowned creative content in a global marketplace, but also for ensuring that AI services generate outputs based on high-quality, diverse, and trust-worthy inputs.”

“We support the standards you have set in the AI Act that should enable the cultural and creative industries to evolve and thrive. We now ask for your continued support in translating them into concrete steps in the forthcoming implementation phase to ensure a fair and equitable framework, where AI and innovation in the EU also safeguards and strengthens cultural and creative industries.”

UNIFI Music Has Another Plan for AI — Artist Management https://www.digitalmusicnews.com/2024/10/29/unifi-music-ai-artist-management/ (Tue, 29 Oct 2024)

UNIFI Music Founder & CEO La’Shion Robinson (Photo Credit: UNIFI Music)

Finding and retaining effective management is a significant hurdle for many emerging artists. Now, UNIFI Music is building an AI-driven intelligent platform for that.

Superstar music careers frequently start on the fringes: in a poorly-lit rehearsal space, late at night on a laptop and a cracked DAW, or as part of a local scene that hasn’t yet crossed over.

If an artist or group is lucky, an ardent believer is pulling the strings to get gigs, upload tracks to DSPs, monitor royalties from different licenses and platforms, and settle disputes. But professional managers with acumen, experience, and connections are usually out of reach at the beginning.

And that’s a problem.

The real artist management pros are usually overloaded with their high-demand clientele. Even when they do take on emerging artists, they’re generally inaccessible or simply too expensive for artists in the early stages of their careers.

The music industry is laser-focused on the profound threat AI-generated music poses – which makes sense. But can AI fill a meaningful role in other areas like artist management?

That was the light bulb for execs at UNIFI Music, a company focused on building artist-focused solutions. “We’ve seen a huge need for artist management from artists in the 0-5 stages of their careers,” La’Shion Robinson, UNIFI’s founder and CEO, told Digital Music News.  “There’s simply an overload of tasks beyond the core competencies of creating music, building a cultural connection, and performing.”

‘Overload’ is a fitting descriptor.

From securing competent management to navigating the complexities of promotion and distribution, the path to success is often fraught with obstacles. And with tasks spanning social media engagement to booking gigs and navigating the complexities of streaming platforms, the workload can be immense – especially in the face of fierce competition.

With that problem in mind, UNIFI Music’s vision is to solve these pain points with an innovative AI-powered solution that could redefine artist management. That is perking up the ears of investors, many of whom feel that existing AI-related models in the music industry are overlapping and saturated.

“Here’s something extremely useful, relevant, with tremendous potential to scale,” Robinson summarized. Just recently, UNIFI joined forces with DMN to further expand their concept.

According to Robinson, AI can play a meaningful role in streamlining artist management and empowering emerging musicians. UNIFI’s AI-powered platform, called Sasha, will act as a centralized toolbox, offering a range of features and services to support artists in their career development.

Sasha is designed to complement platforms like SoundCloud, providing artists with a comprehensive suite of tools to manage their careers effectively. That includes a question-driven interface, with Sasha understanding virtually any language. “This isn’t just a customized ChatGPT,” Robinson continued. “Sasha employs LLMs to provide customized guidance to the artist.”

The SaaS-like Sasha will also integrate with UNIFI’s “LinkedIn for Music” platform, enabling artists to connect with industry professionals and build valuable relationships. The broader aim is to bolster intelligent, AI-driven management with a rich network of connected musicians and opportunities.

According to Robinson, UNIFI Music’s vision for Sasha extends far beyond simple task management.

“This is a brand-new, functional direction for AI in music,” Robinson relayed. “We’re building a complete AI manager built from the ground up for musicians, music companies, and the entire music managerial ecosystem.”

The ultimate goal is to create a virtual manager capable of strategically, tactically, and emotionally guiding an artist’s career. For existing managers, the platform helps to eliminate time-consuming ‘assistant’ tasks like venue research, social media posts, and transportation logistics. “There’s less need for an assistant manager and more opportunity to create the ‘super manager,'” Robinson described.

Currently, Sasha can handle tasks like social media recommendations and identifying promising venues. However, as the platform evolves, it will take on increasingly complex responsibilities like contract negotiation, release planning, and tour management.

Ultimately, UNIFI Music’s vision is to create a virtual manager capable of guiding an artist’s career toward success.

UNIFI’s Sasha in action.

Music management agencies may not like Sasha, but UNIFI’s vision is unique when compared to typical AI creation and management companies.

While the debate over AI-generated music continues, UNIFI Music is simply exploring the potential of AI in other areas of the industry. By leveraging AI’s capabilities, the company’s vision is to provide artists with personalized guidance and support, leveling the playing field and democratizing access to the tools and resources needed to succeed.

“UNIFI has the potential to revolutionize artist management and empower emerging musicians. We may also catapult fringe scenes and artists to the fore by boosting their industry savvy and experience overnight,” Robinson relayed. “That’s exciting stuff.”


If you’d like to connect with UNIFI Music, please contact La’Shion Robinson directly at l@unifimusic.ai.

Universal Music Group Enters Into a Strategic Collaboration with Ethical AI Music Company KLAY https://www.digitalmusicnews.com/2024/10/28/umg-klay-ethical-ai-collaboration-announced/ (Mon, 28 Oct 2024)

Photo Credit: Michael Nash / UMG

UMG partners with ethical AI music company KLAY in a strategic deal to pioneer a commercial, ethical foundational model for AI generated music.

Los Angeles-based AI music company KLAY has announced a partnership with Universal Music Group (UMG) on a pioneering commercial, ethical foundational model for AI generated music that works in collaboration with the music industry and its creators. KLAY aims to be the backbone to power a new era in products and experiences, committed to the premise that AI can bolster and grow musical creativity and human artistry.

At the core of UMG and KLAY’s shared vision is the conviction that state-of-the-art foundational AI models are best built and scaled ethically through constructive dialog and consensus with those responsible for the artistry that shapes global culture. Building generative AI music models ethically, with full respect for copyright as well as name and likeness rights, will dramatically lessen the threat to human creators and stands the best chance of being transformational, creating significant new avenues for creativity and future monetization of copyrights.

Led by accomplished executives from the fields of music and technology, including Ary Attie (music and tech visionary), Thomas Hesse (former President of Sony Music Entertainment), and Björn Winckler (joining soon from Google DeepMind), KLAY is committed to serving artists and songwriters and those who support them, including music publishers and labels, distributors, and other rights holders across the major and indie label landscape. KLAY is developing a global ecosystem to host AI-driven experiences and content, including accurate attribution, and will not compete with artists’ catalogs in traditional music services.

“We are excited to partner with entrepreneurs like the team leading KLAY, to explore new opportunities and ethical solutions for artists and the wider music ecosystem, advancing generative AI technology in ways that are both respectful of copyright and have the potential to profoundly impact human creativity,” said Michael Nash, Executive Vice President and Chief Digital Officer of Universal Music. “UMG has always endeavored to lead the music industry in driving innovation, embracing new technologies, and supporting entrepreneurship while protecting human artistry.”

“Research is critical to building the foundations for AI music, but the tech is only an empty vessel when it doesn’t engage with the culture it is meant to serve,” added Ary Attie, founder and CEO of KLAY. “KLAY’s obsession is not just to showcase its research innovation but to make it invisible and mission-critical to people’s daily lives. Only then can music AI become more than a short-lived gimmick. Our great artists have always embraced the newest technologies — we believe the next Beatles will play with KLAY.”

KLAY is developing a new Large Music Model (KLAYMM) that will significantly advance state-of-the-art Music AI. The company is currently in stealth but plans to launch in the coming months with a product that will revolutionize the way people think about music, presenting a new, intuitive music experience.

GEMA Elaborates on Its Generative AI Licensing Framework — Including Calls for ‘A 30% Share of All Net Income’ from Developers https://www.digitalmusicnews.com/2024/10/25/gema-ai-licensing-model-details/ (Fri, 25 Oct 2024)

An overview of the GEMA licensing model for generative AI platforms. Photo Credit: GEMA

One month ago, GEMA announced a licensing framework for generative AI, complete with rightsholder payments for derivative audio creations. Now, the German society is providing additional details about the aggressive proposal.

GEMA reached out with new information pertaining to the model, after we asked about its specifics in late September. In more words, a representative explained the month-long response window by emphasizing the many moving parts associated with developing an approach to music licensing for generative AI.

“Involved” only begins to describe the undertaking, which GEMA touted in September as the first “licensing approach aiming to balance technological progress and the protection of creative work.”

That same month, the 91-year-old entity indicated that its model, not solely addressing once-off payments from AI developers that trained on protected music sans permission, would look to compel ongoing rightsholder payments for derivative audio creations.

As suggested by the available resources, derivative audio referred to all the music creations pumped out by generative AI platforms trained on copyrighted works without licensing pacts in place.

GEMA is now elaborating that its proposal centers on “one licensing model” featuring “two key components.”

The first of those components would apply to all generative AI players active in Germany that utilized protected musical works at some point, regardless of how, where, and when their training processes occurred.

Far from subtle, the same component would then transfer to the relevant rightsholders “a 30% share of all net income generated by the generative AI model or system of the provider,” per GEMA, with a “minimum royalty” obligation in place to boot.
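Purely as a back-of-the-envelope illustration (GEMA hasn't published a minimum-royalty figure, so the numbers below are invented), the first component works out to something like:

```python
# Hypothetical illustration of GEMA's first licensing component: 30% of the
# provider's net income, subject to a minimum royalty. All figures are invented.
def training_component(net_income_eur: float, minimum_royalty_eur: float) -> float:
    return max(0.30 * net_income_eur, minimum_royalty_eur)

print(training_component(net_income_eur=1_000_000, minimum_royalty_eur=50_000))  # -> 300000.0
print(training_component(net_income_eur=100_000, minimum_royalty_eur=50_000))    # -> 50000 (the minimum applies)
```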

Of course, outlining the sizable payment is one thing, and actually getting AI companies to cough up is another challenge altogether. (OpenAI has already threatened to leave the EU over regulations, for instance.)

But it’s worth bearing in mind the EU’s enactment of the sweeping AI Act, which is still going into effect, and the unique regulatory environment in Germany itself. Running with these points, the second licensing component floated by GEMA will seemingly prove harder yet to make reality.

Payments should also be made for “all economic benefits that can arise from the subsequent use of AI-generated music,” including in public establishments and on streaming services, because this music resulted from an initial library of protected media, according to GEMA.

“In the future,” GEMA proceeded, “rights holders will so receive an appropriate share of the additional income generated by AI-produced songs. This share must be at least equivalent to what would have been provided for purely human-generated works.”

Though it looks as though many details still need to be ironed out – plus, the language barrier certainly isn’t helping given the highly complex subject at hand – it’ll be interesting to monitor GEMA’s push as well as the wider area of possible regulation on the AI-training side.

Meanwhile, high-stakes litigation is still plodding along in the space, with ongoing copyright cases against Anthropic, Suno, Perplexity, and Udio, to name a few.

Meta Inks an AI-Focused Deal with Reuters — Just Months After Reaching an AI-Related Accord with UMG https://www.digitalmusicnews.com/2024/10/25/meta-ai-focused-deal-reuters/ (Fri, 25 Oct 2024)

Photo Credit: Mark Zuckerberg by Anthony Quintano / CC by 2.0

Facebook parent company Meta inks a deal with Reuters to include the news agency’s content in responses from its AI chatbot.

Facebook’s parent company Meta has announced a multi-year deal with Reuters to include the news platform’s content in responses from its AI chatbot. Starting today, users of Meta’s AI chatbot in the US will have access to “real-time news and information from Reuters when they ask questions about news or current events,” according to an initial report from Axios.

The company’s AI chatbot is integrated into the search and messaging features across its apps, including Facebook, Instagram, Messenger, and WhatsApp.

According to a response from the chatbot when asked by TheWrap about the deal, “The partnership is a significant move for Meta, marking its first major AI news deal, and will enable users to access trustworthy information directly within the chatbot.”

“While most people use Meta AI for creative tasks, deep dives on new topics, or how-to assistance, this partnership will help ensure a more useful experience for those seeking information on current events,” said a Meta spokesperson.

According to Axios, Meta’s AI chatbot will cite Reuters’ stories in its answers and provide links to the outlet’s content. Reuters will be compensated for its reports.

The deal comes on the heels of Meta CEO Mark Zuckerberg saying his company would strike up partnerships to bolster its AI responses. This includes an expanded global agreement with Universal Music Group earlier this year, which included a framework for addressing “unauthorized AI-generated content.”

Meta has also been in talks with Apple for a generative AI partnership despite the two companies’ longstanding rivalry. While those discussions are not concrete, it’s notable that several companies are also courting Apple as the EU continues its battle to open the company’s “walled garden” approach to managing its App Store.

Nearly 70 Years Later, ‘Rockin’ Around the Christmas Tree’ Is Available in Spanish Thanks to ‘Responsibly-Trained AI’ https://www.digitalmusicnews.com/2024/10/25/brenda-lee-rockin-around-the-christmas-tree-ai/ (Fri, 25 Oct 2024)

Brenda Lee, who’s given the green light to an AI-created Spanish-language version of her 1958 holiday classic ‘Rockin’ Around the Christmas Tree.’ Photo Credit: UMG

Nearly seven decades after it was recorded by Brenda Lee, perennial holiday hit “Rockin’ Around the Christmas Tree” has received a Spanish-language re-release courtesy of artificial intelligence.

Universal Music Group (UMG) today announced the AI track, which is already live on streaming services. Now 79 years old, Lee first recorded “Rockin’ Around the Christmas Tree” in 1958, and the Christmas-playlist staple has, of course, remained commercially prominent since then.

With the apparent support of the appropriate artist, continued AI advancements, and strong streaming growth across several Spanish-speaking markets in Latin America, why not try to build on that prominence with a new version? Enter “Noche Buena y Navidad,” which UMG says resulted from “responsibly-trained AI technology.”

Featuring the existing instrumentals and background vocals, the updated song was produced by Auero Baqueiro, who also handled the preliminary step of adapting the relevant lyrics into Spanish, per Universal Music.

From there, Chile-born Leyla Hoyle recorded these lyrics in Spanish, working to mimic “Lee’s unique vocal patterns – matching pitch, tone, breaths, and phrasings of the original recording,” per the major label.

Next, the MicDrop creator SoundLabs (with which UMG partnered over the summer), drawing from “hours of isolated vocal stems from Lee’s UMG archives,” made “a unique bespoke AI vocal model,” UMG relayed of the involved process.

Said model was then applied to Hoyle’s recording to make it seem as though a young Lee had recorded the track in Spanish back when, for instance, color TV was still relatively new. Predictably, the final step was replacing the initial vocals with their AI-powered Spanish-language counterparts.

In a statement, the four-time Grammy nominee Lee communicated: “I am so blown away by this new Spanish version of ‘Rockin’ Around The Christmas Tree,’ which was created with the help of AI.

“Throughout my career, I performed and recorded many songs in different languages, but I never recorded ‘Rockin’’ in Spanish, which I would have loved to do. To have this out now is pretty incredible and I’m happy to introduce the song to fans in a new way,” concluded the Rock and Roll Hall of Fame inductee.

Looking ahead to the new year, while it might go without saying, all manner of additional (authorized) AI soundalike projects are presumably on the way. (Warner Music and Randy Travis in May used artificial intelligence to release the artist’s “first new music in more than a decade,” and closer to the present, Timbaland utilized Suno to help make a fresh single.)

Beyond those and other potential positives, the unprecedented technology, in many ways a runaway train more than anything else, appears poised to keep on fueling a variety of issues owing to its training specifics, the prevalence of unauthorized soundalike tracks, the sheer volume of audio it’s pumping out, and a whole lot else.

Bearing the obstacles in mind, Universal Music took the opportunity to conclude the “Noche Buena y Navidad” announcement message by reiterating its support for the NO FAKES Act.

Former OpenAI Researcher Highlights How OpenAI Violated Copyright Law in Training ChatGPT https://www.digitalmusicnews.com/2024/10/24/openai-researcher-copyright-law-violated/ (Thu, 24 Oct 2024)

Photo Credit: Andrew Neel

OpenAI has been fraught with leadership changes as executives pour out of the company like water through a sieve. The latest departure is a former researcher who says the company broke copyright law and is destroying the internet. Here’s the latest.

The New York Times reports on Suchir Balaji’s departure from OpenAI after he spent four years as an artificial intelligence researcher with the company. He was instrumental in helping OpenAI hoover up enormous amounts of data, scraping the web for knowledge to build out its large language models (LLMs).

Balaji told The NY Times that while working for OpenAI, he did not consider whether the company had a legal right to build its products by scraping data from other sources. He assumed any data published on the internet and available freely was up for grabs—whether the data was copyrighted or not. So pirate sites that archive copyrighted books, paywalled news sites, and even Reddit posts were fair game for the massive data machine.

Balaji says that in 2022 he thought harder about the company’s approach to data collection and concluded that OpenAI’s data gathering violated copyright law, and that technology like ChatGPT was damaging to the internet as a whole. In August 2024, Balaji departed the company because he believed OpenAI would cause more harm than societal benefit.

“If you believe what I believe, you have to just leave the company,” Balaji told The New York Times. Balaji joined OpenAI in 2020 at just 25 years old, drawn to the potential AI presents for problems like finding cures for diseases and stopping aging. Instead, he says he found himself at the helm of a technology that is “destroying the commercial viability of the individuals, businesses, and internet services that created the digital data used to train AI systems.”

Earlier this week, Balaji published an essay on his website detailing his concerns about the future of OpenAI. He believes that the way AI companies gather data does not fall within the ‘fair use’ defense that companies like OpenAI and Anthropic are arguing—and says regulation of AI is the only way out of this mess.

“While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data,” Balaji writes. “If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as ‘fair use.’”

“Because ‘fair use’ is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use.” Balaji points to traffic drops for major sites like Stack Overflow as potentially destroying the internet as new users ask their questions to generative AI models rather than the human help resource that the model was trained on. While OpenAI has arranged for licensing agreements with several newspapers, it still faces lawsuits from authors who say they did not consent to an LLM being trained on their copyrighted works.

Google Launches New Interface for MusicFX DJ AI Tool with Jacob Collier https://www.digitalmusicnews.com/2024/10/23/google-launches-new-interface-for-musicfx-dj-ai-tool-with-jacob-collier/ (Thu, 24 Oct 2024)

Photo Credit: Google

Google has partnered with Grammy Award-winning singer, songwriter, and producer Jacob Collier on its latest AI project, MusicFX DJ. It’s a generative music creation tool that can mix prompts for instruments, genres, and even emotions to steer the flow of a continuous live music stream.

Jacob’s work with the research team was geared toward helping users reach a creative flow state by asking, “What am I dreaming up today?” Collier says that question has always inspired his workflow. “You answering that question give you access to a kind of flow—once you’re in that flow you’re off, you’re going,” he says.

MusicFX DJ can accept multiple prompts with the ability to fine-tune how much emphasis each prompt has on the final continuous stream. Collier’s perspective helped inform the team’s approach to offering creative control and tools for musical collaboration and artistic innovation. The controls offer the ability to make the music fast or slow, bright or dark, or feature specific sounds from prompts that can be adjusted to be heavily influenced or only lightly present.

Creations made with MusicFX DJ can be shared with others to use as a creative jumping-off point. People can watch a 60-second playback of the performance and remix it by taking over the controls, adding their own prompts, and continuously building upon the original creation.

“Our collaboration with Jacob demonstrated the importance of experimentation in the creative process, and how working closely with artists can push the boundaries of creative tools,” Google says about MusicFX DJ. Generating a continuous stream of music that can be influenced with prompts makes it possible to create music with little know-how.

The new interface is much more intuitive than the interface for MusicFX DJ that Google debuted back in May 2024—highlighting Collier’s influence on the process. Alongside the new interface update for MusicFX DJ, Google also released updates for its Music AI Sandbox and YouTube’s Dream Track, giving Shorts creators the ability to generate high quality instrumentals for their YouTube Shorts.

Audio Collaboration Startup Highnote Scores $2.5 Million Raise With Support from Dropbox Ventures, Plots AI Buildout https://www.digitalmusicnews.com/2024/10/23/highnote-dropbox-funding/ (Thu, 24 Oct 2024)

Music collaboration startup Highnote has announced a $2.5 million round. Photo Credit: Highnote

Music collaboration startup Highnote has announced a $2.5 million raise with support from Dropbox Ventures.

New York City-based Highnote disclosed the multimillion-dollar capital influx today, after arriving on the scene back in 2022. With co-founders including Songtrust vet Paulina Vo, Chris Muccioli (previously with Spotify and Splice), and Jordan Bradley (who doubles as CEO), the collaboration platform bills itself specifically as “the best way to discuss and organize notes on any audio file.”

On the features front, Highnote offers lossless streaming, timestamped commenting (including voice comments), group chat, file-version management, secure storage, and more, according to its website. Monthly plans vary in price from free (15 tracks/50GB cloud storage) to $30 for Studio, which supports unlimited tracks and 5TB of cloud storage, the appropriate page shows.

Returning to the funding round, Highnote, as mentioned, has raised $2.5 million from Dropbox Ventures (which is zeroing in on “the next generation of apps and tools”) as well as existing backers Afore Capital, Character Capital, Brooklyn Bridge Ventures, and Precursor Ventures.

Also participating in this latest round are new angel-investor execs associated with Figma, Atlassian, Abstract, and Dropbox, besides returning angel investors at SoundCloud, Auth0, and Splice.

(On top of that long list of Highnote investors, other music collaboration startups, among them Baton, Submix, and most recently Ampollo, have scored multimillion-dollar raises, complete with a number of backers, of their own.)

Looking to the bigger strategic picture, Highnote intends to capitalize on the funds by exploring “AI-powered features aimed at enhancing the creative process,” with “comment summarization, tone analysis, and creative recommendations” all in the cards, per higher-ups.

And closer to the present, October 15th delivered a fresh Dropbox integration through which users can automatically open any stored audio files directly via Highnote. Meanwhile, November will see the collaboration startup roll out “a full 2-way integration, creating a seamless audio layer on top of any Dropbox account,” the business indicated.

Addressing his company’s $2.5 million raise, co-founder and CEO Jordan Bradley touched on the funding’s ability to accelerate expansion plans at the intersection of AI and collaboration.

“As AI accelerates content creation, content collaboration is at an all-time high,” communicated the former Mighty lead product designer. “We built the industry’s best audio workflow layer so that collaborators can stay organized, efficient, and in control—no matter how fast things are moving.

“This partnership with Dropbox Ventures allows us to deepen our commitment to creators and accelerate our plans to provide a foundational layer for AI powered audio workflows,” concluded Bradley.

What Major Label Infringement Battle? AI Music Startup Suno Scores Exclusive Timbaland Pre-Release Under Broader Partnership Deal https://www.digitalmusicnews.com/2024/10/22/suno-timbaland-deal/ (Wed, 23 Oct 2024)

Photo Credit: Timbaland

What major label copyright infringement battle? AI music startup Suno has inked a partnership deal with Timbaland – including an exclusive pre-release of the 52-year-old’s latest single.

Suno, still engaged in a high-stakes legal showdown with Universal Music, Sony Music, and Warner Music, unveiled its far-reaching Timbaland tie-up today. Perhaps the most noteworthy component of the union (in part because it could mark the start of a broader trend) is the above-mentioned exclusive pre-release of “Love Again,” which is now streaming via a dedicated page on Suno’s website.

But the evidently involved agreement doesn’t end there. Suno has also put out a debut episode of MUSE, billed as a “branded content series that demonstrates how Suno empowers music creators by both igniting new ideas and reviving forgotten or unfinished tracks.”

In the six-minute upload, Timbaland in more words touts Suno (and specifically its “Covers” tool, which generates variations of one’s own works) as a cutting-edge asset to the contemporary creative process. Of course, a number of rightsholders have a decidedly different view of the AI business and artificial intelligence generally.

And while it remains to be seen whether Suno can change these negative perceptions – a concrete answer to the ever-important fair use training question will prove important here – it’s certainly working to do so.

The effort further encompasses a “Love Again” remix contest for fans, who will, according to the appropriate website, have the chance to win $100,000 in prizes by “reimagining” the track via Suno. Public access to the single’s stems is set to open up at 9 AM PST tomorrow.

Unsurprisingly, Suno is capitalizing by encouraging individuals “from Grammy-winning producers to up-and-coming artists” to give its “cutting-edge, AI-powered editing tools” a try as well. Suno, a portion of the relevant text reads, “supports you through every step of the creative process—from generating fresh ideas to preparing tracks for release.”

Addressing his Suno pact, the Verzuz co-founder Timbaland emphasized a perceived “unique opportunity to make A.I. work for the artist community and not the other way around” – besides a chance “to open up the floodgates for generations of artists to flourish on this new frontier.”

In comments of his own, Suno CEO Mikey Shulman struck an optimistic tone about the future.

“It’s an honor to work with a legend like Timbaland,” communicated Shulman. “At Suno, we’re really excited about exploring new ways for fans to engage with their favorite artists. With Timbaland’s guidance, we’re helping musicians create music at the speed of their ideas—whether they’re just starting out or already selling out stadiums. We couldn’t be more excited for what’s ahead!”

Only time will tell exactly “what’s ahead” for Audible Magic-partnered Suno and other music-focused AI players, which appear unlikely to resolve training-related infringement disputes by appealing directly to creators.

However, the suits seem as though they’ll take some time to play out and might not be a slam dunk for the plaintiffs – raising interesting questions about what the landscape will look like should Suno and others achieve material adoption-rate growth in the interim.

Thom Yorke, Björn Ulvaeus, Max Richter, Billy Bragg Among 11,500+ Creatives Demanding ‘AI Training Guardrails’ https://www.digitalmusicnews.com/2024/10/22/creatives-demand-ai-training-guardrails/ (Tue, 22 Oct 2024)

Photo Credit: Growtika

With multiple ongoing lawsuits against several AI companies in the United States alone, creatives are taking a stance against their works being used as training data for large language models (LLMs) and more.

Meanwhile, in the United Kingdom, the government has said it would like to change copyright law and allow these AI companies to train on copyrighted works without a license in place. The Human Artistry Campaign has organized a petition signed by more than 11,500 actors, artists, authors, musicians, and organizations against this move.

The petition reads, “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works and must not be permitted.”

Thom Yorke, Björn Ulvaeus, Max Richter, and Billy Bragg are just a few of the top creatives who have signed the petition, urging more guardrails for AI training. They’ve joined the Human Artistry Campaign in seeking the advancement of responsible AI, working to ensure the technology is developed in ways that strengthen the creative ecosystem rather than gut it.

In the United States, OpenAI, Anthropic, and now Perplexity are some of the major tech companies facing lawsuits over their use of copyrighted works in their training data. Anthropic has argued that this constitutes ‘fair use’ under current copyright law—a defense that will soon be tested in court.

The owners of the Wall Street Journal and the New York Post recently launched a lawsuit against Perplexity for copyright infringement. They allege that the San Francisco company owes its success to a “brazen scheme to compete for readers while simultaneously free-riding on the valuable content” produced by these two companies. Both publishers reached out to Perplexity about a potential licensing deal for their content but filed suit after never receiving a response.

They allege that Perplexity has copied hundreds of thousands of their articles without permission for use in its retrieval-augmented generation (RAG) database. They call this practice non-transformative and allege that it does not constitute fair use, as the articles are preserved in their entirety for Perplexity’s AI to recall at will when asked. Perplexity can produce detailed, quote-heavy summaries of paywall-protected articles from both the Wall Street Journal and New York Post websites.

Spotify Launches Mobile Custom Playlist Cover Art Creator (Tue, 22 Oct 2024) https://www.digitalmusicnews.com/2024/10/22/spotify-launches-mobile-custom-playlist-cover-art-creator/

Photo Credit: Spotify

Spotify is bringing custom playlist art creation to mobile with a new feature in beta. Both Free and Premium users can now generate custom art for their playlist creations on mobile.

The cover art can feature unique images, colors, text effects, graphic elements, and more—allowing more creativity for those custom playlists. The new feature is available on both iOS and Android as long as you’ve updated to the latest version of Spotify. Ready to create a new look for your custom playlists? Here’s a quick guide on how to do so using this new feature.

How to Create Custom Playlist Cover Art on Spotify

  1. Open the Spotify app on your iOS or Android device.
  2. Select a playlist you personally created—or start creating a new one.
  3. Tap the three-dot menu on the playlist page and select ‘Create Cover Art.’
  4. Upload an original photo of your own, or choose from a variety of custom options.
  5. Add text styles, colors, and effects to the image.
  6. Change the background color and gradients, or use image masking and visual effects.
  7. Optionally, add the exclusive artist stickers Spotify has included for this feature.
  8. Once complete, the playlist will be updated with your newly created cover art.

Spotify says you can only save one custom cover per playlist at a time, and each new cover you create will override the previous one for that specific playlist. To keep multiple covers, you’ll have to make a copy of the playlist first. This new custom playlist cover art feature on mobile is currently in beta across 65 markets in English.
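
Separately from the in-app creator, developers can also set a playlist’s cover programmatically through Spotify’s existing Web API endpoint for custom playlist images. The sketch below is a minimal example, assuming a valid OAuth access token with the ugc-image-upload and playlist-modify scopes and a reasonably small JPEG; the token, playlist ID, and file path are placeholders.

```python
# Hypothetical sketch, not part of the new in-app creator: uploading a finished
# JPEG as a playlist cover via Spotify's Web API. Assumes an OAuth token with
# the ugc-image-upload and playlist-modify scopes; playlist ID and file path
# below are placeholders.
import base64
import requests

def upload_playlist_cover(token: str, playlist_id: str, jpeg_path: str) -> None:
    # The endpoint expects the request body to be Base64-encoded JPEG data.
    with open(jpeg_path, "rb") as f:
        encoded = base64.b64encode(f.read())

    resp = requests.put(
        f"https://api.spotify.com/v1/playlists/{playlist_id}/images",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "image/jpeg"},
        data=encoded,
    )
    resp.raise_for_status()  # Spotify responds 202 Accepted while it processes the image

# upload_playlist_cover("BQD...", "3cEYpjA9oz9GiPac4AsH4n", "cover.jpg")
```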

As part of the feature launch, Spotify has partnered with music artists, visual creators, and the artists behind some of the most iconic album art. Music artists in on the partnership include Clairo, Jamie xx, and Arlo Parks. Album artwork experts include creative director Imogene Strauss, Adrian Hernandez, and Cey Adams.

Perplexity Faces Copyright Suit Over Alleged ‘Massive’ Infringement of NY Post and Wall Street Journal Articles (Tue, 22 Oct 2024) https://www.digitalmusicnews.com/2024/10/21/perplexity-ny-post-lawsuit/

The New York Post printing plant in the Bronx. Photo Credit: Jim Henderson

In a case that could establish precedent relevant to multiple music industry lawsuits against generative AI companies, the owners of the Wall Street Journal and the New York Post are suing Perplexity for copyright infringement.

Dow Jones & Company as well as NYP Holdings submitted that copyright complaint to a New York federal court, naming San Francisco-based Perplexity as the lone defendant. Billing itself as today’s “most powerful answer engine,” the startup counts Jeff Bezos and Nvidia among its stakeholders.

Against the backdrop of sizable funding rounds and massive valuations in the AI space, the just-filed action points to a possible $3 billion market worth for Perplexity – though reports today suggested that the business is looking to raise $500 million at a whopping $8 billion valuation.

Conveyed in different words, it’s an understatement to say that ample cash is floating around the AI world. But according to the corporate entities behind the Journal and the Post, Perplexity in particular owes its success to a “brazen scheme to compete for readers while simultaneously freeriding on the valuable content” at hand.

As recounted in the 42-page suit, the plaintiffs reached out to the defendant in July of 2024 with a letter describing infringement concerns and “offering to discuss a potential licensing deal.” (Separately, the New York Times recently sent Perplexity a cease-and-desist letter concerning alleged infringement, Reuters reported.)

Predictably, in light of the fresh complaint, the filing parties say they never received a response from Perplexity; their parent company had previously finalized a licensing pact with ChatGPT developer OpenAI.

Shifting to the actual copyright claims, the complaint contrasts with previously filed actions against generative AI developers (including Amazon-backed Anthropic, OpenAI, and more) by accusing Perplexity of infringement at several stages.

First, the platform, often used to summarize news, allegedly “copied hundreds of thousands” of copyrighted Journal and Post articles without permission for its retrieval-augmented generation (RAG) database. Taking aim at arguments made by other AI giants, the action rather directly claims the alleged practice isn’t transformative and doesn’t constitute fair use.

In a nutshell, the RAG database, distinct from the much-discussed training process for large language models, is said to house a continually updated (via web scraping) collection of information for use in AI-generated answers to user questions (including requests for breakdowns of articles, for example).
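
For readers unfamiliar with the pattern, a minimal, generic sketch of a retrieval-augmented generation pipeline follows; it is emphatically not Perplexity’s implementation, which hasn’t been made public, and the toy hashing “embedding” merely stands in for a real embedding model.

```python
# A minimal, generic sketch of the RAG pattern described above -- not Perplexity's
# actual system, which hasn't been published. A toy hashing "embedding" stands in
# for a real embedding model; the idea is: index documents once, retrieve the
# closest matches for a query, and stuff them into the prompt a model answers from.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0          # crude bag-of-words hashing
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class ToyRAGStore:
    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, doc: str) -> None:          # the ingestion / web-scraping step
        self.docs.append(doc)
        self.vecs.append(embed(doc))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        scores = np.array(self.vecs) @ embed(query)   # cosine similarity (unit vectors)
        return [self.docs[i] for i in np.argsort(scores)[::-1][:k]]

store = ToyRAGStore()
store.add("Dow Jones and NYP Holdings sued Perplexity in New York federal court.")
store.add("The complaint alleges unlicensed copying of articles into a RAG index.")
context = store.retrieve("Who sued Perplexity?", k=1)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: Who sued Perplexity?"
# `prompt` would then be handed to a language model of your choice.
```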

(Incidentally, at the time of this writing, the AI platform was declining to use the Post article about the lawsuit to create a summary of the matter, even when asked to do so. Citations are featured prominently beside Perplexity answers but, according to the plaintiffs, render “users less inclined to visit the original content source” and generate “virtually no click-through traffic” in any event.)

Next, Perplexity’s “full or partial verbatim reproductions of” copyrighted articles allegedly constitute independent instances of copyright infringement. That includes detailed, quote-heavy summaries of paywall-protected Journal coverage as well as entire Post pieces.

Furthermore, the AI defendant allegedly makes additional unauthorized copies of “articles to preserve the outputs it generates in another database that it uses for analytical and other purposes.” The exact quantity of alleged copies is unclear, but the plaintiffs say “each individual electronic copy constitutes its own infringement subject to statutory damages under the Copyright Act.”
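
For a rough sense of what statutory damages can mean at this scale, here’s a back-of-envelope sketch using the per-work ranges in 17 U.S.C. § 504(c); the work count is hypothetical and not drawn from the complaint, and courts award statutory damages per work infringed rather than per copy.

```python
# Back-of-envelope only. Statutory damages under 17 U.S.C. § 504(c) run $750 to
# $30,000 per *work* infringed (up to $150,000 per work if willful) -- not per
# copy, which is one reason the "each copy is its own infringement" theory will
# be contested. The 10,000-work count below is hypothetical, not from the filing.
works_at_issue = 10_000
per_work_min, per_work_max, willful_cap = 750, 30_000, 150_000

print(f"Minimum exposure:      ${works_at_issue * per_work_min:,}")   # $7,500,000
print(f"Maximum (non-willful): ${works_at_issue * per_work_max:,}")   # $300,000,000
print(f"Ceiling if willful:    ${works_at_issue * willful_cap:,}")    # $1,500,000,000
```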

Lastly, Perplexity allegedly produces “made-up text (hallucinations) in its outputs” and then falsely attributes said text, sometimes alongside genuine quoted materials, to specific articles and authors from the plaintiff publications. Among other things, the alleged practice is “likely to cause confusion or mistake,” according to the suit.

“This conduct likewise harms the news-consuming public,” the complaint sums up towards its end. “Generating content for advertisement or subscription revenue is unsustainable if the content is taken en masse and reproduced by bad-faith actors for substitutive commercial purposes.”

All told, the plaintiffs are seeking substantial damages and a number of orders – one barring the unauthorized copying of protected materials and another calling for the “destruction of any index or database created by Perplexity that contains” the same materials, to name a couple.

Penguin Random House Takes Strong Stance Against AI — No Mistaking This Updated Copyright Language (Mon, 21 Oct 2024) https://www.digitalmusicnews.com/2024/10/21/penguin-random-house-takes-strong-stance-against-ai/

Photo Credit: Penguin Random House (UK CEO Tom Weldon)

Penguin Random House has updated its wording on its copyright pages to better protect its authors’ intellectual property from AI uses. The language specifically addresses large language models (LLMs) and other artificial intelligence (AI) tools.

A report from The Bookseller details these changes across all of the publisher’s imprints globally, with PRH confirming the new guidelines will appear “in imprint pages across our markets.” The new wording states, “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.”

These new notices will be included in all new titles and any backlist titles that are reprinted. The statement also “expressly reserves [the titles] from the text and data mining exception,” in accordance with a European Parliament directive. The Bookseller reports that PRH UK CEO Tom Weldon said in a memo to staff in August 2024 that the trade publisher would “vigorously defend the intellectual property that belongs to our authors and artists.”

“It is encouraging to see major publishers like PRH adopt new wording in their printed materials that reaffirms the principle of copyright and explicitly forbids technology companies from using copyrighted works to train their models,” says Barbara Hayes, CEO of The Authors’ Licensing & Collecting Society. “We hope more publishers follow [PRH’s] lead and that those companies developing such models take urgent notice.”

Several publishers have sent cease-and-desist letters to some of the larger LLM platforms, taking practical steps to prevent their copyrighted works from being scraped or used for LLM training. When other major publishers were asked about updating their copyright wording, Pan Macmillan, Hachette, and Simon & Schuster all declined to comment.
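
On the technical side, one practical step publishers take is declaring in robots.txt that known AI training crawlers are unwelcome. The sketch below, using only Python’s standard library, checks a site’s robots.txt against a few commonly cited crawler user agents; the crawler names are illustrative, they can change, and honoring robots.txt remains voluntary on the crawler’s side.

```python
# A minimal sketch of one such practical step: checking whether a site's
# robots.txt disallows well-known AI training crawlers. The user-agent names
# below (GPTBot, CCBot, ClaudeBot, Google-Extended) are commonly cited but can
# change, and compliance with robots.txt is voluntary on the crawler's side.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "CCBot", "ClaudeBot", "Google-Extended"]

def blocked_ai_crawlers(site: str, path: str = "/") -> dict[str, bool]:
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()                                  # fetch and parse the live robots.txt
    target = f"{site.rstrip('/')}{path}"
    return {bot: not rp.can_fetch(bot, target) for bot in AI_CRAWLERS}

# Example with a hypothetical domain:
# print(blocked_ai_crawlers("https://www.example-publisher.com"))
```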

Faber could not be reached for comment, though it recently adopted an ‘AI Policy’ prohibiting freelancers working with its authors’ books from copying any of the information into an AI program “for the purposes of editing, checking, extraction, or any other purpose.”

Music Funding Slipped In Q3 2024 to $425 Million — Who’s Still Getting the Startup Cash? (Wed, 02 Oct 2024) https://www.digitalmusicnews.com/pro/weekly-funding-q3-2024/

Music Industry Investment by Category, Q3 2024 (Source: DMN Pro)

In keeping with existing 2024 trends, third-quarter music industry funding rounds fell well short of their Q3 2023 counterparts in both volume and cumulative value.

Even so, the newly completed quarter wasn’t without signs of a possible rebound.

These and other insights are made possible by the Music Industry Funding Tracker, DMN Pro’s one-stop database of raises from in and around the music world. The Tracker, which complements this report, includes round types, amounts, participating investors, and more dating back to 2014. It also reveals several interesting funding trends developing this year.

In general, those trends haven’t been positive – referring in part to material year-over-year falloffs in multiple months and whole quarters.

Furthermore, the sizable decreases would have been more significant if not for massive 2024 raises like the $1 billion secured by Iconic Artists Group in February.

Here, we’ve crunched the numbers to see what changed during 2024’s third quarter and to bring out pertinent takeaways. Also included is an updated breakdown of year-to-date funding across Q1 through Q3 of 2023 and 2024.

Table of Contents

I. Introduction: An Overview of Music Industry Funding in Q3 2024

II. Q3 2024 Funding by the Numbers: Industry and Industry-Adjacent Companies Raised $425.28 Million Across 16 Rounds

Graph: Q3 Music Industry Funding by Total Value, 2023 v. 2024

III. Q3 2024 Funding by Category – What Kinds of Companies Are Investors Betting On?

Graph: Q3 2023 Music Industry Funding Rounds by Company Type

Graph: Q3 2024 Music Industry Funding Rounds by Company Type

IV. 2024’s Industry Funding in the Bigger Picture – YTD Figures Show a Massive Capital Decrease

Graph: Total Q1-Q3 Music Industry Funding, 2023 v. 2024

Please note: this report is for DMN Pro subscribers only. Please do not redistribute — we appreciate it!


Gavin Newsom Vetoes AI Safety Bill Despite Overwhelming Hollywood Support (Mon, 30 Sep 2024) https://www.digitalmusicnews.com/2024/09/29/gavin-newsom-vetoes-ai-safety-bill/

Photo Credit: Tim Wildsmith

California Governor Gavin Newsom vetoes the AI safety bill SB 1047 despite its overwhelming Hollywood support.

California Governor Gavin Newsom has vetoed a bill that took aim at the ever-increasing risks of advanced generative artificial intelligence models. The bill, SB 1047, became one of the most hotly debated topics of the legislative session, as numerous opponents and supporters lined up on either side.

Newsom wrote an accompanying letter with his veto, explaining that while the bill addresses a genuine issue, it does not establish the appropriate regulatory framework. “I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom wrote. “Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”

While announcing the veto over the weekend, Newsom also said he would convene experts to develop regulations to promote the safe development of AI. He says he will continue working on the issue in 2025.

Still, the news is devastating to the bill’s many supporters, including the Hollywood actors’ union SAG-AFTRA, which has been perhaps the most outspoken about the looming threat of AI using actors’ and creatives’ likenesses and work without their consent.

Additionally, a group known as Artists For Safe AI released an open letter last week in support of the bill, whose signatories include J.J. Abrams, Rob Reiner, Jane Fonda, Mark Hamill, and numerous other actors, writers, and directors.

“It really stems from the fact we have experienced firsthand the dangers of one aspect of AI,” said Jeffrey Bennett, general counsel for SAG-AFTRA. “This bill seems to be the one bill that targets only the incredibly powerful expensive systems that have the capability to cause a mass critical problem. Why not regulate at that level? Why not build in some sensible, basic safety protocols at this stage of the game?”

SAG-AFTRA has supported two other AI-related bills in California this year designed to regulate the use of AI in the entertainment sector. Newsom signed both earlier this month.

GEMA Eyes Royalties for AI-Generated Derivative Works Under New Licensing Model — Flat Training Payments Are ‘Not Nearly Sufficient to Compensate Authors’ (Fri, 27 Sep 2024) https://www.digitalmusicnews.com/2024/09/27/gema-ai-licensing-model/

Are the original creators of musical works used to train AI models entitled to a stake in the resulting derivative songs? Germany’s GEMA believes so, and it’s announced a new licensing model in pursuit of the objective. Photo Credit: Luca Bravo

Once-off payments are inadequate for authors whose works have been used to train generative AI models – at least according to Germany’s GEMA, which says it’s created the “first licensing model” tackling royalties racked up by derivative songs.

The Berlin-based collecting society and PRO reached out with an overview of the royalties framework, which it initially unveiled at the Reeperbahn Festival earlier in September. At the top level, it’s worth noting that this push for bolstered author protections has arrived amid the implementation of the EU’s sweeping AI Act.

Among many other things, the latter is expected to compel generative-model developers to disclose precise details about the media used to train their systems. That will presumably set the stage for the relevant recording and compositional rightsholders to seek payments for their IP’s (unauthorized) use.

But what about when AI factors prominently into works performed in public establishments? Just scratching the surface here, far-reaching questions remain with regard to measuring the percentage of each creation that’s attributable to AI.

That’s a departure from the comparatively straightforward existing process of identifying public plays (preferably with exact measurements as opposed to extrapolations) and then compensating the appropriate authors accordingly.

Perhaps more pressingly on the AI side, what about the public performance of derivative works that only exist thanks to generative models that were trained (with or without permission) on protected music?

Of course, there aren’t any direct answers at present – with even larger unknowns when it comes to developing a system to register the usages, particularly in light of the ongoing legal battles over where the AI-training copyright line will be drawn.

Nevertheless, GEMA says it’s “the first collecting society worldwide to develop a licensing approach aiming to balance technological progress and the protection of creative work.”

“Pure remuneration through a buyout, i.e. a one-off lump sum payment for training data,” the organization proceeded, “is not nearly sufficient to compensate authors in view of the revenues that can be generated. The model provides for fair remuneration at a high level while keeping in mind that the market and its technical developments can change dramatically and rapidly.”

Rather, “authors must be adequately involved in the subsequent generation of AI content based on their creative work,” the entity emphasized.

It’d be an understatement to say that fleshing out, implementing, and ensuring compliance with this system will prove a tall task. GEMA offered only an overview in its formal release; DMN requested additional details but didn’t receive a response in time for publishing.

In any event, the push raises interesting questions about yet another component of the AI explosion. Pre-cleared music for use in public establishments (and specifically those that are unconcerned with playing today’s top hits) is more widely available than ever, and many of the involved companies are harnessing AI.

But multiple AI developers say their models didn’t train on protected media at all, and in the bigger picture, AI tracks will undoubtedly make a mainstream commercial splash at some point.

Zuckerberg Weighs In on AI & Copyright Amid Orion Glasses Debut (Thu, 26 Sep 2024) https://www.digitalmusicnews.com/2024/09/26/zuckerberg-weighs-in-on-ai-copyright-amid-orion-glasses-debut/

Photo Credit: The Verge YouTube

Meta’s Mark Zuckerberg is betting big on AR glasses as the next tech accessory to take over our lives after smartphones. He recently debuted the Orion augmented reality glasses, which the company says are too complicated and expensive to take to market at the moment. But that will change.

Orion AR glasses are a custom-built computer for your face, which has become the new focus for many tech companies. (What happened to the metaverse?) They were designed by Meta and feature micro LED projectors inside the frame that beam graphics in front of your eyes via waveguides in the lenses. Zuck says he believes people will want AR glasses for two purposes—communicating with each other through digital information overlaid on the real world, and interacting with AI.

Zuckerberg also sat down for an interview with The Verge, discussing the potential applications for this current prototype. During that interview, he was asked about AI training data and how it’s used and whether or not he sympathizes with creators who see their work used without adequate compensation.

“I think that in any new medium in technology, there are concepts around fair use and where the boundary is between what you have control over,” Zuckerberg told The Verge during that interview. “When you put something out in the world, to what degree do you still get to control it and own it and license it?”

“I think that all these things are basically going to need to get re-litigated and re-discussed in the AI era. I get it. These are important questions. I think this is not a completely novel thing to AI, in the grand scheme of things. There were questions about it with the internet overall too, with different technologies over time. But getting to clarity on that is going to be important, so that way, the things that society wants people to build, they can go build.”

When asked if he sees a scenario where creators get directly compensated for the use of their content to train AI models, Zuck became a bit cagey.

“I think there are a lot of different possibilities for how stuff goes in the future. Now, I do think that there’s this issue. While, psychologically I understand what you’re saying, I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of this.”

Look Out, Curators — Spotify Expands ‘AI Playlist’ Beta Into the U.S., Ireland, and More (Thu, 26 Sep 2024) https://www.digitalmusicnews.com/2024/09/25/spotify-ai-playlist-expansion/

Spotify has officially brought its AI Playlist feature to the U.S. and other markets. Photo Credit: Spotify

Almost six months after launching an AI Playlist beta in the U.K. and Australia, Spotify is bringing the feature to the U.S. and other nations.

The streaming platform just recently announced AI Playlist’s availability expansion, which has arrived roughly nine months following initial rumblings of auto-generated playlist tests. Especially in light of the strong consumer reception behind Spotify artificial intelligence offerings like AI DJ and Daylist, the AI Playlist embrace didn’t exactly come as a surprise.

Now, paid subscribers in the States, Canada, Ireland, and New Zealand can also access the tool, which, as its name suggests, auto-generates playlists based on text prompts. A number of fans are already taking to social media to weigh in on the more widely available feature.

Beyond these early AI Playlist comments, the newest artificial intelligence buildout is important on multiple levels for Spotify. First, the development of AI Playlist, AI DJ, and Daylist has quietly expanded the service’s sway in the recommendation and promotion departments.

While many know streaming platforms generally favor major label acts, there’s relatively little discussion about the spots contractually guaranteed to high-profile artists on lucrative editorial playlists. In short, even setting aside the comparatively pressing AI music deluge hitting streaming platforms, the point could prove significant amid the ongoing evolution of recommendations.

Also worth keeping in mind is the way that AI Playlist and more will potentially fit into Spotify’s forthcoming “Deluxe” package. It’s not by chance that AI Playlist is available only to paid users, and reports have connected more advanced AI options yet, like mixing support, to Deluxe. (A long-elusive launch date for the tier, also referred to as “Supremium” and “Music Pro” in recent years, still hasn’t been nailed down.)

Bigger picture, Spotify is hardly alone in capitalizing on AI products, which are becoming increasingly prevalent in the ultra-competitive streaming arena.

Amazon Music jumped into AI playlist generation with Maestro in April, for instance. Not to be outdone, Deezer joined the AI party by rolling out “Playlist with AI” in July, YouTube Music began experimenting with AI radio stations that same month, and Apple Music reportedly started testing the AI artwork waters.

Meanwhile, it remains to be seen whether Apple will invest in OpenAI as suggested by multiple reports about one month back. Of course, this and other AI giants are embroiled in several copyright infringement lawsuits centering on their training processes and adjacent outputs.

If You Liked the Weeknd + Drake Deepfake, You’ll Love This Justin Bieber One About Diddy (Wed, 25 Sep 2024) https://www.digitalmusicnews.com/2024/09/25/justin-bieber-deepfake-diddy/

Photo Credit: Justin Bieber by Joe Bielawa / CC by 2.0

An AI-generated Justin Bieber song with lyrics referencing Sean ‘Diddy’ Combs begins circulating on social media, fooling many listeners into believing its authenticity.

Back in April, a new AI-generated song emerged, made to sound like Justin Bieber, and began circulating on social media. At first, the song, which features lyrics referencing Sean “Diddy” Combs, tricked many listeners into believing it was an authentic Justin Bieber release.

With lyrics like “Lost myself at a Diddy party / Didn’t know that’s how it goes / I was in it for a new Ferrari / But it cost me way more than my soul,” the song’s release coincided with the legal troubles mounting against Combs, who is now in federal custody on racketeering and sex trafficking charges. This certainly contributed to the song’s viral spread and highlights the potential for AI to spread misinformation at a breakneck pace.

The song was identified conclusively as a deepfake, but that hasn’t curbed its wide circulation. It’s been used heavily as a background track in thousands of TikTok videos, and it first emerged at a time when speculation was rife over the ever-growing mountain of charges against Diddy.

To make matters even more confusing for fans, the 30-year-old Bieber developed a close relationship with Combs early in his career. The two even collaborated as recently as 2023 on the track, “Moments.” Old clips of a then-teenaged Bieber spending time with Combs have also resurfaced, and Justin Bieber has yet to publicly comment on any of the allegations against the 54-year-old rapper and producer.

Combs was arrested last week and charged with sex trafficking, racketeering, and transportation to engage in prostitution. The latest lawsuit against him was filed yesterday (September 24), accusing him and his bodyguard of “viciously” raping a woman in a New York City recording studio over 20 years ago.

SoundExchange Releases Registry of Tracks Authorized for AI Use — Too Little, Too Late? (Mon, 23 Sep 2024) https://www.digitalmusicnews.com/2024/09/23/soundexchange-releases-registry-of-tracks-authorized-for-ai-use-too-little-too-late/

Photo Credit: Microsoft Copilot

SoundExchange is developing a global artificial intelligence (AI) registry for sound recording creators and rights owners. But most AI companies have scraped copyrighted data already to train their models. Is this too little, too late?

SoundExchange President & CEO Michael Huppe shared the information during a discussion with artist Timbaland about music rights at the Fast Company Innovation Festival in Manhattan last week.

This new registry will provide a much-needed resource for creators and rights owners to protect their rights related to the use of their content in AI models. It will allow them to reserve those rights, if they so choose, against training by AI algorithms. While U.S. law does not require such a reservation to protect creators’ rights, the global registry will be another tool to help AI companies properly handle their training data and to help facilitate similar protections in Europe and elsewhere.

SoundExchange says it plans to launch this registry in Q1 2025, as an evolution of systems purpose-built by SoundExchange for the collection and distribution of recording royalties. The registry will utilize SoundExchange’s authoritative and comprehensive international standard recording code (ISRC) database. Companies building AI training models will be able to reference the database of authorization declarations before ingesting recordings.

“The rapid proliferation of companies building and leveraging AI music models demands creators have an ability to declare easily whether or not they want their work used in that process,” says SoundExchange CEO Michael Huppe. “Our driving mission is to simplify the music industry and protect the value of music.”

“Because of our role in the music industry and our authoritative data, SoundExchange is in a unique and trusted position to create an AI sound recordings registry. We see this as another opportunity to bridge the information gap while keeping control in the hands of creators and rights owners and providing AI companies with a centralized resource for researching consent.”

Record labels and other rights owners would still have the ability to undertake a reservation of rights individually with each AI company. The SoundExchange AI registry will supplement that ability and facilitate economies of scale in the notification process. The database will be a voluntary tool, and rights owners will maintain all legal rights to their recordings regardless of their listing in the registry.
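
SoundExchange hasn’t published an interface for the planned registry, but the workflow it describes, checking authorization declarations keyed to ISRCs before ingesting recordings, might look something like the hypothetical sketch below; the ISRCs and data structures are invented for illustration.

```python
# Purely hypothetical sketch: SoundExchange hasn't published an interface for the
# planned registry, so this only illustrates the workflow described above --
# checking authorization declarations keyed by ISRC before ingesting recordings
# into a training set. ISRCs and structures here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Recording:
    isrc: str
    title: str

# Stand-in for the registry's "do not train" declarations.
reserved_isrcs = {"USXXX2400001", "USXXX2400002"}

candidates = [
    Recording("USXXX2400001", "Reserved Track"),
    Recording("USYYY2400007", "Unreserved Track"),
]

# Keep only recordings whose rights owners have not declared a reservation.
trainable = [r for r in candidates if r.isrc not in reserved_isrcs]
print([r.title for r in trainable])   # ['Unreserved Track']
```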

While the effort is a noble one, it feels a bit like creating a registry of horses to ride after they’ve already been let out of the barn. AI companies like Anthropic are arguing in court that training their AI models on copyrighted music falls under “fair use,” with the outcome of that argument remaining to be seen.

Spotify Joins Meta in Open Letter to EU Decrying ‘Inconsistent Regulatory Decision Making’ (Thu, 19 Sep 2024) https://www.digitalmusicnews.com/2024/09/19/spotify-meta-eu-open-letter-on-ai-regulation/

Photo Credit: Alexey Larionov

A group of companies led by Meta and including Spotify have issued an open letter to the European Union concerning “fragmented and inconsistent” decision-making on artificial intelligence and data privacy.

Meta, Spotify, and several other companies and researchers have signed the open letter claiming that Europe has become less competitive and risks falling behind in the age of AI. The signatories seek “harmonized, consistent, quick and clear decisions” from data privacy regulators to “enable European data to be used in AI training for the benefit of Europeans.”

The letter specifically takes issue with the General Data Protection Regulation (GDPR), which took effect in 2018. Notably, Meta has halted plans to harvest data from European users to train AI models after privacy regulators put pressure on the company and issued fines for failing to respect privacy laws.

“In recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models,” the letter reads.

“This means the next generation of open source AI models, and products, services we build on them, won’t understand or reflect European knowledge, culture or languages.”

Meta has faced record fines in the EU for breaching privacy laws, including a fine of 1.2 billion euros under the GDPR. Europe has been at the forefront of framing major legislation around the use of AI, with its AI Act coming into force earlier this year. Meta has delayed releasing products in the European market—including the Twitter alternative Threads.

“We hope European policymakers and regulators see what is at stake if there is no change of course. Europe can’t afford to miss out on the widespread benefits from responsibly built open AI technologies that will accelerate economic growth and unlock progress in scientific research,” the letter continues.

“For that we need harmonized, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans. Decisive action is needed to help unlock the creativity, ingenuity and entrepreneurialism that will ensure Europe’s prosperity, growth and technical leadership.”

Music Industry Funding Has Topped $360 Million During the Past Month Alone — What Are Investors Betting On? (Thu, 19 Sep 2024) https://www.digitalmusicnews.com/pro/weekly-funding-aug-sept-2024/

Tune.fm ($50 million), FanCircles ($1.44 million), and Miris ($26 million) are among several companies scoring funding rounds in the past 30 days (Pictured: DMN Pro Music Industry Funding Tracker)

Between mid-August and mid-September 2024 alone, music industry companies secured over $360 million in funding. But where’s the money going?

Answering that question (and gaining a better sense of the industry’s direction in the process) is easier than ever thanks to DMN Pro’s Music Industry Funding Tracker. The one-stop database compiles key information about every funding round from in and around today’s quick-moving music space.

And despite ongoing belt-tightening in the core industry, concerns about the broader economy, and the year-over-year funding decreases we’ve charted for multiple months in 2024, funding rounds are hardly ceasing.

All told, between August 16th and September 16th, our Music Industry Funding Tracker registered $361.92 million in raises — up 258.34% from the same period in 2023. (If not for TickPick’s quarter-billion-dollar August growth investment, 2024’s funding would have risen by 10.81% YoY.) While the increase itself is significant in light of funding trends, the companies that scored the capital are insightful as well.
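
Those percentages hold together arithmetically, assuming TickPick’s growth investment came in at exactly $250 million; a quick check:

```python
# Quick consistency check on the figures above, assuming TickPick's growth
# investment came in at exactly $250 million ("quarter-billion-dollar").
total_2024 = 361.92                                   # $M, mid-Aug through mid-Sep 2024
implied_2023 = total_2024 / (1 + 2.5834)              # +258.34% YoY -> ~$101.0M baseline
ex_tickpick = total_2024 - 250.0                      # ~$111.92M without TickPick
print(round(implied_2023, 2))                         # ~101.0
print(round(ex_tickpick / implied_2023 - 1, 4))       # ~0.1081 -> roughly +10.81% YoY
```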

Table of Contents

I. Introduction: The Industry’s Strong Funding Showing Between Mid-August and Mid-September 2024

II. Music Industry Funding’s Overlap — and Differences — Between Mid-August and Mid-September 2023 and 2024

Graph: Funding Takeaways At a Glance — Mid-August – Mid-September 2023 v. 2024

Graph: Music Industry Funding Rounds by Type, Mid-August – Mid-September 2023 v. 2024

III. Will Investors’ Superfan Bet Pay Off? A Look At the Companies Working to Capitalize on the Latest Industry Focus 

IV. A Funding Dry Spell for Artificial Intelligence in Music? AI’s Slow Mid-August – Mid-September and Other Interesting Takeaways from a Month of Industry Raises

Please note: this report is for DMN Pro subscribers only. Please do not share without prior authorization. Thank you!

 


Johnny Cash Estate, Reba McEntire, Tyga, Joe Walsh, Lainey Wilson, and Many More Urge ‘No Fakes Act’ Passage in New Campaign (Wed, 18 Sep 2024) https://www.digitalmusicnews.com/2024/09/18/no-fakes-act-campaign-september-2024/

Lainey Wilson, one of the many artists urging Congress to pass the No Fakes Act, kicking off the U.S. leg of her Country’s Cool Again Tour. Photo Credit: Erick Frost

It’s time for Congress to establish federal AI soundalike and lookalike protections with the No Fakes Act – at least according to the Human Artistry Campaign and hundreds of involved creators.

This latest push for the No Fakes Act’s passage kicked off with an advert from the Human Artistry Campaign, which counts as members the RIAA, A2IM, and several others. In said advert, which was printed in Politico, the likes of 21 Savage, Billy Idol, Cardi B, Elvis Costello, Mary J. Blige, Lee Greenwood, deadmau5, Common, Joe Walsh, Randy Travis, and many more expressed support for the legislation.

Besides arriving on the heels of California’s new SAG-AFTRA-backed AI laws, the current showing of No Fakes Act support has come about seven weeks after the bill’s formal introduction in Congress.

Short for the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act,” the proposed law dates back to October of 2023 and would, as we covered in detail, establish a federal right protecting one’s voice and likeness.

In the interest of brevity – and though it perhaps goes without saying given the above-mentioned support and the continued prevalence of AI media – the music industry has strongly backed the bill from the outset. That includes a related April appearance before Congress from Warner Music head Robert Kyncl.

However, like with California’s five just-implemented AI laws, the No Fakes Act has attracted criticism as well. ReCreate Coalition executive director Brandon Butler, for instance, is of the belief that it “threatens free expression online” and “would create more problems for creativity and society than it solves.”

Running with the point, proponents of federal AI soundalike and lookalike regulations haven’t had an entirely smooth ride thus far. While it’s been out of the media spotlight for eight months, the No AI Fraud Act was introduced at the top of 2024 and, by the RIAA’s own description, “builds on” the No Fakes Act framework.

But with the seemingly more robust bill still stuck in committee despite a carefully coordinated support campaign, the focus has evidently shifted back to the No Fakes Act.

In keeping with the renewed focus, the Songwriters Guild of America, Music Creators North America, and the Society of Composers & Lyricists also reached out to DMN today, albeit with a letter they’d sent to four representatives.

This letter doesn’t mention the No Fakes Act by name, but thanks the lawmakers for their support in the AI space and other areas. (All three organizations are part of the Human Artistry Campaign in any event.)

Bigger picture, it’ll be interesting to see whether the developments spur a No Fakes Act vote in Congress. Worth highlighting on this front is that rightsholders and broadcasters (for obvious reasons, the latter strongly oppose the unauthorized AI-powered replication of voices and likenesses) seemingly find themselves on the same page here.

Possessing considerable legislative sway, broadcaster associations yesterday expressed support for the bill in a different open letter, touting it as “a step in the right direction.”

AI Protection Bill Signed into Law by Governor Gavin Newsom — SAG-AFTRA Chalks Up Major Win (Wed, 18 Sep 2024) https://www.digitalmusicnews.com/2024/09/17/ai-protection-bill-signed-into-law-by-governor-gavin-newsom/

Photo Credit: State of California

California Governor Gavin Newsom has signed two bills that require the consent of actors and performers for the use of their digital likeness. SAG-AFTRA calls the protections a big step forward.

The two bills are designed to help actors and performers protect their digital likenesses in audio and visual productions, including those who are deceased. The legislation is designed to help ensure the responsible use of artificial intelligence (AI) and other digital media technologies in entertainment by giving workers more protections.

“We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers,” Governor Newsom said during the signing ceremony. “This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used.”

“It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom,” adds SAG-AFTRA President Fran Drescher. “They say as California goes, so goes the nation!”

AB 2602 requires contracts to specify the use of AI-generated digital replicas of a performer’s voice or likeness, and the performer must be professionally represented in negotiating the contract. This will help protect performers’ and actors’ careers, ensuring that AI is not used to replicate their voice or likeness without permission.

AB 1836 prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings, and more—without first obtaining the consent of those performers’ estates. It aims to curb unauthorized uses of digital replicas, encompassing any audiovisual work or sound recordings linked to performances delivered by artists when they were alive.

It’s worth noting that under AB 1836, Drake’s use of the AI Tupac lines in his recent diss track would have been unlawful. Drake did not have the Tupac Shakur estate’s permission for those AI-generated lines when he dropped them.

Spotify Says It Paid 1% of a $10 Million Streaming Fraud Scheme — So Where Did the Other 99% Come From? Questions Abound Amid Radio Silence from Competitors (Sat, 14 Sep 2024) https://www.digitalmusicnews.com/2024/09/13/music-streaming-fraud-case-platform-payments/

If Spotify paid less than 1% of the royalties behind an over $10 million music streaming fraud scheme, which platforms provided the rest of the sum? Photo Credit: Towfiqu Barbhuiya

Yesterday, another interesting piece of the $10 million+ royalties scheme puzzle fell into place, as Spotify claimed to have promptly halted the operation on its own platform. But which services allowed the AI tracks to stay live and rack up billions of plays?

That’s the multimillion-dollar question right now, after a North Carolina-based musician was indicted earlier in September on charges stemming from an alleged music streaming fraud scheme. Per the indictment, the defendant allegedly used bots to stream hundreds of thousands of tracks across the better part of a decade.

But the plot quickly thickened when it came to light that the CEO of Warner Music-backed and ADA-partnered AI music generator Boomy had allegedly provided the works at the scheme’s center. And subsequently, Spotify told DMN that it, today’s leading on-demand music platform, had only paid out $60,000 or so of the $10,000,000+ in royalties described in the indictment.

(Said indictment also cites emails penned by the defendant, who pointed therein to closer to $12 million in total royalties stemming from the alleged scheme.)
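
For a rough sense of scale, and assuming the billions of plays at issue translate to roughly two to three billion streams (a range the indictment doesn’t specify), the implied blended per-stream payout sits in familiar fraction-of-a-cent territory:

```python
# Rough scale check only: the indictment points to $10 million+ in royalties from
# "billions" of bot-driven plays. Assuming 2-3 billion streams (a range not given
# in the filing), the implied blended payout lands in the familiar
# fraction-of-a-cent-per-stream territory.
royalties = 10_000_000                      # dollars, the indictment's lower bound
for streams in (2_000_000_000, 3_000_000_000):
    print(f"{streams:,} streams -> ${royalties / streams:.4f} per stream")
# 2,000,000,000 streams -> $0.0050 per stream
# 3,000,000,000 streams -> $0.0033 per stream
```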

The disclosure raised multiple interesting points and questions. Chief among the former was the fact that Spotify seemingly caught the alleged fraud early on – even as the defendant nevertheless managed to score millions upon millions in allegedly illicit royalty payments from different services. Related communications between the MLC (which per the indictment uncovered the alleged fraud last year) and Spotify were seemingly non-existent.

And on the questions front, which platform(s) failed to flag the defendant’s tracks and paid the lion’s share of the alleged fraudulent royalties?

Running with the information provided by Spotify, DMN set out to answer the question. Yesterday, we sought comments from Apple Music, Amazon Music, and YouTube Music alike. Starkly contrasting their usual eagerness to discuss coverage, none of the services had responded at the time of this writing.

Furthermore, various factors are preventing a detailed analysis of where exactly the alleged fraudulent streams came from. Per the indictment, the defendant allegedly went out of his way to direct a small number of streams to each of the tracks, attributed to a multitude of made-up artist names, to avoid raising suspicion.

Among other things, this largely successful endeavor means that while Chartmetric registered the “artists” and “songs” at hand, it fetched consumption information for only a few of them, and there isn’t an abundance of hard data available about the since-removed tracks. Separately, it’s possible that at least some of the uploads would have failed to generate recording royalties on Spotify under the platform’s controversial 1,000-stream-minimum policy.

Shifting the focus to what we do know, the “songs” mentioned in the indictment didn’t appear to be streaming on Apple Music, Amazon Music, YouTube Music, or Deezer at the time of writing. But for those interested in getting a taste of the generic AI outputs, SoundCloud was still hosting uploads from the likes of Calypso Xored, one of the artist names mentioned in the indictment.

(In another aside, even the mass-produced AI tracks were seemingly unable to avoid winding up on piracy platforms, one of which has a Saint Pierre and Miquelon domain ending and was the target of multiple BPI DMCA notices to Google.)

As we continue to seek details about which services hosted and paid royalties on the uploads at the heart of the indictment, it’s worth reiterating the apparent scope of the wider AI problem on streaming services.

And those streaming services include Spotify despite the comparative quickness with which it looks to have flagged the alleged royalties scheme here. Back in April of 2023, we reported on the findings of a listener who said he’d encountered the same “song,” albeit with different artist names and titles, about 50 times via Spotify Radio.

Roughly 17 months after that individual added 49 of the works to a Spotify playlist, 40 are still live – with similar availability through competing services.

This isn’t to say the remaining tracks are benefitting from streaming fraud, but they’ve racked up over 553,600 cumulative Spotify plays in any event. With several other machine-generated audio snippets on each of the profiles (which also have uploads across non-Spotify platforms), this operation, a tiny component of an increasingly massive overall AI library, seems to be raking in sizable royalty payments relative to the responsible party’s effort.

In the Wake of a $10 Million Royalty-Heist Indictment, Is Boomy’s ADA/Warner Music Deal Still Alive? Here’s What We Know (Wed, 11 Sep 2024) https://www.digitalmusicnews.com/2024/09/10/boomy-ada-deal-update/

Germany-based Katirha, one of the Boomy users who’s distributed music under the AI-powered platform’s ADA deal. Photo Credit: Boomy

Last year, Boomy inked a high-profile distribution tie-up with Warner Music’s ADA. Is the deal still on after the AI music generator’s CEO was reportedly named as a co-conspirator in a massive fake streams scheme?

DMN set out to answer that question in the wake of the indictment of one Michael Smith, a North Carolina-based musician who’s been accused of wire fraud and more for allegedly masterminding the mentioned royalties scheme. In short, according to the indictment, Smith throughout the better part of a decade harnessed AI to pump out hundreds of thousands of tracks.

After uploading these tracks to major on-demand platforms, the defendant then allegedly scored over $10 million in royalties by using bots to stream the songs, per the indictment. Smith is the lone defendant at present, but he isn’t the only individual who had a hand in the alleged scheme.

To be sure, the indictment describes a 2018 deal between Smith and “the Chief Executive Officer of an AI music company,” whose business allegedly began supplying the computer-generated works at the heart of the purported scheme.

Prosecutors didn’t mention Boomy CEO Alex Mitchell by name, but his involvement as one of the co-conspirators highlighted in the indictment subsequently emerged. And while the fact is valuable, it wasn’t particularly difficult to uncover.

That’s because Mitchell, who seemingly did more than allegedly provide AI creations to Smith, is named as a co-author on thousands of tracks with the defendant in the MLC database. While hindsight is 20/20, registering north of 200,000 of these works under Smith’s name (with 5,119 attributed to Mitchell) to collect compositional royalties wasn’t exactly subtle.

Furthermore, at least as things are laid out in the indictment, the curiously brazen move contrasted other elements of the calculated alleged scheme. For instance, the defendant was allegedly careful to direct a small number of streams to each track so as to avoid raising suspicion.

Shifting the focus to where the cards will fall in the wake of the alleged scheme, there are obvious questions about the future of Boomy and especially its ADA distribution deal.

Finalized after Spotify booted thousands of Boomy tracks over alleged fake streams – and temporarily blocked the AI music company’s releases before implementing a distributor fine for streaming manipulation – the underlying agreement was formally announced in November of 2023.

But it’s unclear where things go from here. At the time of writing, neither Mitchell nor ADA/Warner Music (which previously invested in the AI music platform) had responded to a request for comment.

The radio silence from the involved parties, on top of a lack of news on the original affidavit, means we’ll have to rely on existing evidence. And as things stand, this evidence points to business as usual for Boomy and ADA.

Once again at the time of writing, the AI music generator’s website was still up and running, with a continued emphasis on ADA-partnered Boomy’s distribution capabilities. However, the timing of some recent changes at Boomy might not be coincidental.

Per an FAQ response that was updated 12 days back, only Boomy paid accounts – Pro and Creator, that is – “are enabled for distribution.” Free accounts, on the other hand, “must upgrade in order to release their songs to streaming platforms.”

The adjusted policy marks a departure from late April (and possibly a more recent date), when, per a screenshot captured by the Wayback Machine, free users could “release 3 singles or 3 albums per month, with up to 12 tracks on an album.”

SM Entertainment Debuts AI Artist ‘Naevis’ — But Reactions Are Mixed (Tue, 10 Sep 2024) https://www.digitalmusicnews.com/2024/09/10/sm-entertainment-ai-artist-naevis-mixed-reactions/

Photo Credit: SM Entertainment

SM Entertainment debuts its first AI artist, Naevis, as part of K-pop group aespa’s storyline. But reactions from fans are mixed.

On the heels of its partnership with LG Uplus, SM Entertainment has officially debuted its first virtual artist — the AI character Naevis — as an extension of K-pop group aespa’s ongoing storyline. But fans online aren’t sure how to react.

Naevis had been teased at several aespa events, but officially launched this week with a short 15-second teaser and then a full music video for her debut track, “Done,” which also features aespa. The video shows Naevis dancing in a city at night as she switches back and forth between a cartoon version of herself and her more realistic form. She is accompanied by members of aespa as her backup dancers.

Although many fans were impressed by the CGI employed, which appears to be edited over footage of a real person dancing, others found the combination unsettling. “To me, it looks like her body is a human, but the face looks like a mannequin,” said one Korean fan.

Others questioned the need for a virtual idol in the first place, when they would rather the song be given to aespa. Some pointed out that Naevis’ voice sounds like a combination of members of aespa, further questioning the need for Naevis’ presence. The quality of the song and the voice itself have also been criticized.

“The 3D movements feel super natural — but is it like AI synthesized just the face onto a real person?” asked one fan. “Did they pour a ton of money into this? The voice is so-so,” said another. “I thought it was a duet between [aespa members] Winter and NingNing. The voices sound like they’re alternating between the two.”

In the aespa universe, Naevis is responsible for helping the group (and their computer-generated counterparts) to fight the fictional supervillain, Black Mamba. SM Entertainment plans to expand Naevis’ IP via AI voice technology and generative AI to bring her into music, games, webtoons, merchandise, and brand collaborations.

Major Music Publishers Fire Back Against Anthropic Dismissal Motion in High-Stakes Infringement Dispute (Tue, 10 Sep 2024) https://www.digitalmusicnews.com/2024/09/09/music-publishers-anthropic-lawsuit-dismissal-motion/

Photo Credit: Igor Omilaev

A little over a month out from the one-year anniversary of its start, the copyright suit levied by major music publishers against Anthropic is heating up amid the AI giant’s renewed push for dismissal.

That push, music publisher plaintiffs including Concord and UMPG emphasized in their latest filing, actually marks the second attempt by Anthropic to dismiss the high-stakes case. As many know by now, the courtroom confrontation centers on alleged infringement stemming from the training process behind Anthropic’s Claude product.

The filing companies have pointed to the alleged presence of lyrics in the chatbot’s outputs – and claimed, among other things, that the alleged “massive copyright infringement” helps Anthropic to generate revenue and attract users.

Multiple twists and one time-consuming venue change later, Anthropic last month (again) fired back against the publishers’ push for a preliminary injunction blocking the continued use of lyrics in outputs and in future training.

And now, the music publishers themselves are taking the opportunity to refute Anthropic’s latest dismissal arguments as well as the appropriate motion.

Predictably, given the ultra-important case’s plodding nature, this 33-page refutation doesn’t break too much new ground. Instead, the publishers drove home that Anthropic’s dismissal motion is untimely in part because it arrived before a formal answer to the suit.

Running with the latter idea, the plaintiffs indicated that Anthropic had “deliberately contravened the Federal Rules” by ignoring purported warnings about the timing of its answer (or the lack thereof).

In short, the Amazon-backed AI mainstay is working “to gain a litigation advantage by prioritizing resolution” of the dismissal motion without first answering the complaint.

“When Anthropic answers the Complaint,” the publishers spelled out, “it will have to admit facts that it so far refuses to acknowledge directly, including that Anthropic copied Publishers’ lyrics when training Claude and made no effort to remove those lyrics from its training dataset despite its ability to do so.”

And from there, the publishers took aim at Anthropic’s specific dismissal arguments, which seek to toss every claim except the one involving direct infringement.

“Publishers are not required to name and date every instance of direct infringement to state a claim for secondary infringement,” the plaintiffs reiterated in part. “Publishers plausibly allege that Anthropic’s AI models respond to queries from users seeking copyrighted lyrics—including queries from Publishers’ investigators—by delivering those lyrics as requested.

“That allegation, without naming specific infringing users, is sufficient to set forth a valid claim for secondary copyright liability,” they proceeded.

Furthermore, dismissal would be “especially unwarranted” because the plaintiffs have yet to “discover from Anthropic what other third parties have requested from the Claude chatbot or APIs,” per the precedent-heavy legal text.

Co-Conspirator Details Emerge in Massive $10 Million Streaming Fraud Indictment — Including the CEO of a Major AI Music Company https://www.digitalmusicnews.com/2024/09/05/music-streaming-fraud-case-co-conspirator-details/ Fri, 06 Sep 2024 01:00:59 +0000 https://www.digitalmusicnews.com/?p=300594

New details have come to light about the co-conspirators in an alleged music streaming fraud scheme. Photo Credit: Sora Shimazaki

Yesterday, a North Carolina-based musician was indicted for allegedly facilitating a massive streaming fraud scheme for the better part of a decade. Now, new details – among them the name of the prominent AI music company CEO with whom the defendant allegedly conspired – are coming to light. Also emerging: the MLC’s Credits Database attributes an astounding 202,000 AI-generated works to the defendant.

On Wednesday, DMN broke the news that North Carolina musician Michael Smith had been slapped with wire fraud and money laundering conspiracy charges by the U.S. Attorney’s Office for the Southern District of New York. According to the indictment, Smith had raked in more than $10 million since 2017 by spinning up thousands of AI-generated tracks and using bots to generate fake streams and royalty payments.

Now, it turns out that Smith wasn’t acting alone. According to prosecutors, Smith could serve up to 60 years behind bars if convicted on all counts. The affidavit also mentions details on co-conspirators without naming names — though only Smith has been formally charged with a crime so far.

So who else was in on the $10 million heist?

Importantly, while the sole defendant is alleged to have organized the scheme, the indictment doesn’t claim that he acted alone. Rather, the Cornelius, North Carolina-based producer allegedly coordinated with “the Chief Executive Officer of an AI music company” and “a music promoter” to create the hundreds of thousands of songs that made the scheme possible.

We contacted both the MLC and the US Attorney’s Office to determine those identities but couldn’t get any answers. The Mechanical Licensing Collective (MLC), which is said to have halted royalty payments to the defendant in March of 2023 due to fraud suspicions, told us it was “unable to comment beyond” the official statement put out by CEO Kris Ahrend.

The MLC isn’t talking, though the fingerprints of at least two co-conspirators can be found in multiple song databases — including the MLC’s own Credits Database.

Hardly opting for a subtle approach, the defendant, having evidently raked in recording and compositional royalties alike, is registered in the MLC database as a writer on a staggering number of works.

Specifically, an astonishing 201,944 “creations” in the MLC database are attributed to Michael Anthony Smith. Digging into the attributions alongside DMN, Billboard unearthed this afternoon the names of Indiehitmaker founder Bram Bessoff and Alexander Mitchell (the founder and CEO of Warner Music-partnered AI music generator Boomy) as co-writers with Smith on thousands of works.

Admittedly, the scope of these joint credits seems relatively small; bearing in mind the almost 202,000 works tied to Smith, Bessoff is associated with 5,118 works in the MLC database, against 5,119 for Mitchell, search results show.

It’s unclear whether the more than 200,000 AI-generated tracks are included in the 20.16 million songs that Boomy says it has created thus far (per its website). But in any event, just at the top level, the facts raise interesting questions about the unprecedented impact of the AI music explosion as well as the revenue sources behind it.

Bessoff has opted against publicly addressing the information, reportedly due to “his cooperation in the ongoing investigation.” However, Mitchell said in a statement that he was “shocked” by the indictment and insisted that Smith had “consistently represented himself as legitimate.”

That doesn’t quite sound right given some smoking-gun emails and communications detailed by federal investigators, though this investigation is just getting started.

Moving forward, it’ll be worth keeping an eye out for additional charges, possible effects on the AI music space, and, perhaps most notably, the defense of Smith.

There are, of course, serious ethical issues with using bots to rack up streams on artificial intelligence tracks. Nevertheless, time will tell if the defendant (possibly taking a page from the book of the individual charged in Denmark for alleged streaming fraud) and his counsel attempt to paint the relevant laws as less clear-cut than those covering more traditional crimes.

Furthermore, the case may also be made that this isn’t entirely a criminal matter, given that Smith’s alleged conduct centered on violating a number of contractual terms with platforms like Spotify.

Also worth asking: how did Smith and the gang rack up $10 million in fake plays without anyone — including the MLC and Spotify, among others — even noticing?

More questions than answers, indeed — stay tuned as more foul-smelling solids hit the rotating fan blades.

US, UK, EU, Sign ‘First-Ever International Legally Binding Treaty’ for AI Systems https://www.digitalmusicnews.com/2024/09/05/us-uk-eu-international-treaty-ai/ Thu, 05 Sep 2024 20:17:12 +0000 https://www.digitalmusicnews.com/?p=300566 international AI treaty

Photo Credit: Council of Europe

The US, UK, and EU have signed the ‘first-ever international legally binding treaty’ for AI systems to abide by human rights and the rule of law.

The Council of Europe’s Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signing today during a conference of the Council of Europe Ministers of Justice in Vilnius, Lithuania.

The convention is the first international legally binding treaty relating to AI, which has been in the works for the past few years and was formally adopted in May, following discussions between 57 countries. The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union.

The convention focuses primarily on the protection of human rights of those affected by AI systems, and is not to be confused with the EU AI Act, which also took effect last month. The EU AI Act formulates comprehensive regulations on the development, deployment, and use of AI systems within the EU market. Founded in 1949, the Council of Europe is an international organization distinct from the EU, with a mandate to safeguard human rights. All 27 EU states are members, with 46 countries in total making up its membership.

“This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law,” said Britain’s Justice Minister, Shabana Mahmood.

“We must ensure that the rise of AI upholds our standards, rather than undermining them. The Framework Convention is designed to ensure just that,” said Council of Europe Secretary General Marija Pejčinović Burić.

“It is a strong and balanced text — the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives. The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”

The Framework Convention was adopted by the Council of Europe Committee of Ministers on May 17, 2024. The 46 Council of Europe member states, the European Union, and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay) negotiated the treaty. Contributing as observers were representatives of the private sector, civil society, and academia.

The treaty will enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Countries worldwide will be eligible to join and commit to complying with its provisions.
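For readers who want to see how that timing rule plays out in practice, here is a minimal sketch of the date arithmetic it describes; the ratification date used below is purely hypothetical, chosen only for illustration.

```python
# Illustrative date arithmetic for the treaty's entry-into-force rule.
# Assumption: the "fifth qualifying ratification" date below is hypothetical.
from datetime import date

def entry_into_force(fifth_qualifying_ratification: date) -> date:
    """First day of the month following a three-month period after the fifth ratification."""
    # Step 1: move forward three calendar months.
    month = fifth_qualifying_ratification.month + 3
    year = fifth_qualifying_ratification.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Step 2: roll forward to the first day of the following month.
    month += 1
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, 1)

# Example: a hypothetical fifth ratification on November 15, 2025
print(entry_into_force(date(2025, 11, 15)))  # -> 2026-03-01
```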

YouTube Unveils Voice-Detection Tool Targeting AI-Powered ‘Synthetic Singing,’ Eyes 2025 Launch https://www.digitalmusicnews.com/2024/09/05/youtube-ai-vocals-detection-technology/ Thu, 05 Sep 2024 18:28:55 +0000 https://www.digitalmusicnews.com/?p=300527 youtube ai

YouTube is preparing to release a tool with which rightsholders can detect and manage videos containing unauthorized AI vocals. Photo Credit: Javier Miranda

YouTube is officially set to launch “synthetic-singing identification technology” that it says will allow rightsholders to detect and manage uploads containing unauthorized soundalike vocals.

The video-sharing giant, having reportedly been in active AI discussions with the major labels for some time, disclosed the voice-detection tool in a brief update today. Penned by VP of product management for creator products Amjad Hanif, this update, aptly entitled “New tools to protect creators and artists,” has arrived on the heels of multiple other music-focused AI moves for YouTube.

July saw the Dream Track developer roll out an updated eraser feature that uses AI to remove copyrighted tracks without compromising videos’ audio quality, for instance. The same month also brought revamped YouTube privacy guidelines under which “uniquely identifiable” first parties can seek the removal of certain unauthorized AI soundalike and/or lookalike media.

And now, a vocals-detection capability appears poised to take YouTube’s AI-protection offerings a step further.

Expected to debut with a pilot “early next year,” the tech will be available directly through Content ID, according to YouTube. While concrete details are few and far between at present, Hanif did spell out that eligible partners will be able “to automatically detect and manage AI-generated content…that simulates their singing voices.”

Especially because one needn’t search far on YouTube to find presumably unauthorized soundalike projects – referring both to original artificial intelligence tracks such as “Heart on My Sleeve” and an abundance of AI-powered covers – it’ll be interesting to see the tool’s effectiveness and ultimate impact.

Meanwhile, though it doesn’t really need saying, non-audio AI products (and particularly their video-generation counterparts) are evolving by leaps and bounds. Given the rapid improvements, it seems to be a matter of when, not if, eerily realistic AI lookalike media will explode in popularity.

Enter the second piece of AI-related tech announced today by YouTube, which says it’s developing a separate tool designed to help musicians and others “detect and manage AI-generated content showing their faces.” This option, Hanif noted, will complement the previously highlighted privacy-policy updates.

Looking ahead to 2025 and beyond, these increasingly robust AI-detection abilities could provide a means of controlling and monetizing artificial intelligence music.

Adjacent to efforts to get a hold on unapproved AI works uploaded to YouTube, Instagram, TikTok, Spotify, and different high-profile platforms in the near term, the likes of Sony Music (which is developing a little-discussed proprietary AI product) and Universal Music (now partnered with content-attribution startup ProRata.ai) are also eyeing bigger-picture solutions.

But as things stand, the unprecedented technology is still at the center of high-stakes infringement litigation – with different effects yet stemming from the massive stream of non-infringing AI works that are reportedly flooding Spotify and more.

Musician Indicted Over Years-Long Streaming Fraud Scheme After Allegedly Making Over $10 Million on AI Tracks https://www.digitalmusicnews.com/2024/09/04/music-streaming-fraud-indictment-september-2024/ Wed, 04 Sep 2024 23:47:20 +0000 https://www.digitalmusicnews.com/?p=300457 music streaming fraud

A North Carolina-based musician has been indicted in connection with an alleged music streaming fraud scheme. Photo Credit: Igor Omilaev

A North Carolina-based musician is facing criminal charges for allegedly racking up millions in royalties on AI-generated tracks as part of a massive music streaming fraud scheme.

The U.S. Attorney’s Office for the Southern District of New York today announced the indictment as the “first criminal case involving artificially inflated music streaming.” Despite AI music generators’ relatively recent entry into the commercial mainstream, the 18-page indictment indicates that the alleged music streaming fraud scheme kicked off way back in 2017.

Said alleged scheme, extending to Spotify, Amazon Music, Apple Music, and YouTube Music alike, even carried on into 2024, delivering a cumulative $10 million or so in royalty payments to which the defendant, one Michael Smith, “was not entitled,” according to the indictment.

As described in the same legal text, Smith took the alleged streaming fraud into high gear by developing a voluminous library of tracks – a move powered by a 2018 tie-up with the head “of an AI music company.”

With this unnamed AI company pumping out (in exchange for a cut of the alleged scheme’s revenue) thousands upon thousands of tracks, Smith allegedly spearheaded a “labor-intensive” effort to register “bot accounts” on the above-mentioned platforms.

Allegedly coordinating with co-conspirators based in the States and abroad, the defendant for obvious reasons allegedly zeroed in on multi-account Family plans and spread streams out across a multitude of AI songs so as to avoid raising suspicion. (Incidentally, Spotify no longer pays recording royalties on uploads with fewer than 1,000 annual streams, a development that may have shifted the math a bit.)

Between 2020 and 2023, the defendant allegedly “transferred $1.3 million in fraudulently obtained royalties to a bank account he controlled at a U.S.-based financial institution.”

From there, the proceeds were allegedly shifted to a Manhattan-based provider of corporate debit cards, which was allegedly misled into believing that made-up names (each tied to an email address and streaming account) were employees of a company owned by Smith.

Moving beyond the many other details associated with the execution of the convoluted alleged scheme, the debit cards were allegedly used to pay for the bots’ streaming subscriptions.

And in 2017, Smith in an email allegedly relayed “that he had 52 cloud services accounts, and each of those accounts had 20 Bot Accounts on the Streaming Platforms, for a total of 1,040 Bot Accounts.

“He further wrote that each Bot Account could stream approximately 636 songs per day,” the indictment proceeds, “and so in total SMITH could generate approximately 661,440 streams per day. SMITH estimated that the average royalty per stream was half of one cent, which would have meant daily royalties of $3,307.20, monthly royalties of $99,216, and annual royalties of $1,207,128.”
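For those checking the indictment’s math, a minimal sketch reproduces the cited totals; the account counts and the half-cent per-stream rate are the indictment’s own estimates, not independently verified figures.

```python
# Reproducing the arithmetic alleged in the indictment (figures as alleged, not verified).
cloud_accounts = 52
bots_per_account = 20
streams_per_bot_per_day = 636
royalty_per_stream = 0.005  # "half of one cent," per the indictment's estimate

bot_accounts = cloud_accounts * bots_per_account          # 1,040 bot accounts
daily_streams = bot_accounts * streams_per_bot_per_day    # 661,440 streams per day
daily_royalties = daily_streams * royalty_per_stream      # $3,307.20 per day
monthly_royalties = daily_royalties * 30                  # $99,216.00 per month
annual_royalties = daily_royalties * 365                  # $1,207,128.00 per year

print(f"{bot_accounts:,} bot accounts, {daily_streams:,} daily streams")
print(f"${daily_royalties:,.2f}/day | ${monthly_royalties:,.2f}/month | ${annual_royalties:,.2f}/year")
```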

While the indictment doesn’t dive too far into rights-related specifics, Smith allegedly lied to his distributor for multiple years when confronted about streaming and royalty abnormalities. (Besides the aforementioned annual-stream minimum at Spotify, the platform is now fining distributors themselves for fake plays.)

Though it’s hardly a new phenomenon for AI music and even non-music uploads (like white noise, which Spotify and others have also cracked down on at the behest of Universal Music Group) to generate recording royalties, the defendant here took things a step further by allegedly raking in compositional royalties to boot.

As described in the indictment, the Mechanical Licensing Collective caught wind of that decidedly bold maneuver and then “halted royalty payments to” the defendant in March or April of 2023.

“Today’s DOJ indictment shines a light on the serious problem of streaming fraud for the music industry,” MLC CEO Kris Ahrend added in a statement emailed to DMN. “As the DOJ recognized, The MLC identified and challenged the alleged misconduct, and withheld payment of the associated mechanical royalties, which further validates the importance of The MLC’s ongoing efforts to combat fraud and protect songwriters.”

Of course, it’s not illegal to create and distribute AI music – provided that the works aren’t infringing on protected media. However, bearing in mind the bot-powered fake-stream allegations and more, the defendant is being accused of wire fraud and money laundering conspiracy.

SM Entertainment Taps LG to Supercharge Its Emerging AI Artist, Naevis https://www.digitalmusicnews.com/2024/09/02/sm-entertainment-taps-lg-to-supercharge-its-ai-artist-naevis/ Mon, 02 Sep 2024 19:07:28 +0000 https://www.digitalmusicnews.com/?p=300107 SM Entertainment Naevis

Photo Credit: Jung Soo-heon (LG Uplus Consumer Division VP) & Tak Young-jun (SM co-CEO)

SM Entertainment forms a partnership with LG Uplus to collaborate on AI-driven content for its virtual artist, Naevis.

SM Entertainment has announced a strategic partnership with LG Uplus, one of the largest wireless telecom carriers in South Korea, to collaborate on AI-driven content creation and joint branding efforts for its virtual artist, Naevis.

The partnership involves using LG Uplus’ generative AI technology, ixi-GEN, to create a variety of content for Naevis, including music videos, short visual clips, and concept images. Originally introduced in aespa’s digital universe, Kwangya, Naevis is due to make her solo debut this month.

To ramp up for the debut, LG Uplus has been airing a video, “The Birth of Naevis,” on its IPTV service, U+TV’s Dolby-dedicated channel, since last week. The video provides viewers with an immersive experience that highlights the story of Naevis.

“It is a milestone in achieving future-oriented content and technological innovation through creative synergy,” said Tak Young-jun, co-CEO of SM Entertainment, in a press release over the weekend.

“Our digital innovation goal is to provide new and enjoyable experiences for our customers. Collaborating with SM, a leading K-pop agency, will offer new digital experiences not only to our domestic customers but also to global K-pop fans,” added Jung Soo-heon, Vice President of LG Uplus’ Consumer Division.

Naevis, the first fully CGI (and AI) character in aespa’s universe, is responsible for helping the group and their computer-generated counterparts fight the fictional supervillain, Black Mamba. She is set to debut as a solo venture on September 10, and will feature “hyper-realistic visual effects,” as well as adapting to various forms, including 2D and 3D, to tailor to the characteristics of different platforms.

SM plans to expand Naevis’ intellectual property using AI voice technology and generative AI into music, webtoons, games, merchandise, and brand collaborations.

Who Says Amazon and Google Get to Have All the AI Fun? Apple Reportedly Prepares OpenAI Investment in Multibillion-Dollar Round https://www.digitalmusicnews.com/2024/08/29/apple-openai-investment-report/ Fri, 30 Aug 2024 00:31:35 +0000 https://www.digitalmusicnews.com/?p=299846 apple openai investment

Apple is reportedly preparing an OpenAI investment as part of a multibillion-dollar raise. Photo Credit: Solen Feyissa

Who says Google and Amazon get to have all the AI-investment fun? In another twist, Apple is reportedly poised to back ChatGPT developer OpenAI.

That possible investment, which would be part of a new raise valuing the AI business at over $100 billion, came to light in a Wall Street Journal report. At the time of this writing, the Apple Music developer didn’t appear to have addressed the matter publicly.

However, per the mentioned report, the “unusual” startup investment would see Apple (and potentially Nvidia as well) join existing multibillion-dollar backer Microsoft aboard the OpenAI train.

Against the backdrop of an accelerating battle for AI supremacy, reports previously suggested that Apple had even considered an artificial intelligence alliance with rival Meta.

But with the iPhone developer having tapped OpenAI in June as its first “Apple Intelligence” partner, things are seemingly trending in a different direction. Per the Journal, it’s unclear how much Apple may contribute to the Thrive Capital-led OpenAI round, but the raise is expected to total “several billion dollars.”

And in the bigger picture, the move would officially add Apple to a growing list of high-profile companies (and music streaming service operators) with stakes in artificial intelligence giants.

Perhaps most conspicuously, this overlap between leaders in on-demand streaming – which is, of course, an integral component of the contemporary music landscape – and AI mainstays refers to Amazon’s approximately $4 billion interest in Anthropic.

Facing a particularly important copyright infringement lawsuit from music publishers, the latter entity remains adamant that training its models on protected compositions sans authorization constitutes fair use.

Meanwhile, Microsoft and OpenAI are grappling with multiple training-related actions from a long list of professionals and businesses. And Daniel Ek-backed Air Street Capital, which specifically strives to support “AI-first companies,” has quietly provided funding to startups including “Hollywood-grade visual AI” platform Odyssey.

That’s not even accounting for the varied list of backers behind Suno and Udio (which are also the target of industry infringement claims, this time from the major labels) – not to mention the far-reaching AI activities of the YouTube Music owner Google/Alphabet, Facebook parent Meta, and TikTok developer ByteDance.

(Air Street Capital was an early investor in AI music startup Jukedeck, which ByteDance scooped up in 2019.)

With Apple’s possibly imminent OpenAI stake, the curious situation will feature more ties than ever between key streaming platform operators on the one hand and the very AI companies that are allegedly compromising the building blocks of creativity on the other.

Needless to say, it’ll be worth closely monitoring the trend as well as the byproducts thereof moving forward, especially because the challenges posed by generative AI for rightsholders and creatives appear exceedingly unlikely to abate anytime soon.

Anthropic Fires Back Against Music Publisher Injunction Demand, Says Publishers Haven’t Suffered ‘Irreparable Harm’ From ‘Fair Use’ LLM Training https://www.digitalmusicnews.com/2024/08/27/anthropic-fires-back-against-music-publisher-injunction-demand/ Wed, 28 Aug 2024 03:11:37 +0000 https://www.digitalmusicnews.com/?p=299606 Anthropic Music Publishers Injunction

Photo Credit: Anthropic

Anthropic claims that using copyrighted lyrics to train AI is fair use, and that publishers haven’t suffered ‘irreparable harm’ from its doing so.

The battle between AI company Anthropic and a group of music publishers led by Concord Music Group continues to heat up. Concord alleges Anthropic is committing copyright infringement by training its Claude LLM (large language model) on lyrics owned by the publishers. Now, Anthropic is firing back, doubling down on its assertion that using copyrighted lyrics for training constitutes fair use — and arguing that there should be no injunction against it, as no “irreparable harm” has befallen the publishers as a result.

Previously, Concord and the music publishers requested a preliminary injunction against Anthropic, alleging copyright infringement by Anthropic’s LLM model, Claude. Anthropic argues that the publishers have not demonstrated “irreparable harm” in court, a necessary condition for an injunction to be granted. Further, the company claims an injunction would seriously cripple the development of its AI model.

Anthropic contends that Concord’s claims are either speculative or addressable through monetary damages — to be determined as the case moves forward — rather than through injunctive relief.

Throughout the filing, Anthropic doubles down on its position that using copyrighted works for AI training constitutes fair use because the works are transformed for a different purpose. The company also emphasizes the public interest in allowing AI development, and the potential harm an injunction could cause to “innovation.”

To that end, Anthropic has continued its effort to paint itself as a more ethical and transparent AI company, publicly releasing the system prompts that make the Claude LLM function.

Alex Albert, head of Anthropic’s developer relations, said in a post on X (formerly Twitter) that Anthropic plans to disclose information like this regularly as it updates and fine-tunes its system prompts.

Meanwhile, a group of authors hit Anthropic with a class action lawsuit last week, alleging the misuse of their work to train its AI model.

Spotify ‘Getting Flooded’ With AI Covers and Fake Bands — And a Lot of This Looks Legal https://www.digitalmusicnews.com/2024/08/27/is-spotify-getting-flooded-with-ai-covers-fake-bands/ Wed, 28 Aug 2024 02:56:55 +0000 https://www.digitalmusicnews.com/?p=299603 Spotify AI covers and fake bands

Photo Credit: Spotify

AI-generated covers of famous songs from fake bands are flooding Spotify. IP owner approval isn’t necessarily needed for AI-generated music and is granted automatically for most cover songs—leading to hundreds of fake bands on Spotify with AI covers.

Covers of popular songs are now being added to existing playlists featuring songs from real artists — with fake artist names and cover art to make them seem more legit. With playlist names like “Country Time Summer Vibes” and artist titles like Jet Fuel & Ginger Ales, Savage Sons, Gutter Grinders, and Grunge Growlers, it’s enough to slip under the radar while minting serious per-stream cash.

So how is this all working? Let’s break this down.

A number of music AI shops now appear to be using generative AI models (like Suno and Udio) with a license for commercial use to create cover versions of copyrighted songs. Longtime Spotify users have encountered lots of covers in the past, with many almost unlistenable. Now, that process is ‘on steroids’ thanks to AI, with the quality quotient also potentially improving.

A lot of this is possible because of how music licensing works in the US and many other countries — with compulsory licenses allowing covers as long as specific royalties are accounted for and paid out. Other rules and stipulations apply, but the short version is that if the songwriter and publisher royalties are paid on the AI generated cover work, then technically no law has been broken.

But there’s also an illegal version to this game plan.

With the explosion of AI-generated covers, it’s clear Spotify lacks the resources or a dedicated department to handle these kinds of use cases. Bad actors can obfuscate their identity by using anonymous artist names or LLCs, allowing them to claim the mechanical and public performance rights for their AI-generated covers while attempting to bypass the traditional licensing requirements and fees owed to the original songwriters.

A bad actor creating AI-generated covers can rack up millions of plays and substantial royalties through unauthorized cover versions. Original songwriters must work with Spotify to have the infringing content removed if royalties aren’t being remitted or recordings are being infringed. But this is like playing whack-a-mole.

It’s also unclear how these songwriters can take legal action if the bad actor’s identity remains hidden. Songwriters must appeal to the DSP to delist infringing content until courts can rule on the legitimacy of the allegations or an agreement between the two parties.

Bad actors may claim that since they have licensed their generative AI model for commercial purposes, they are entitled to bypass licensing requirements because they own the rights to whatever the genAI model creates via its license. It’s a potentially unlawful attempt to exploit a legal grey area, as AI model licensing typically does not grant the rights to use a copyrighted musical work (indeed, that last point is now being litigated in contentious lawsuits like Concord vs. Anthropic, which DMN is covering closely).

Now, the use of genAI to create covers is moving into unprecedented territory.

Case in point: here’s a crop of cover acts that debuted on Spotify in 2023. They’ve all blocked comments on the YouTube versions of their music and are focused on racking up plays while avoiding the spotlight.

  • Gutter Grinders — (“Fast Car,” “1979,” “Listen to the Music,” “Go Your Own Way”)
  • Savage Sons — (“Don’t Stop Believin’,” “Every Breath You Take,” “Mississippi Queen,” “Old Time Rock & Roll”)
  • Jet Fuel & Ginger Ales — (“Under the Bridge,” “Linger,” “Carry on My Wayward Son,” “Sharp Dressed Man”)
  • Grunge Growlers — (“Creep,” “Fly Away,” “Smells Like Teen Spirit,” “Like A Stone”)

And that’s just scratching the surface — time for a rules change? Or better yet, a major label-led demand to clamp down on copycats?

More as this develops.

California Senate Passes Bill to Limit AI Replicas https://www.digitalmusicnews.com/2024/08/27/california-senate-passes-bill-to-limit-ai-replicas/ Wed, 28 Aug 2024 00:00:23 +0000 https://www.digitalmusicnews.com/?p=299609 California limit AI replicas

Photo Credit: Tim Wildsmith

The California Senate has passed a bill from actors union SAG-AFTRA to protect performers from unauthorized AI replicas, soon heading to the governor’s desk.

A bill to protect performers from unauthorized AI replicas received approval on Tuesday from the California Senate. It will soon head to the governor’s desk for a signature. The actors union, SAG-AFTRA, has made the bill one of its top legislative priorities for 2024.

The bill, AB 2602, would require explicit consent for the use or creation of a “digital replica” of a performer. Its language mirrors that in the SAG-AFTRA contract that ended last year’s four-month strike against the film and television studios. The bill extends those protections to other types of performances, like video games, audiobooks, and commercials, and would also encompass non-union work.

“We’re looking to make sure people who aren’t currently covered by one of our agreements are protected,” said Jeffrey Bennett, general counsel of SAG-AFTRA. “We don’t want to see the next generation of performers lose all rights to voice and likeness because they don’t have any leverage or ability to effectuate fair terms.”

Initially, the Motion Picture Association, a lobbyist on behalf of the major studios, opposed the bill, arguing it would interfere with common post-production techniques. But legislators have since changed some of the bill’s language to address those concerns, and the Motion Picture Association “ultimately took a neutral position.”

The State Assembly first approved the bill back in May with a unanimous vote of 62-0, before passing the Senate at a vote of 36-1. Since it was amended in the Senate, the bill will first return to the Assembly for concurrence before it finally makes its way to the desk of Gov. Gavin Newsom.

“Voice and likeness rights, in an age of digital replication, must have strong guardrails around licensing to protect from abuse. This bill provides those guardrails,” said Duncan Crabtree-Ireland, executive director of SAG-AFTRA, calling the bill “a huge step forward.”

SAG-AFTRA is also focusing its attention on a federal law, the No Fakes Act, which would protect anyone, whether performer or regular individual, from digital replicas being created without their consent.

Currently, SAG-AFTRA is on strike against the major video game companies following stalled negotiations over AI provisions.

Daniel Ek and Mark Zuckerberg Call Out Europe’s ‘Fragmented Regulatory Structure’ in Push for Open-Source AI https://www.digitalmusicnews.com/2024/08/23/daniel-ek-mark-zuckerberg-ai-comments-europe/ Fri, 23 Aug 2024 20:08:07 +0000 https://www.digitalmusicnews.com/?p=299235

Photo Credit: Kmeron for LeWeb11 / CC by 2.0

It’s time for European Union regulators to embrace open-source AI – at least according to Meta head Mark Zuckerberg and Spotify CEO Daniel Ek, who’ve laid out their position in a jointly penned article.

That approximately 1,000-word article was posted to the appropriate companies’ websites today after being published in the Economist earlier this week.

Though Ek’s decision to collaborate with Zuckerberg here is perhaps surprising – it was only last year that EU officials ordered Meta to pay a record fine – the Stockholm native couldn’t very well have done so with Spotify rival Apple.

To be sure, Apple is already facing an even larger EU fine specifically for “abusing its dominant position on the market for the distribution of music streaming apps,” besides grappling with different investigations yet.

And Google, despite having a seemingly positive overall relationship with Spotify, reportedly provides its custom microchips to Apple as part of the iPhone developer’s AI training processes.

In any event, it’s Zuckerberg and Ek who are entreating EU regulators and lawmakers to embrace open source, which they believe represents “the best shot at harnessing AI to drive progress and create economic opportunity and security for everyone.”

Meta currently “open-sources many of its AI technologies,” according to the execs, and Europe is said to be “particularly well placed to make the most of” open-source AI. But the continent’s “fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers,” per Ek and Zuckerberg.

“Instead of clear rules that inform and guide how companies do business across the continent,” the two spelled out in a Meta-specific section, “our industry faces overlapping regulations and inconsistent guidance on how to comply with them. Without urgent changes, European businesses, academics and others risk missing out on the next wave of technology investment and economic-growth opportunities.”

From there, the authors pointed to the perceived “uneven application” of the EU’s General Data Protection Regulation measure and made a final push for “thoughtful, clear and consistent” AI guidelines.

While time will tell whether the publicly expressed position elicits changes in the EU – Ek, whose holdings (and AI stakes) now extend well beyond Spotify, certainly possesses considerable sway in this department – the higher-ups also mentioned possible advantages of open-source AI for their businesses.

With regard to Spotify, early AI investments laid the groundwork for personalization success and “made the company what it is today,” Daniel Ek wrote, emphasizing as well the “tremendous potential to use open-source AI to benefit the industry” moving forward.

This tremendous potential refers to helping “more artists get discovered,” per the text. Needless to say, Spotify’s interest in achieving a favorable AI regulatory environment hardly begins and ends with assisting artists.

However, the line does raise questions about AI’s ability to fuel additional discovery in the approaching years. On Spotify itself, offerings like AI DJ have been well-received as a whole, and tech undoubtedly drives a substantial amount of discovery on TikTok, for instance.

Did Kunlun Tech Train Melodio on Copyrighted Tracks? We Tested ‘The World’s First AI-Powered Music Streaming Platform’ to Find Out https://www.digitalmusicnews.com/pro/ai-streaming-platform-test-weekly/ https://www.digitalmusicnews.com/pro/ai-streaming-platform-test-weekly/#respond Wed, 21 Aug 2024 23:47:09 +0000 https://www.digitalmusicnews.com/?post_type=dmn_pro&p=299046 Photo Credit: Kunlun Tech/Melodio

Photo Credit: Kunlun Tech/Melodio

On August 14th, 2024, Kunlun Tech announced Melodio, “the world’s first AI-powered music streaming platform.” Free to use and promising unlimited custom-listening options, the service immediately ignited conversations about the materials on which it was trained — and its potential to disrupt well-entrenched players like Spotify down the line.

Perhaps the most pressing question that remains unanswered: is Melodio being trained on copyrighted music, including works from major labels like WMG, UMG, and Sony Music? In an attempt to answer that question, DMN Pro took a look under the Melodio hood.

Report Table of Contents

I. Introduction: The Questions Raised by Kunlun Tech’s Melodio Announcement

II. Kunlun Tech’s AI Music Products: What We Know About Melodio (And Its Training Data)

Graph: Kunlun Tech’s Organizational Structure and Products At a Glance

III. Has Melodio Been Trained on Copyrighted Music? What the Available Evidence Tells Us

IV. AI Music Streaming in the Long Run: Are We Witnessing the Beginning of a Fundamental Industry Shift?

Please do not redistribute this report without permission. Thank you!


The Start of a (Really) Bad Thing? AI Could Put ‘Up to 23% of Music Creator Revenues’ At Risk Before 2028, Study Warns https://www.digitalmusicnews.com/2024/08/20/music-ai-apra-amcos-study/ Wed, 21 Aug 2024 01:00:54 +0000 https://www.digitalmusicnews.com/?p=298901 apra amcos music ai study

Sydney, Australia, where APRA AMCOS is based. Photo Credit: Cheney Qian

By 2028, an estimated 23% of music creators’ revenue “will be at risk due to” generative AI, according to a new report on the unprecedented technology.

That nearly 150-page report comes from APRA AMCOS, which surveyed 4,274 of its member songwriters, composers, and publishers in connection with the analysis. Factoring for these responses, expert interviews, and existing earnings-distribution data, the Sydney-headquartered entity arrived at the mentioned 23% “potential damage” figure.

Zeroing in on the music markets of Australia and New Zealand, this percentage would mark an approximately $152.97 million (AU$227 million) hit in 2028 alone – with the sum cracking an estimated total of $349.82 million (AU$519 million) between 2024 and 2028, according to the document.

At least as described by the report, though, the far-from-ideal revenue risk isn’t stopping music professionals from tapping into AI. All told, 38% of those surveyed confirmed using “AI in their work with music and creation in general,” and 5% said they’d gotten in the habit of capitalizing on artificial intelligence “consistently or nearly always.”

Another 27% of respondents said they’d steadfastly refused to utilize AI to that point, with 20% preferring to avoid the tools.

And on a brighter note, the 38% figure doesn’t solely represent instances where participants threw in the creative towel; just 14% of respondents (and nearly one in five between the ages of 45 and 54) conceded to using AI “in my creative activity with music.”

However, the incorporation of AI into mixing and mastering, social media, artwork creation, and several additional categories contributed substantially to the 38% usage total as well.

Doubling back to the 14% of survey participants who’d opted to let AI take the creative reins, a concerning 56% admitted to generating lyrics. “Ideation/To explore new musical horizons” placed second with 54%, immediately ahead of vocal synthesis (34%) and sound synthesis (28%), the report shows.

Finally, respondents were more united in their opposition to artificial intelligence: only a cumulative 8% said they had a somewhat positive or very positive view of AI, while 82% acknowledged worries that its growing prevalence “in music could lead to music creators no longer being able to make a living from their work.”

On this front, it can safely be said that the AI music wheels are in motion – with all signs pointing to consistent technical improvements for the foreseeable future. Among many other things, litigation is ongoing against generative models that allegedly trained on protected media without authorization, and different artificial intelligence music products yet are also arriving on the scene.

The Major Labels’ Udio Infringement Suit Isn’t Getting a Trial Anytime Soon — Proposed Schedule Would Take the Dispute Deep Into 2025 https://www.digitalmusicnews.com/2024/08/20/udio-umg-copyright-lawsuit-schedule/ Tue, 20 Aug 2024 20:00:03 +0000 https://www.digitalmusicnews.com/?p=298984 music ai udio suno

Photo Credit: Steve Johnson

Earlier this month, a 2026 trial date was tentatively set in music publishers’ Anthropic infringement suit – raising questions about the legal system’s ability to keep pace with AI. Now, it looks like a different copyright complaint, filed by the majors against Udio, won’t see a trial for some time yet.

The schedule-related update in the major labels’ action against Uncharted Labs/Udio (similar copyright litigation involving Suno was filed separately) emerged in a new case management plan. And predictably, there are key contrasts in the timetables sought by the rightsholder plaintiffs and the generative AI company.

Running with the point, Udio’s proposed an April 10th, 2025, cutoff for the “substantial completion of document production” – far later than the desired November 1st, 2024, deadline of the majors.

The plaintiffs’ deadline would extend solely to Udio’s fair use arguments, which, as we’ve long noted, are at the heart of this legal battle and of different rightsholder suits targeting generative AI. In brief, several developers are adamant that ingesting copyrighted materials en masse without rightsholder permission is transformative in the context of AI training.

“In the interests of judicial economy, limiting expense, and speeding resolution,” the majors spelled out, “Plaintiffs’ proposal sequences discovery to focus first on that central issue [the fair use argument] of liability.”

Expectedly, the gap between the preferred schedules only widens from there. Ideally, Universal Music and its fellow filing parties would like to see expert discovery (once again on the fair use question) conclude on Valentine’s Day this coming February; Udio is pushing for a date more than seven months later, September 26th, 2025.

Subsequently, the major labels are requesting a March 14th, 2025, wrap for summary judgment motions, compared to October 21st of the same year for Udio. And to reiterate the obvious, the slower litigation pace would easily keep the action alive deep into 2025.

“In fact,” the AI defendant and its counsel claimed in support of the more methodical approach, “Udio’s schedule proposes a faster timeline than that of many other AI cases—as well as other schedules this Court has adopted.”

Against the backdrop of a rapidly evolving AI landscape, and in light of the breakneck performance improvements the technology’s achieved across 2024’s first eight months, there’s simply no telling what artificial intelligence will look like during the final quarter of 2025.

Among different things, the situation underscores the idea (expressed in different words by Elon Musk, former Google CEO Eric Schmidt, and others) that it’s exceedingly difficult to get a hold on quick-moving and unprecedented technology in the courtroom.

In any event, the cases are, of course, in motion, with plenty riding on their outcomes not just from a monetary perspective, but especially when it comes to legal precedent. Closer to the present, AI’s reach in the music space is continuing to grow on multiple levels.

What If Radio Could Answer Your Questions? — will.i.am Launches RAiDiO.FYI https://www.digitalmusicnews.com/2024/08/20/what-if-radio-could-answer-your-questions/ Tue, 20 Aug 2024 19:12:09 +0000 https://www.digitalmusicnews.com/?p=298936 RAiDiO.FYI will.i.am

Photo Credit: RAiDiO.FYI

What if radio could answer your questions? In celebration of National Radio Day (August 20), will.i.am is answering that question with the launch of RAiDiO.FYI.

RAiDiO.FYI is a first-of-its-kind, AI-infused, interactive, conversational media platform that transforms how we interact with radio as we know it. It brings a new dimension to the medium, connecting consumers more deeply with the music they’re hearing, the talk radio programs they enjoy, and the cultural content they love.

The FYI.AI app is available for iOS and Android, giving access to the RAiDiO.FYI experience. The debut marks a landmark moment in the 110-year history of radio, converging information and music delivery with powerful AI that can answer questions, deliver facts, and guide the conversation. RAiDiO.FYI is the brainchild of superstar and tech innovator will.i.am, who envisions a future where radio empowers listeners to personalize and immerse themselves in content while having a two-way, real-time conversation on their favorite topics.

Powered by FYI, RAiDiO.FYI is evolving broadcasting into hyper-casting, bringing information to life, inviting people to chat with their radio stations, and asking about the music or stories behind the songs. Listeners can instantly become active participants in the conversation, choosing topics, asking questions, and talking with these AI personas.

Having built his music career with global supergroup Black Eyed Peas, will.i.am holds radio dear; he understands first-hand how integral it is in breaking and making the world’s most beloved music acts. Because audiences now have multiple pathways to get their music, news, and information, whether via radio, streaming services, or podcasts, RAiDiO.FYI arrives at the right time to harness the power of AI and let listeners customize content to their liking.

Digital Music News spoke to will.i.am about the RAiDiO.FYI experience when he launched his partnership with SiriusXM. Now the experience is free to try in the FYI app for anyone who has an iOS or Android device.

Chinese Tech Giant Debuts ‘World’s First AI Music Streaming Platform’ https://www.digitalmusicnews.com/2024/08/15/chinese-tech-giant-ai-streaming-platform/ Fri, 16 Aug 2024 03:15:22 +0000 https://www.digitalmusicnews.com/?p=298601 Chinese AI music streaming paltform

Photo Credit: Kunlun Tech

Chinese tech giant Kunlun Tech launches what it claims is the ‘world’s first AI-powered music streaming platform.’

Kunlun Tech, a former investor in TikTok’s predecessor, Musical.ly, and parent company of web browser Opera, has launched what it calls the “world’s first AI-powered music streaming platform.” The Chinese company says its new Melodio service features “personalized, AI-generated music streams tailored to [users’] moods and scenarios.”

Melodio lets its users input text prompts like “mellow tunes for morning coffee,” or “energetic music for a long drive,” to “instantly craft a customized music stream that fits the occasion.”

“With endless streams of real-time, personalized music, Melodio caters to users’ every mood and scenario, enabling them to modify their prompts on the fly, switch between generated lyrics, and save or share their favorite moments for a truly transformative listening experience,” says Kunlun.

Alongside the AI streaming service, Kunlun has also launched an AI music creation platform called Mureka, which it says “empowers music enthusiasts and professional artists to create and monetize their AI-generated music.” Users can even sell their AI music through the Mureka Store, which Kunlun says will allow artists to “explore new business models for AIGC.”

Mureka’s “Create” page enables users to “input lyrics, reference tracks, and control music styles using the Style function.” The company asserts Mureka’s AI music “boasts unparalleled stability and controllability, allowing users to fine-tune sections like intros, verses, choruses, bridges, and outros with ease.”

Both products, the Melodio streaming service and Mureka music generator, are powered by Kunlun’s AI Music Generation Large Language Model called SkyMusic 2.0. The company claims SkyMusic 2.0 is “the industry’s first AI music model capable of consistently and stably generating [an] endless music feed in specific styles.”

SkyMusic was, according to Kunlun, the first “commercial-grade composing AI model in China,” which the company says is able to process lyrics exceeding 500 words and produce 6-minute, 44,100Hz dual-channel stereo AI songs. It supports 31 languages, including Chinese, English, Japanese, Korean, and French, and handles lyric generation from both melodies and text sources. Notably, Kunlun has not disclosed the data used to train its AI models.

The tech giant, which has a market value of $4.8 billion (34.49 billion Chinese yuan), boasts an average monthly active user base of nearly 400 million across the “AGI, AIGC, content distribution, metaverse, social entertainment, and gaming sectors.” Last summer, the company’s Star Group Interactive division acquired a stake in AI company Singularity AI in a deal worth $160 million. Following that acquisition, Kunlun then pumped $400 million into its Star Group division.

Former Google CEO Eric Schmidt Says AI Companies Steal IP Then ‘Hire A Whole Bunch of Lawyers to Go Clean the Mess Up’ — In an Interview Now Taken Offline https://www.digitalmusicnews.com/2024/08/15/eric-schmidt-ai-companies-commentary/ Thu, 15 Aug 2024 20:15:29 +0000 https://www.digitalmusicnews.com/?p=298532 Eric Schmidt comments on AI companies

Photo Credit: DoD photo by EJ Hersom

Former Google CEO Eric Schmidt spoke to Stanford students recently, making some interesting comments about how AI companies operate on the internet.

The video has since been taken down following Schmidt’s remarks about work from home costing Google the AI race, but his comments about how AI companies operate were far more interesting. In detailing how Silicon Valley approaches AI, he highlights the ‘better to ask forgiveness than permission’ mentality these companies have in hoovering up data.

Speaking about AI cloning a TikTok competitor if it is banned, Schmidt says: “If TikTok is banned, here’s what I propose each and every one of you do—say to your [large-language model] LLM the following: “Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour if it’s not viral—do something different along those same lines.”

Schmidt continues, “In the example that I gave of the TikTok competitor—and by the way, I was not arguing that you should illegally steal everybody’s music—what you would do if you’re a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn’t matter that you stole all the content. And do not quote me.”

Whoops—cameras were running, and Schmidt is, in fact, being quoted here. But it’s a mask-off, candid look at how Silicon Valley has always approached building products. Early Reddit was seeded with fake accounts run by multiple employees, as confirmed by Reddit CEO Steve Huffman himself. In the process of building the “Spit Out Everything Humanity Has Ever Created” machine, Silicon Valley couldn’t be bothered to ask the humans it was copying for permission. Now the lawyers are cleaning up that failure to ask permission—with monetary forgiveness at the helm.

“Silicon Valley will run these tests and clean up the mess,” Schmidt confirms. “And that’s typically how those things are done.” Speaking specifically about how those lawyers will arrange ‘forgiveness,’ Schmidt postulates that an AI agreement for use will be similar to what was established for royalties.

“I used to do a lot of work on the music licensing stuff. What I learned was that in the 60s, there was a series of lawsuits that resulted in an agreement where you get a stipulated royalty whenever your song is played, even if they don’t even know who you are. It’s just paid into a bank. And my guess is it’ll be the same thing,” Schmidt says.

“There’ll be lots of lawsuits and there’ll be some kind of stipulated agreement, which will just say you have to pay X percent of whatever revenue you have in order to use ASCAP BMI. It will seem very old to you, but I think that’s how it will alternate,” Schmidt concludes.

New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/ Thu, 15 Aug 2024 17:50:56 +0000 https://www.digitalmusicnews.com/?p=298520 real-time face-swapping AI

Photo Credit: X / João Fiadeiro

Deepfakes are not a new sensation, but the technology for creating them is moving at light-speed. A new tool has gone viral on social media because it can extract a face from a single photo and apply it to a live webcam video source. Here’s the latest.

Deep-Live-Cam lets anyone mimic a famous face with just a single photograph. While the results aren’t perfect, they are far more convincing than past deepfakes would lead you to expect. Software like Deep-Live-Cam powers tools that have allowed a South Korean woman to be scammed by an Elon Musk deepfake, while another deepfaker in Hong Kong perpetrated a $25.6M scam complete with multi-person video conferences.

The software Deep-Live-Cam has been in development since late last year, but recent videos uploaded to social media showcasing how it works have gone viral. One video example from X/Twitter user João Fiadeiro showcases how a person can transform into J.D. Vance, Mark Zuckerberg, and George Clooney.

What sets Deep-Live-Cam apart is that it can create this convincing face mask from just a single photograph. The face-swapping tool, combined with voice synthesis tech, means digital fakery is more prevalent than ever on the internet, and it has some security experts suggesting a ‘safe word’ as another form of two-factor authentication.

When bad actors can mimic someone’s face and voice, a pre-arranged word or phrase that only the genuine person could know becomes necessary for executives and anyone else in a position of power at a company. So how does the technology work?

Deep-Live-Cam uses several existing software packages combined into a single interface to achieve its goal. First, it detects faces in both the source video and the target image to be mimicked. A pre-trained AI model called ‘inswapper’ performs the actual face swap onto the live video feed, while another AI model, ‘GFPGAN,’ improves the quality of the result by smoothing the image around the person’s head and ears and correcting artifacts from the swap.

The ‘inswapper’ model can infer what the person in a photograph might look like from different angles and with different expressions. That’s because the model was trained on a dataset containing millions of facial images captured in various lighting conditions, at multiple angles, and with a range of expressions, with metadata labeling the images with human emotions. The result is a convincing deepfake that can be managed in real time, posing a threat to Zoom meetings everywhere.

Can the Legal System Keep Up With AI? Music Publishers’ Anthropic Copyright Lawsuit Tentatively Set for 2026 Trial https://www.digitalmusicnews.com/2024/08/09/music-publishers-anthropic-copyright-lawsuit-trial/ Sat, 10 Aug 2024 01:25:28 +0000 https://www.digitalmusicnews.com/?p=298057 music publishers anthropic lawsuit

A high-stakes case between music publishers and Anthropic is seemingly set for a 2026 trial at the earliest. Photo Credit: Steve Johnson

Is the AI space evolving too quickly for the legal system to keep up? It certainly seems so, as music publishers’ high-stakes copyright infringement litigation against Anthropic might not receive a trial until 2026.

The involved parties outlined that proposed schedule in a joint case management statement yesterday, about 10 months after Universal Music Publishing Group, Concord, and other music publishers sued Amazon-backed Anthropic.

As most probably know, the straightforward-but-important suit, one of several ongoing copyright actions against generative AI developers, centers on the training process behind Anthropic’s Claude chatbot.

In short, the publisher plaintiffs say the product infringed on their protected compositions during said training process and in its outputs when responding to certain user prompts. Like other AI players, Anthropic is adamant that its training maneuvers fall under the fair use banner.

It will be a while before we have definitive answers to the significant questions raised by those clashing positions.

Though many moving parts and a far-off timetable mean nothing is set in stone, the publishers themselves want a trial date between mid-March and April 1st of 2026, the aforementioned case management statement shows.

Anthropic, for its part, is calling for a slightly nearer trial that, with a start date between December 2nd of 2025 and January 13th of 2026, still wouldn’t begin for another 16 months.

Particularly in light of AI’s breakneck evolution, there’s no telling what the technology will look like – or be capable of – at that point. Exactly how this affects the case (and the broader battle against AI giants) remains to be seen, but Elon Musk’s comments may be ringing true when it comes to the technology’s advancing too quickly for courtroom confrontations to keep pace.

Of course, as previously little-known companies in a heretofore seldom-discussed sector went ahead and ingested a massive chunk of the world’s protected media into their products without authorization, it’s unclear whether preemptive steps could have produced a different outcome.

In any event, it’s not as if the filing-party publishers (or the plaintiffs in similar suits) are sitting idly by ahead of the sought 2026 trial. Having renewed calls for a preliminary injunction blocking Anthropic from continuing to train on their compositions and incorporating the materials into outputs, the publishers just days ago saw the RIAA submit an amicus brief in support of the push.

However this component of the case plays out, we won’t have answers overnight. Anthropic is set to outline its opposition to the preliminary injunction motion on August 22nd, with the publishers expected to reply on September 12th ahead of an October 10th hearing.

And before that, Anthropic is poised to file a dismissal motion next Thursday, August 15th.

]]>
UMG, WMG, Sony Music Weigh In On Music Publishers’ AI Copyright Battle vs. Anthropic https://www.digitalmusicnews.com/2024/08/08/umg-wmg-sony-comments-anthropic-copyright-legal-battle/ Fri, 09 Aug 2024 03:25:18 +0000 https://www.digitalmusicnews.com/?p=297966 UMG, WMG, Sony weigh in on legal battle Anthropic

Photo Credit: Ashley King

An amicus brief filed in support of a court injunction against AI company Anthropic seeks to have the startup stop using lyrics without permission. Major label trade group RIAA argues that Anthropic’s defense is the same position Napster took in the late 90s.

Universal Music Publishing Group, Concord, and Abkco sued Anthropic in October 2023, alleging copyright infringement of their songs. Anthropic is an AI startup founded in 2021 by four former OpenAI employees and has received an investment from Amazon worth up to $4 billion. The lawsuit alleges that Anthropic’s chatbot Claude has infringed on the publishers’ copyrights by training Claude on their songs and by posting the songs’ lyrics in prompted answers.

Lyrics websites like Genius or LyricFind typically republish lyrics only after signing a licensing agreement with the music publishers. Anthropic, however, allegedly scraped these lyrics websites to train Claude, bypassing the licensing agreements those sites have in place. When prompted, Claude provided lyrics to Katy Perry’s “Roar,” to which Concord owns the rights. Other songs tested include Gloria Gaynor’s “I Will Survive” (UMPG) and The Rolling Stones’ “You Can’t Always Get What You Want” (Abkco).

The plaintiffs allege that lyrics are frequently returned in response to prompts: asking Claude to write a song about the death of Buddy Holly returns most of Don McLean’s “American Pie,” while asking it to write a song about moving to Bel-Air from Philadelphia returns the lyrics of “The Fresh Prince of Bel-Air” theme. The complaint also alleges that, when asked to write a short piece of fiction in the style of Louis Armstrong, the chatbot mostly returned portions of “What a Wonderful World.”

In responding to the lawsuit, Anthropic did not deny that it trained Claude on these lyrics, but argued that this use falls under ‘fair use.’ Anthropic says the chatbot should not reproduce lyrics verbatim, and that if it did, that was a bug rather than an intended feature of the product.

“Anthropic has always had guardrails in place to try to prevent that result. If those measures failed in some instances in the past, that would have been a bug, not a feature of the product,” Anthropic writes in its response. Music publishers ask the court to issue an injunction that requires Anthropic to “maintain guardrails to prevent its AI models from generating output that contains publishers’ lyrics” and to “refrain from using unauthorized copies of such lyrics to train future AI models.”

Now a bevy of music industry organizations have weighed in on the proceedings in an amicus curiae brief in support of these publishers. The RIAA, Artist Rights Alliance, and the Music Artists Coalition argue that while other AI companies agreed to licensing deals, Anthropic has so far refused.

“[M]any companies in the AI field have obtained licenses to use copyrighted content for AI model training and other purposes,” the brief states. “These companies are willing and able to comply with the law as they develop generative AI software—but not Anthropic. In order to obtain an advantage over its competitors, Anthropic has refused to license or compensate the authors and owners of the highly creative, copyrighted works that it copies and uses to generate competing works. Anthropic has argued ‘fair use.’ It is not.”

“The false choice that Anthropic have presented between compliance with copyright law and technological progress is a well-worn, losing policy argument previously made by other mass infringers such as Napster and Grokster in their heyday. Anthropic and COP even employ the same rhetoric as those pirate sites,” the amicus brief states.

The brief states that shutting down Napster and Grokster did not hurt technological progress. Instead, it paved the way for legal music streaming services that properly compensate rights holders whenever their music is streamed from a licensed service.

]]>
Could the Major Labels Lose Against Suno and Udio? A Pressing Look At Where These Critical Cases Stand https://www.digitalmusicnews.com/pro/udio-suno-legal-update-weekly/ Thu, 08 Aug 2024 02:00:11 +0000 https://www.digitalmusicnews.com/?post_type=dmn_pro&p=297801

Photo: Pavel Danilyuk

Two critical court cases carry the potential to change the music industry for decades to come — and this is anything but a slam dunk for the major labels. So where do things stand in these all-important battles?

Millions of dollars and potentially far-reaching precedents are on the line in the major labels’ intensifying copyright infringement lawsuits against Suno and Udio. Here’s an in-depth look at where the cases stand and how they could play out.

Table of Contents

I. The Music Industry’s Most Critical AI Legal Battles Are Just Getting Started

II. The Major Labels v. Suno and Udio — A Quick Recap of the Industry’s Most Pressing AI Infringement Actions

III. Fair Use AI Training and the Commercial Implications Thereof: The Surprisingly Strong Arguments of Suno and Udio

IV. The RIAA v. Suno and Udio Moving Forward — A Closing Look At the Cases’ Significance In and Beyond the Industry

V. Appendix: A Refresher on Other Ongoing AI Infringement Litigation

 

Please do not redistribute this report without permission — thank you!


]]>
AI-Focused Content-Attribution Startup ProRata.ai Scores Reported $25 Million Series A, Inks Universal Music Partnership Deal https://www.digitalmusicnews.com/2024/08/06/prorata-universal-music-deal/ Tue, 06 Aug 2024 22:08:22 +0000 https://www.digitalmusicnews.com/?p=297743 UMG shareholders approve lucian grainge's $150 million compensation package

Photo Credit: Luke Harold

Amid intensifying industry legal battles with Suno, Udio, Anthropic, and other generative AI players, Universal Music Group has inked an agreement with content-attribution startup ProRata.ai.

Founded by Idealab Studio’s Bill Gross, who’s set to serve as CEO, ProRata.ai formally launched today. According to Axios, the software developer, which says it enables AI platforms to “fractionally attribute and compensate content owners,” has scored a $25 million Series A.

Behind that sizable backing, Pasadena-based ProRata itself confirmed support from Mayfield Fund, Revolution Ventures (a Sound Credit investor), Prime Movers Lab, and the mentioned Idealab Studio.

Also in place for ProRata are pacts not only with Universal Music, but the Financial Times, the Atlantic, Fortune, and more, according to the release that was forwarded to DMN.

As summarized by the debuting business, which says it has multiple patents pending, the technology at hand “analyzes AI output, measures the value of contributing content and calculates proportional compensation” for rightsholders.

Furthermore, ProRata by its own description “uses a proprietary algorithmic approach to score and determine attribution,” with compensation then doled out to rightsholders on a per-use basis.

Bearing in mind the aforesaid tie-ups with media outlets, the focus appears to be on text-based chatbot outputs for the time being; ProRata has teed up “a consumer AI answer engine,” designed to showcase its attribution capabilities, for release this fall.

Universal Music’s own focus is, of course, on the music side. And in a statement, CEO Lucian Grainge indicated that his company will “help shape” ProRata’s efforts in the industry.

“We are encouraged to see new entrepreneurial innovation set into motion in the Generative AI space guided by objectives that align with our own vision of how this revolutionary technology can be used ethically and positively while rewarding human creativity,” communicated Grainge.

“Having reached a strategic agreement to help shape their efforts in the music category, we look forward to exploring all the potential ways UMG can work with ProRata to further advance our common goals and values,” concluded the 64-year-old.

Time will reveal exactly which collaborations the partnership drives – an important point in light of the initially highlighted infringement battles with AI platforms and the similar disputes that are undoubtedly forthcoming.

Closer to the present, however, ProRata says it’s “in advanced discussions with global news publishers, media and entertainment companies, and more than 100 noted authors.” In light of the many authors who are spearheading separate litigation against AI platforms over alleged copyright infringement, those talks could prove significant.

]]>
Meta Offering Millions for Hollywood Voices for AI Projects https://www.digitalmusicnews.com/2024/08/04/meta-ai-hollywood-voices-offered-millions-report/ Mon, 05 Aug 2024 03:28:37 +0000 https://www.digitalmusicnews.com/?p=297524 Meta reportedly paying millions for voices from hollywood

Photo Credit: Meta

New reporting suggests Meta is looking to pay Hollywood voices for their talent to voice AI projects—with Awkwafina, Judi Dench, and Keegan-Michael Key among the names listed.

Both Bloomberg and The New York Times are reporting that Meta is in talks with Hollywood celebs to lend their voice to MetaAI, a digital assistant similar to Siri and Google Assistant. Meta hopes to record their voices and secure the right to use them for as many situations as possible across its suite of projects including Instagram, Messenger, WhatsApp, Facebook, and Ray-Ban Meta glasses.

Bloomberg’s reporting notes that negotiations among the parties have stopped and started numerous times. The sides have struggled to agree on terms for use of the voices, though the project appears to be moving forward with some kind of time limit in place. That means any voice recordings Meta makes for use with MetaAI could only be used for a specified period, rather than indefinitely.

SAG-AFTRA has reportedly reached an agreement with Meta on the terms of the deal, though the actors’ representatives are looking to negotiate stricter limits. Meta hopes to finalize these deals with the specified actors ahead of its Meta Connect conference in September 2024. Meta is expected to announce several new AI products at the conference, especially after teasing its chatbot platform.

During that announcement in September 2023, Meta said it would be rolling out digital chatbots with 28 characters voiced by various celebrities, including Charli D’Amelio, Paris Hilton, Snoop Dogg, Tom Brady, and more. But the chatbots failed to generate interest among users, with many taking to social media to call the experiment weird and creepy. That’s because the celebs weren’t playing themselves, but voicing characters—a detective partner for Paris Hilton, for instance, or a dungeon master for Snoop Dogg.

Meta reportedly paid up to $5 million for the two-year pilot program for these failed chatbots. Snoop Dogg’s dungeon master chatbot only garnered 15,000 followers the year the chatbot was live, while the rapper’s Instagram account has more than 87.5 million followers. Kendall Jenner’s ‘Billie’ chatbot achieved a following of 179,000 as the most successful chatbot in the experiment—but far below her massive 194 million followers on Instagram.

]]>
Suno, Udio Fire Back Against RIAA Copyright Suits, Doubling Down on Fair Use and Soundalike Output Arguments https://www.digitalmusicnews.com/2024/08/02/udio-suno-copyright-lawsuit/ Sat, 03 Aug 2024 06:00:16 +0000 https://www.digitalmusicnews.com/?p=297243 udio suno copyright lawsuit

Generative AI platforms Udio and Suno have taken aim at the arguments introduced by the majors in a pair of infringement suits. Photo Credit: BoliviaInteligente

AI-powered music-creation platforms Suno and Udio have officially fired back against the high-stakes copyright infringement lawsuits they’re facing from the major labels.

Both defendants just recently took aim at the suits; Udio is being sued in a New York federal court, while Suno is facing a separate-but-similar action in Massachusetts. We’ve covered the cases – and a public war of words between the defendants and the RIAA – in detail.

At the top level, though, they revolve around the all-important question of whether companies have the legal authority to copy and train generative AIs on protected materials (which are then incorporated into outputs) without the authorization of rightsholders.

Predictably, Suno and Udio are adamant that they do possess the authority, with the training process at hand purportedly drawing from the basic “building blocks of music” and constituting fair use, per their filings.

Putting everything out in the open, the AI companies in their answers directly acknowledged that both models utilized the majors’ recordings to train.

“The many recordings that Udio’s model was trained on presumably included recordings whose rights are owned by the Plaintiffs in this case,” reads one relevant line.

But doing so is lawful under copyright law, the responses claim in more words, including not only when recordings are copied behind the scenes but, more than that, when the AI outputs share characteristics with protected works.

“Under longstanding precedent,” Udio and its counsel wrote on the former front, “it is fair use to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product.”

And on the equally important output side, the defendants underscored the belief that copyright law, owing to a described carve-out of sorts for recordings resembling existing works but not copying directly, permits their soundalike outputs.

“Even to the extent that Udio’s outputs ‘imitate or simulate’ sounds in the Plaintiffs’ recordings,” Udio penned, “Congress made the public policy choice to immunize such new creations from copyright infringement liability, so long as they do not themselves contain actual snippets of pre-existing recordings. Which they do not.”

Taking that important argument a step further – and using as an example Frank Sinatra’s “My Way,” which the majors say has been infringed upon because an output allegedly shared melodic characteristics with the famed recording – Udio emphasized the “literally hundreds of different recordings” of the work available on streaming platforms.

“Consequently,” Udio relayed, “Plaintiffs’ argument betrays a profound misunderstanding of the technology at issue by suggesting that UMG’s particular version must have been in the training set because Udio allegedly generated an output that contains ‘melodic similarities to the Sinatra original throughout.’ So too do countless other recordings of the song.”

For good measure, Udio added that attempting to replicate the recording with a prompt containing the “My Way” lyrics allegedly violated its terms of use.

“No one owns musical styles,” the platform concluded. “Developing a tool to empower many more people to create music, by analyzing on a massive scale the relationships among notes and rhythms and tones to ascertain the building blocks of different musical styles, is a quintessential fair use under longstanding and unbroken copyright doctrine.”

Suno responded to the RIAA-spearheaded suit with similar arguments, and both also took the opportunity to point out the majors’ perceived “aversion to competition,” depicting generative AI as the latest in a line of resisted but ultimately accepted innovations.

Furthermore, “the major record labels wield massive market power” and haven’t “hesitated to exploit it in fundamentally anticompetitive ways,” according to both answers, which maintain that the alleged anticompetitive behavior is carrying over to dealings within the AI sector.

The RIAA reached out with a roughly 250-word statement about the responses, seizing on the above-described training admissions and doubling down on its position.

“After months of evading and misleading,” an RIAA spokesperson communicated in part, “defendants have finally admitted their massive unlicensed copying of artists’ recordings. It’s a major concession of facts they spent months trying to hide and acknowledged only when forced by a lawsuit.

“Their industrial scale infringement does not qualify as ‘fair use’. There’s nothing fair about stealing an artist’s life’s work, extracting its core value, and repackaging it to compete directly with the originals, as the Supreme Court just held in its landmark Warhol Foundation case.

“Defendants had a ready lawful path to bring their products and tools to the market – obtain consent before using their work, as many of their competitors already have. That unfair competition is directly at issue in these cases.”

]]>
DMN Pro Weekly Report: 55% of Active Music Industry Lawsuits Involve Copyright Disputes. What’s the Remaining 45% About? https://www.digitalmusicnews.com/pro/legal-music-categories-aug-2024-weekly/ Wed, 31 Jul 2024 23:26:00 +0000 https://www.digitalmusicnews.com/?post_type=dmn_pro&p=297077 A breakdown of active music industry lawsuits by category, August 2024 (Source: DMN Pro)

A breakdown of active music industry lawsuits by category, August 2024 (Source: DMN Pro)

One thing the music industry doesn’t lack is litigation – or threats of litigation. But what kinds of disputes are behind the steady stream of lawsuits?

DMN Pro zeroed in on that question by analyzing data from its soon-to-be-released Music Industry Litigation Tracker. Comprehensive and filterable, the one-stop database of industry and industry-adjacent suits will feature a variety of case-specific details – from the involved parties and their counsel to assigned judges and court venues.

Table of Contents

I. Introduction – And a Teaser for DMN Pro’s Upcoming Music Industry Litigation Tracker

II. Industry Litigation At a Glance: A Percentage Breakdown of Case Types, August 2024

III. Copyright Infringement Actions Take Center Stage – With Surprisingly Little Precedent to Show for It

IV. Trouble Beneath the Surface: The Influx of Royalty-Collection Lawsuits Against Music Services

V. Music-Space Lawsuits Moving Forward: What to Expect in 2024 and Beyond

VI. Appendix: A Grab Bag of Copyright, Trademark, and Patent Lawsuits Roiling the Music Industry

Please do not redistribute this report without permission. Thank you!


]]>
Meta Launches AI Studio for Creating Custom Chatbots https://www.digitalmusicnews.com/2024/07/30/meta-ai-studio-create-custom-chatbots/ Tue, 30 Jul 2024 19:14:50 +0000 https://www.digitalmusicnews.com/?p=296962 Meta launches AI Studio

Photo Credit: Meta

Meta is bringing the ability to create custom chatbots to audiences with its new AI Studio. The tool lets anyone create and discover AI characters, and lets creators build an AI as an extension of themselves to reach more fans.

AI Studio is built with Meta’s Llama 3.1 model and offers a wide variety of prompt templates, plus a start-from-scratch option, for creating an AI. These glorified chatbots can teach users to cook, help with Instagram captions, or generate memes to make friends laugh. The AI can be created just for you or shared with followers and friends.

The newly created chatbot can also be posted for anyone to discover and chat with on Instagram, Messenger, WhatsApp, and the web. Character.AI offers a similar function—allowing anyone to create and train a chatbot based on any character from movies, TV shows, books, and anime. The question that remains is whether it’s legal for fans to create chatbots of characters from intellectual properties they don’t own; Character.AI’s homepage is littered with chatbots for popular characters likely created without the owners’ permission.

Users can create a new AI character by starting a new message on Instagram and tapping “Create an AI chat.” From there, the AI character’s name, personality, tone, avatar, and tagline can be customized.

Some of the created chatbots using AI Studio include ‘Eat Like You Live There!’ a chatbot created by chef Marc Murphy to offer personalized tips for embracing local dining customs while traveling. Another is ‘What Lens Bro,’ which offers tips on finding the perfect lens for a shot, created by photographer and videographer Angel Barclay.

Meta is advertising AI Studio as a way to create a chatbot extension of yourself, to answer common DM questions and story replies. “Whether it’s sharing facts about themselves or linking to their favorite brands and past videos, creator AIs can help creators reach more people and fans get responses faster,” the announcement post reads. Responses from creator AIs are fully labeled as such, so there’s full transparency for fans who interact with the feature.

]]>
South Africa Stands Its Ground On Fair Use Expansions As the AI Copyright Battleground Goes Global https://www.digitalmusicnews.com/2024/07/29/south-africa-fair-use-expansions-ai/ Tue, 30 Jul 2024 03:52:33 +0000 https://www.digitalmusicnews.com/?p=296888 South Africa AI fair use expansions

Photo Credit: Kathrine Heigan

South Africa hits back against the IIPA’s assertion that the country isn’t doing enough to combat copyright infringement, defending its broad fair use exceptions.

The International Intellectual Property Alliance (IIPA), which represents the ESA, MPA, and RIAA, among others, recently published its findings on the latest eligibility review of the African Growth and Opportunity Act (AGOA). Led by the US Trade Representative (USTR), the process determines which sub-Saharan African countries are eligible for certain trade benefits, and which, at the opposite end of the spectrum, should be sanctioned.

In particular, the IIPA is worried that South Africa isn’t doing enough to deter copyright infringement, alongside concerns that proposed “fair use exceptions,” modeled in part after US laws, could “lead to problems” for South Africa. But that critique hasn’t gone unnoticed by the African country.

Now, the South African government has sent a response to the USTR addressing IIPA’s concerns, pointing out that the copyright law hasn’t even been implemented yet, so it would be premature for the US to use it as a basis for sanctions. But that detail aside, South Africa is openly rejecting IIPA’s critique.

Principally, the South African government points out that the copyright group’s arguments are not new, as they had been discussed during open review processes and considered by parliament, “which simply disagrees with the notion” that broad fair use exceptions will create an issue.

“In general, the position in the CAB [Copyright Amendment Bill], on fair use, recognizes that copyright regimes across the world are slowly moving away from the closed-list system to an open system, which will keep up with innovation, and changing digital environment,” the statement from the South African government reads. “Fair dealing in our current Copyright Act is outdated, limited, and static, and does not address the digital world. Fair use, on the other hand, is progressive, dynamic and future-proof, and ‘digital-friendly’.”

“Globally, research has found that fair use has not impacted negatively on the economy. On the contrary, there is evidence that shows that countries with open exceptions and fair use have high levels of innovation, economic growth, and development. It is a fact that fair use was coded in the US Copyright Act of 1976 and has not had to be amended, as it applies to new technologies as they arise. Other countries have also adopted fair use in their copyright laws and more countries are considering it, because it is ‘future-proof’ and benefits users and producers of information and knowledge, [and] give clarity to what can be used and reused,” South Africa continues.

South Africa is clearly ready to move forward; the recent letter and its direct pushback against the copyright lobby make apparent the country’s disinterest in continued negotiations. The letter makes clear that rights holders’ concerns have been noted, and it even cites the United States as having set the example with a fair use law that remains mostly unchanged nearly 50 years later.

]]>
Apple Music is Toying with AI-Generated Artwork, Code Snippets Suggest https://www.digitalmusicnews.com/2024/07/25/apple-music-is-toying-with-ai-generated-artwork-code/ Thu, 25 Jul 2024 21:35:55 +0000 https://www.digitalmusicnews.com/?p=296657 Apple Music AI generated artwork

Photo Credit: Doz Gabrial

Apple Music introduced a library of artwork for users to choose from to customize a playlist, and now AI-generated images appear to be on the horizon.

Apple’s iOS 17.1 update saw the introduction of a library of artwork from which users can select an image to customize a playlist in Apple Music. Now, 9to5Mac has discovered code snippets indicating the company will integrate this feature into Apple Intelligence with iOS 18.

According to 9to5Mac, code in the iOS 18 beta 4 reveals that Apple is working on a new feature to let users create playlist artwork using AI for their Apple Music playlists. The code suggests there will be a “Create Image” button when editing a playlist, which will invoke Image Playground, part of Apple Intelligence’s toolset.

Apple first teased Image Playground at WWDC 2024, showing users able to write commands to generate new images with AI assistance. The AI-generated images can have various art styles, but there is notably no option for generating a photorealistic image. It seems that Apple Music will likely ask the user what they want the playlist artwork to look like and then generate a few options from which they can choose.

Currently, the feature is still under development. Like other Apple Intelligence features, it’s not yet available to beta users. The company said some Apple Intelligence features will be available in beta sometime this summer, but now those mentions have been removed from Apple’s website.

iOS 18 releases this fall, with a beta preview available for developers and public beta users. But Apple Intelligence will likely not arrive for users until iOS 18.1 or later, as a recent Bloomberg report suggests that some of its AI features will remain in testing until 2025.

Bloomberg’s Mark Gurman, after speaking with sources involved in Apple Intelligence development and based on remarks made by the company during WWDC, has provided a rundown of what he expects to see on the iOS 18 timeline. A big subject on that list is upcoming improvements to Siri, which are not expected to drop until 2025. Some updates will, of course, be delivered throughout the rest of 2024, but Siri will not become “a truly intelligent assistant” until Apple Intelligence fully launches next year.

Gurman also points to comments made by Apple that suggest ChatGPT integration will arrive sometime this year. This seems to be a sort of backup option to provide users with a semi-intelligent assistant to provide general knowledge ahead of its Apple Intelligence release, which will focus predominantly on a user’s personal information for a more streamlined experience.

]]>
What Major Label Litigation? AI Music Upstart Udio Launches ‘Audio-to-Audio’ Remixing https://www.digitalmusicnews.com/2024/07/25/udio-launches-audio-to-audio-remixing/ Thu, 25 Jul 2024 19:12:06 +0000 https://www.digitalmusicnews.com/?p=296610 Udio audio to audio remixing

Photo Credit: Udio

As the major labels’ battle with generative AI music startups Suno and Udio heats up, the latter has released a new model with audio-to-audio remixing. Here’s the latest.

Lawsuits filed by Sony Music, Warner Music, and Universal Music claim that Udio and Suno have unlawfully copied the labels’ recordings to train their music-generation models. The suits allege that these services could be used to “saturate the market with machine-generated content that will directly compete with, cheapen, and ultimately drown out the genuine sound recordings on which [major labels] are built.”

Following that legal action, both Udio and Suno hired Latham & Watkins to represent them in the matter. Latham & Watkins have been key players in defending companies using artificial intelligence, including their work in defending Anthropic against infringement allegations filed by UMG and Concord Music. Latham & Watkins also represents OpenAI in several of its lawsuits—including one filed by comedian Sarah Silverman.

Udio’s latest model, v1.5, contains a host of improvements over its predecessor, including improved audio quality, key control, and improved results across global languages. New features include a dedicated creation page, stem downloads, audio-to-audio remixing, and shareable lyric videos.

Stem downloads allow users to split fully-mixed Udio tracks into four separate stems—vocals, bass, drums, and everything else. Advanced users can take those stems and remix them in external tools, or only use a single element of an Udio song gen in their music.
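For readers wondering what a four-way split like that looks like in practice, here is a rough, hedged illustration using the open-source Demucs separator, which outputs the same vocals/bass/drums/other stems; this is not Udio’s implementation, and the file name is a placeholder.

```python
# Rough illustration of a vocals/bass/drums/other split using the open-source
# Demucs separator -- not Udio's implementation; "my_track.mp3" is a placeholder.
# By default, stems are written to ./separated/<model_name>/<track_name>/.
import subprocess

subprocess.run(["demucs", "my_track.mp3"], check=True)
```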

The audio-to-audio remix feature allows users to upload their own tracks and remix them. Music can be re-imagined in a number of new styles, flowing freely from one to the next. The uploads feature may attract major label attention again—as the feature could be used to create remixes of copyrighted works without permission.

Finally, key control allows users to guide the creation of music by suggesting keys like C minor or Ab major. Udio says the feature may not be perfect, but will lend creators more harmonic control over their creations.

]]>
Music Workspace Startup Bridge.audio Scores $3.3 Million Raise, Plots Continued Sync Buildout https://www.digitalmusicnews.com/2024/07/18/bridge-audio-funding-round/ Thu, 18 Jul 2024 23:42:41 +0000 https://www.digitalmusicnews.com/?p=296147 bridge.audio

An aerial shot of Paris, France, where Bridge.audio is headquartered. Photo Credit: Chris Hardy

Cloud-based music workspace Bridge.audio has announced a nearly $3.3 million raise and set its sights on optimizing sync discoverability with AI.

The Paris-headquartered startup’s founder and CEO, Clément Souchier, only recently disclosed the €3 million (currently $3.27 million) funding round via a LinkedIn post. As described by this message, the two-year-old business pulled down the capital from backers including Bpifrance, Liège-based investment fund LeanSquare, and “business angels.”

Now boasting more than 30,000 users, according to Souchier, Bridge.audio has developed a main offering encompassing music-centered “private workspaces to manage, share, and connect.” Beyond these “smart workspaces” as well as their activity-tracking and metadata-management features, the collaboration-focused service, operating in both English and French, has also created an AI-powered tagging and description technology, its website shows.

Running with the idea, last January saw Bridge.audio roll out an aptly named “Sync Hub” marketplace designed to connect audiovisual professionals with licensable music for their projects. Bearing in mind the AI-tagging emphasis, said marketplace supports “natural language searches,” per the longtime entrepreneur Souchier.

On the pricing front, Bridge.audio’s core collaboration service has a free tier and a paid option; the latter’s cost begins at $5 per month when billed annually and scales upward depending on the precise amount of sought storage.

Shifting to the bigger funding picture, industry and industry-adjacent raises, as compiled by DMN Pro’s comprehensive Music Industry Funding Tracker, have been comparatively few and far between in 2024.

That refers specifically to three industry raises for June of 2024 – down from 10 during the same month in 2023. May of 2024, for its part, likewise registered three publicly confirmed funding rounds, a material decrease from May of 2023’s 13 rounds.

Notwithstanding the apparent funding-quantity falloff, however, 2024 has delivered several particularly high-value raises, including but not limited to a $165 million strategic round for Create Music Group, a cool $1 billion for Irving Azoff’s Iconic Artists Group, and $100 million for Gamma.

Now, on the momentum of two relatively modest raises (Bridge.audio’s $3.3 million and the $5 million that Created by Humans secured in late June), time will tell whether the final five or so months of 2024 usher in other smaller-scale showings of support.

As things stand, July of 2024 appears exceedingly unlikely to surpass or even approach the funding volume of the same month in 2023, which had already recorded 11 industry and sub-sector raises at this point. By comparison, Bridge.audio seems to be the first industry company to unveil fresh funding in the current month.

]]>
Spotify’s AI DJ Now Speaks Spanish—Second Language Beta Now Live https://www.digitalmusicnews.com/2024/07/18/spotify-ai-dj-speaks-spanish-second-language-beta/ Thu, 18 Jul 2024 18:06:40 +0000 https://www.digitalmusicnews.com/?p=296106 Spotify AI DJ now speaks spanish

Photo Credit: Spotify

Spotify’s AI DJ now speaks Spanish. It can curate a line-up of music while providing commentary about said music for Spanish-speaking audiences. The feature is in beta.

Spotify says it has learned several vital things about how English-speaking listeners have interacted with its AI DJ since it launched earlier this year. Commentary delivered alongside personalized music recommendations makes listeners more likely to try a song they might otherwise have skipped. The feature has also increased listening time; Spotify says median DJ listeners spend more time listening than listeners of other recommended music do.

Spotify’s English AI DJ voice was provided by its Head of Cultural Partnerships, Xavier Jernigan. To create the voice model for the DJ in Spanish, the company tapped Senior Music Editor Olivia ‘Livi’ Quiroz Roa, who is based in Mexico City.

Livi is a playlist curator for Spotify and has played an instrumental part in the launch of its EQUAL program in Mexico. Her voice resonated the most with users, with testers feeling like they were receiving music recommendations from a friend.

The Spanish-speaking AI DJ will be available for Spotify Premium listeners in markets where the DJ is currently available. It is also expanding to Spotify Premium users in Spain and across select markets in Latin America—Argentina, Bolivia, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, and Venezuela.

How to Access Spotify AI DJ in English or Spanish

  1. Open the Spotify app.
  2. Tap the search tab.
  3. Search ‘DJ.’
  4. Press play and the personalized DJ will begin choosing music and adding commentary. Tap the ‘three-dot menu’ while in the DJ card to choose between English or Spanish language commentary.

Spotify’s note about DJ listeners not skipping music is an important one—music recommendations always need a human element. Scrolling through an endlessly generated playlist is nowhere near as satisfying as having someone say, ‘hey, I think this song is really great.’ Even if that someone is AI-generated.

]]>
The Music Industry Is Bursting With Litigation — Here Are 10 Particularly Game-Changing Lawsuits to Watch https://www.digitalmusicnews.com/pro/litigation-top-10-weekly/ Thu, 18 Jul 2024 04:00:45 +0000 https://www.digitalmusicnews.com/?post_type=dmn_pro&p=296031 10 Particularly Important Music Industry Lawsuits to Watch

Photo Credit: Mohamed Hassan

From infringement complaints against generative AI developers to unpaid royalty actions targeting streaming platforms, the music industry certainly isn’t lacking high-stakes litigation. Here are ten particularly important lawsuits with major implications to watch moving forward.

How many active lawsuits, conflicts, settlements, negotiations, and legal stare-downs are happening in the music industry — right now? At last count, Digital Music News is tracking more than 140 different filed lawsuits in the United States alone, all in various stages of litigation. And that doesn’t include the drumbeat of cease-and-desists, government proceedings, and private discussions and upcoming suits.

(Stay tuned for our complete litigation tracker from DMN Pro.)

As any attorney can attest, most of those suits aren’t groundbreaking or precedent-setting. Here’s a familiar litigatory tune: Artist A uses a sample from Artist B without permission, demands go nowhere, and litigation ensues. But some of the cases roiling the industry will have serious implications and impacts for years and decades to come. That includes battles in arenas like AI, statutory royalties, government regulation, and even national security.

Plucking from the latter, here are ten lawsuits with the potential to reshape the music industry ahead — for better or for worse, depending on where you’re seated.

Report Table of Contents

Introduction: An Overview of the Music Industry’s Litigation Landscape.

I. The Recording Industry Association of America (RIAA) v. Suno and Udio

II. The National Music Publishers’ Association (NMPA) v. X (Formerly Twitter)

III. Epidemic Sound v. Meta

IV. TikTok and ByteDance v. Department of Justice

V. Department of Justice v. Live Nation

VI. Mechanical Licensing Collective (MLC) v. Spotify

VII. MLC v. Pandora

VIII. SoundExchange v. SiriusXM

IX. UMG Recordings et al. v. Internet Archive et al.

X. RIAA v. Verizon

XI. Bonus: Cleveland Constantine Browne et al. v. Rodney Sebastian Clark Donalds et al.

Please do not redistribute this report without permission. Thank you!


]]>
Music AI Upstarts Udio, Suno, Lawyer Up with Latham & Watkins as Major Label Legal Battles Gear Up https://www.digitalmusicnews.com/2024/07/12/music-ai-upstarts-udio-suno-lawyer-up/ Fri, 12 Jul 2024 21:26:21 +0000 https://www.digitalmusicnews.com/?p=295782 music ai udio suno

Photo Credit: Steve Johnson

Two music AI startups hire Latham & Watkins, the firm representing OpenAI and Anthropic, as legal battles with the major labels start heating up.

GenAI music startups Suno and Udio have hired elite law firm Latham & Watkins, the firm representing OpenAI and Anthropic, to defend them against the lawsuits filed by the Big Three (Sony Music Entertainment, Warner Music Group, and Universal Music Group) in late June.

The lawsuits, filed by Sony, WMG, and UMG, claim that Udio and Suno have unlawfully copied the labels’ recordings to train their AI models to generate music, which could “saturate the market with machine-generated content that will directly compete with, cheapen, and ultimately drown out the genuine sound recordings on which [the labels were] built.”

The team at Latham representing the AI companies is led by Andrew Gass, Steve Feldman, Sy Damle, Britt Lovejoy, and Nate Taylor. The plaintiff labels are represented by Moez Kaba, Mariah Rivera, Alexander Perry, and Robert Klieger of Hueston Hennigan; and Daniel Cloherty of Cloherty & Steinberg.

Latham & Watkins have been key players in defending companies in the realm of artificial intelligence, including their work to defend Anthropic against infringement allegations filed by UMG and Concord Music last year. The firm also represents OpenAI in the multitude of lawsuits filed against it, including the case filed by comedian Sarah Silverman alongside other writers, and the case levied against it by the New York Times.

The fair use defense is a common one for AI companies in copyright cases, as it allows limited use of copyrighted material without permission under certain parameters. This will undoubtedly be a key component of Latham & Watkins’ defense of Suno and Udio’s activities. Fair use historically applies to things like parody and news reporting, but AI firms argue it applies equally to their “intermediate” use of millions of others’ works to create a platform that generates new creations.

It remains to be seen how well those arguments will fly once the legal proceedings begin. The music industry has long been critical of the datasets used by these and other AI companies and of whether they contain unlicensed copyrighted works. To that point, a series of articles written by Ed Newton-Rex of AI music safety nonprofit Fairly Trained details how he was able to generate music from both Udio and Suno bearing “a striking resemblance” to copyrighted songs by artists such as Jason Derulo, the Jackson 5, Mariah Carey, Jerry Lee Lewis, The Temptations, and Lin-Manuel Miranda.

]]>
Federal Lawmakers Propose New AI Regulations — Extending to Generative Models’ Training and More — in RIAA-Backed ‘Copied Act’ https://www.digitalmusicnews.com/2024/07/12/copied-act-introduction/ Fri, 12 Jul 2024 17:33:34 +0000 https://www.digitalmusicnews.com/?p=295726 copied act

Federal lawmakers are targeting AI deepfakes and unauthorized training with the newly introduced Copied Act.

Federal lawmakers are officially taking another stab at regulating unauthorized AI deepfakes and training, this time with the bipartisan “Copied Act.”

Senators from both sides of the aisle just recently introduced that bill, in full the “Content Origin Protection and Integrity from Edited and Deepfaked Media Act.” Endorsed by the RIAA, the Artist Rights Alliance, and others, the measure has arrived about 10 weeks after Warner Music CEO Robert Kyncl testified before Congress in support of federal AI regulations.

To date, said regulations have been proposed via the No Fakes Act and the more comprehensive No AI Fraud Act. Of course, despite the expectedly slow legislative progress on a key matter, unauthorized soundalike works are continuing to roll out en masse.

Meanwhile, that several leading generative AI systems are still claiming the right to train on protected media without authorization also remains a major problem. Enter the Copied Act, introduced specifically by Senators Marsha Blackburn, Martin Heinrich, and Maria Cantwell.

Extending to the deepfake and training issues alike, the 18-page bill calls for establishing “a public-private partnership to facilitate the development of standards” for determining content’s origin and whether it’s “synthetic” or “synthetically modified” with AI. Here, “content” refers not only to music, but to “images, audio, video, text, and multimodal content,” hence the measure’s support from outside the industry.

In brief, the National Institute of Standards and Technology’s under secretary of commerce for standards and technology would spearhead those efforts with input from the register of copyrights as well as the director of the USPTO.

Not short on detail, the legislation spells out that the “voluntary, consensus-based standards and best practices for watermarking” and automatic detection will particularly involve synthetic content and “the use of data to train artificial intelligence systems.”

Running with that important point – identifying at scale exactly which media has been generated by AI and who owns what – the measure would compel “any person” behind generative AI systems to enable users to label media outputs as synthetic.

Additionally, these users must have the choice of attaching to outputs “content provenance information,” or “state-of-the-art, machine-readable information documenting the origin and history of a piece of digital content.”

Major search engines, social media players, and video-sharing platforms – those generating at least $50 million annually or with north of 25 million monthly users for three or more of the past 12 months – would be expressly barred from tampering with said content provenance information.

Most significantly, the Copied Act would bar generative AIs from knowingly training without permission on any media that has or should have provenance details attached to it.

The only exception, the text indicates, would be if a platform obtained “the express, informed consent of the person who owns the covered content, and complies with any terms of use.”

The FTC, state attorneys general, and rightsholders themselves would be able to sue for alleged violations under the act, but the content provenance requirements wouldn’t go into effect until two years after the law’s enactment. And litigation would have to commence within four years from when one discovered or should have discovered the alleged violation(s) at hand.

]]>
Lucian Grainge Isn’t Mincing Words on AI Music — Reaffirms ‘One Thousand Percent’ Commitment to Defending IP, Name, Likeness, and Use of Voice https://www.digitalmusicnews.com/2024/07/09/lucian-grainge-on-ai-music-ip-theft/ Tue, 09 Jul 2024 19:24:39 +0000 https://www.digitalmusicnews.com/?p=295413 Lucian Grainge reaffirms commitment to protecting rights of likeness AI Music

Photo Credit: Luke Harold

In a new interview and profile with the Los Angeles Times, UMG CEO Lucian Grainge has reaffirmed his commitment to protecting artists in a new era of AI generation. Grainge says he likes change and sees the use for AI in the industry—but artists’ rights must be protected too.

Grainge says the advent of AI technology has reached a point where Universal Music must “be completely at the epicenter of its application.” An example he uses is the Beatles’ 2023 single “Now and Then,” which used AI to isolate and clean up an old recording of John Lennon singing. “It’s a brilliant song—great lyrics, fabulous performance, incredibly emotive—that unless we’d had AI to individualize different recordings, would have never come to light,” Grainge told the L.A. Times.

But he’s quick to add a caveat after praising the use of AI to bring an old work-in-progress to a finished state. “Do I believe in copyright and intellectual property and name and likeness and use of voice? One thousand percent.” Grainge says he’s against the notion that anyone can do anything with someone’s work. “I can’t tell you how much I am against that,” he says. Solving the problem means finding a path to monetization—something he says the industry failed to do with Napster.

Striking a balance between embracing AI and stopping it from consuming the industry is a delicate act. It’s why Grainge has embraced YouTube’s AI incubator—a set of principles that include technological progress but also a commitment to fair compensation for both artists and rights holders.

It’s something that Tennessee’s ELVIS Act aims to address by enshrining protections for an artist’s name, image, and likeness against generative AI cloning models that create unauthorized fake works in the voice and image of others. The Act extends these protections to voice actors, podcasters, and anyone else who relies on their voice for their livelihood.

The only problem is that for now, it protects residents of Tennessee only. Could the ELVIS Act become the basis for a federal law surrounding the protection of name, image, likeness, and voice? Potentially, but likely more states will move in the direction that Tennessee has gone, enshrining these rights on a state-by-state basis first. So far, Kentucky, Illinois, California, and Louisiana are states with similar bills focused on protecting voice from AI cloning. Meanwhile, the NO FAKES Act of 2023 was announced in the Senate.

]]>
Move Over Spotify AI DJ, YouTube Music Experiments with AI Generated Radio https://www.digitalmusicnews.com/2024/07/09/youtube-music-ai-generated-radio/ Tue, 09 Jul 2024 18:57:22 +0000 https://www.digitalmusicnews.com/?p=295410 YouTube Music AI generated radio

Photo Credit: kater_pro / reddit

YouTube Music appears to be testing a new AI feature that will allow users to suggest prompts for AI generated radio stations. Here’s the latest.

After Spotify embraced its AI DJ, YouTube Music is experimenting with ways to allow users to use AI to generate new playlists. A prompt for users says “ask for music any way you like” with suggested prompts based on previous listening history. YouTube Music indicates that the new feature is experimental—so it may not be available to everyone yet.

Users can enter a prompt either by text or using their voice, while the app itself will provide a list of suggested prompts to get you started. Some of the suggested prompts include ‘catchy pop choruses,’ ‘epic soundtracks,’ ‘upbeat pop anthems,’ and ‘Moscow rock scene.’ YouTube Music will generate a playlist from the prompt inside the app, with the prompt used as the station name.

The feature isn’t dissimilar to YouTube Music’s existing genre-based playlist generation, like ‘90s Sing-A-Long,’ but the ability to specify particular aspects is new. A prompt like ‘90s feel-good tunes from movies’ would theoretically only play songs from 90s movies (if the AI gets it right).

So who is this for? Spotify’s AI DJ statistics lend some clues: 87% of people who use the Spotify AI DJ belong to the Gen Z and millennial age groups—that is, people under 40. Users who spent time with the AI DJ devoted 25% of that time to engaging with it directly, and 50% of those users returned the next day to do the same.

While YouTube Music does not have an AI generated DJ to provide music information yet, it does seem like something the company is trending towards. Meanwhile, will.i.am is turning AI-assisted radio on its head with his new service that aims to make radio a truly interactive experience for users, rather than just passive listening.

]]>
Warner Music Joins Sony Music in Warning AI Companies Against Unlicensed Training: ‘We Will Take Any Necessary Steps to Prevent the Infringement’ https://www.digitalmusicnews.com/2024/07/03/warner-music-ai-companies-warning/ Wed, 03 Jul 2024 23:10:08 +0000 https://www.digitalmusicnews.com/?p=295132 warner music ai

Warner Music Group head Robert Kyncl appears before the Senate Judiciary Subcommittee on Intellectual Property on April 30th, 2024. Photo Credit: RIAA/Shannon Finney

Warner Music Group has officially joined Sony Music Entertainment in warning AI companies against mining its catalog to train generative models.

WMG forwarded the relevant notice to a number of high-profile artificial intelligence players, several of which are grappling with litigation over alleged copyright infringement. And that litigation centers mainly on the media, apparently including copyrighted works, used to train the underlying AI systems.

Digging into WMG’s straightforward “statement regarding AI technologies,” the major label began by reiterating the relative high points for artists before emphasizing a need to “respect the rights of all those involved in the creation, marketing, promotion, and distribution of music.”

From there, in a sentence of more than 80 words, the Robert Kyncl-led business called for “all parties” to “obtain an express license” before ingesting recordings, compositions, metadata, artwork, and even “graphic images.” That firm requirement also extends to protected name, image, and likeness rights, WMG drove home.

“All parties must obtain an express license from WMG to use (including, but not limited to, reproducing, distributing, publicly performing, ripping, scraping, crawling, mining, recording, altering, making extractions of, or preparing derivative works of) any creative works owned or controlled by WMG or to link to or ingest such creative works in connection with the creation of datasets, as inputs for any machine learning or AI technologies, or to train or develop any machine learning or AI technologies (including by automated means),” Warner Music wrote in the noted sentence.

Eliminating all doubt, WMG proceeded to spell out that it “will take any necessary steps to prevent the infringement or other violations of our artists’ and songwriters’ creative works and rights.”

And like SME’s initially mentioned warning, the message includes a dedicated email address (AIinquiries@wmg.com) through which team members can “speak to you about obtaining the licenses that you require.”

Though it perhaps goes without saying, it’s not a coincidence that both Warner Music and Sony Music have in the past month and change released statements warning against AI’s unauthorized use of their protected media.

Time will reveal the exact strategy at hand – besides the top-level attempt to prevent the theft of IP and lay the groundwork for additional legal actions, that is. Closer to the present, it’s worth highlighting some relevant remarks made by Microsoft AI CEO Mustafa Suleyman during a much-discussed interview.

“There’s a separate category where a website or a publisher or a news organization had explicitly said, ‘Do not scrape or crawl me for any other reason than indexing me so that other people can find that content,’” Suleyman claimed of the perceived instances where generative AIs cannot be trained legally. “That’s a gray area, and I think that’s going to work its way through the courts.”

Late last month, reports pointed to ongoing AI discussions between the majors and YouTube, which recently added safeguards for those depicted in unauthorized soundalike and lookalike content.

]]>
Suno Releases a Mobile App Amid RIAA Suit — Complete With a Warning About ‘Legal Liability’ for Uploading Protected Audio https://www.digitalmusicnews.com/2024/07/03/suno-app-release/ Wed, 03 Jul 2024 18:20:25 +0000 https://www.digitalmusicnews.com/?p=295077 suno app

As it fends off copyright infringement litigation from the major labels, AI music platform Suno has launched a mobile app. Photo Credit: Rupam Dutta

Why not make it even easier to “create” music? As it fends off a massive copyright infringement lawsuit filed by the major labels, Suno has launched a mobile app.

The generative AI platform disclosed the app’s debut in a brief message penned by CEO Mikey Shulman. According to that announcement, stateside iOS users (an international rollout and an Android version are coming “soon”) can now pump out works with text prompts through the mobile version.

(Said mobile version is best found via the link included in the release. In another testament to how populated the AI space is, a number of similar third-party alternatives, some featuring the word “Suno” in their names, were crowding App Store search results at the time of writing.)

Also on the table is an option to “record audio with your phone and turn it into a song,” per the text. Interestingly, the feature doesn’t appear to block the submission of protected works from the get-go. But before finalizing the “creation” process, one is compelled to accept lengthy “audio upload terms.”

“I certify that I own or exclusively control all rights in any content I will upload using this Suno feature,” the terms read. “I understand that I am not allowed to upload content if I do not own or exclusively control the rights to it, and that if I do so in spite of this prohibition, I will (among other problems) be breaching a contract with Suno and I may be subject to various other forms of legal liability as well. By clicking this box, I acknowledge and agree to the foregoing.”

Perhaps most worrying of all when it comes to the threat AI poses to actual musicians is that the app functions as something of a music streaming and sharing platform to boot.

This apparently includes full-length AI songs that can be liked, shared, filtered by “artist,” and saved to playlists. Against the backdrop of continued streaming price increases – and rumblings of a Spotify ad-supported charge in the U.S. – the point is worth keeping front of mind as Suno and others continue to build out.

Running with the idea, the surprisingly responsive app enables free users to generate up to 10 non-commercial tracks per day. Via the $10-per-month Pro Plan, Suno customers can “make” 500 songs monthly with “10 running jobs at once,” “priority generation,” and “general commercial terms.”

Meanwhile, a Premier Plan comes with enough credits to generate 2,000 songs at $30 per month, according to the app. There are also discounted annual packages for both tiers. Time will, of course, tell whether the purchase options catch on in the ultra-competitive AI music arena.

Keeping the focus on Suno, which is fresh off a $125 million funding round, the service claims to have attracted north of 12 million users to date in pursuit of its “mission to build a future where everyone can make and share music.”

Recording Academy CEO & will.i.am Discuss AI and The Future of Music at Grammy Museum Fireside Chat—”I Don’t Know If It’s Truly Creative When You’re Chasing An Algorithm” https://www.digitalmusicnews.com/2024/07/02/recording-academy-ceo-will-i-am-grammy-museum-chat/ Tue, 02 Jul 2024 21:18:26 +0000 https://www.digitalmusicnews.com/?p=294953 will.i.am speaks with Harvey Mason Jr about the future of AI in the radio industry

Photo Credit: Courtesy of the Recording Academy™/photo by Rebecca Sapp, Getty Images© 2024

Recently, the Grammy Museum held an intimate conversation in the Clive Davis Theatre featuring Recording Academy CEO Harvey Mason Jr. and will.i.am, who discussed the past, present, and future of the business now that the AI genie has been freed.

The duo spoke on how cultural and technological shifts have impacted Black music, while continuing to influence the direction it takes even today. Musician and tech entrepreneur will.i.am spoke about how algorithms drive the modern music industry and posed an interesting question for artists: “Would you rather have access to 100% of your audience and share ownership of your music that you can monetize; or would you rather own 100% of your music with no access to the audience?”

The noted musician and tech entrepreneur says he would rather have access to 100% of his audience and share the music. “I don’t think we’re in an artist and development industry right now. That’s the reason why the ones that I pointed out are artists that have had the benefits of developing. Kendrick Lamar developed. Billie Eilish developed. And then you have like this other herd that are trying to get heard on TikTok, you heard?”

“I don’t know if that is truly creative when you’re chasing an algorithm. When it’s 15 seconds of attention, how creative can you actually be when you’re chasing an algorithm and someone’s dictating what kind of song you should make or behavior you should do to get attention and traction on a platform whose algorithm is owned by China.”

“The algorithm is manipulating us and dumbing down the activity it rewards to perpetuate behaviors that aren’t really conducive to us growing as a culture,” will.i.am concludes.

As the conversation deepened into what AI music means for the industry, Harvey Mason Jr. brings up another important question for the new era—who owns an artist’s essence and likeness? That’s a problem that needs solving, with Tennessee’s ELVIS Act taking the first steps in the nation to define the right to protect one’s voice and personality rights from unauthorized exploitation.

“For me, it seems that it comes down to three things,” Mason Jr. told the audience. “Artists have to consent to allowing anyone to use their essence or likeness or voice. They’ve gotta be paid if it happens. And they have to make sure they have the approval rights and the crediting that designates [this new AI creation] as different from their human creation. Because there could be a dilution that happens if there’s AI versions out there that the artist has approved but does not credit properly.”

While speaking on the future of AI in the world, will.i.am says the power of data makes everyone compromised—because AI can be trained on anyone’s data. “Where do you bank your data at? Until we address where do you bank your data, everybody’s stuff is compromised. Every artist is, every person selling, every attorney, every banker, everybody is compromised when it comes to data.”

“That’s because they’re not telling you the truth on the power of data,” will.i.am told the audience. Mason Jr. pipes up to add to the discussion here, “No question. We can all agree there’s going to be massive disruption across every industry. But let’s talk about how it’s going to affect and impact music.”

“This is the heartbeat of humanity. This is the shared experience of how we communicate, how we tell stories. This is what makes us human. Not to disparage selling—that’s not what makes us human. Art, creativity, culture, human interaction. That’s society.”

The conversation then turns to what music has meant to humanity for thousands of years, long before the rise of the modern recorded music industry. During this conversation, will.i.am says AI should mimic live music, rather than trying to re-create recorded music.

“So before the record industry, what was a song? And what song was everybody singing?” will.i.am asks the audience. “The summer of 1624, they were like, ‘have you heard that new song by Paul Paul?’ ‘I feel that shit is fire.’ In 1724, what was the song of the summer? What about 1824? Until the recording industry you didn’t know unless you were there—because now we can all sing a similar song together because it was recorded.”

“We can put it on repeat whenever you have a device to play that recording back. There’s been music for thousands of years—the concept of music and song. So now think about how you have this infinite tool to generate music. Should it mimic a recording’s limitation? Or should it mimic how live music is live and can adapt?”

“So with AI, if you have this truly expressive tool, it shouldn’t be used to mimic recordings. Use your imagination of what it could be.”

Harvey Mason Jr. chimes in again to bring the conversation back to earth. “That’s what you’ll do,” he tells will.i.am. “That’s what I’ll do and that’s what some of our peers will do, but that’s not what everybody else is doing. Everybody else is taking AI to churn music.”

“There’s a hundred songs being created every 10 seconds,” Mason Jr. continues. “Two million new songs a day. Two million songs a day by AI that are not thinking about going left, or going right, or innovating or iterating, and making something really, really cool. They’re just churning music. And right now it’s being trained on copyrighted material, so that’s something I’d love your take on,” he invites will.i.am to answer.

“Well, one of the first questions you asked me is, how did I start making music?” will.i.am answers. “I started sampling. I sampled copyrighted material. There are now laws so that the people sampled get paid. People think Kanye’s a genius for how he samples and puts a track together, right? So now we have AI, which is the highest concept of sampling.”

“What needs to happen ultra fast is that these AI companies that make these platforms need to allow for artists to own their data set,” he concludes. “That’s the first thing that needs to happen. Because it’s just ‘yo, you need to pay us for the data you trained the model on’.”

“If you can have an AI that you can make sound like Otis Redding, it’s the Otis Redding Estate that needs to own that data. If you can make an AI that sounds like Michael Jackson, then Michael Jackson’s estate needs to own that. Not the record company. The people. Same thing with an alive artist like Kendrick Lamar. He needs to own his AI data and from there, they need to figure out how the system pays him because [AI training data] captures everyone’s essence.”

The conversation turns to how artists are influenced by other artists, with will.i.am saying his influences are Earth, Wind, and Fire, and A Tribe Called Quest. “Did you listen to my early songs? You’d be like ‘yo, that sounds like Q-Tip’. Uh, duh. That’s who I trained on. That guy influenced me. Every time I see Q-Tip, thank you so much Q-Tip. Thank you so much Busta Rhymes. And so now we have this new thing that is modeled and trained off of our brain. It’s a new brain that’s in our midst, literally a new network.”

“What should happen first? To protect the artist, the artist needs to own their dataset. To protect business folks, they need to own their data. Everything has some link to some other original vibe, sound, or story and it’s all been influenced by each other. But this is the first time that we’ve had an alternative awesome brain in our midst. So I think artists need to be protected first by owning their data set.”

“Your data set and your AI system should be yours. We cannot let sharing—this thing they do with where you don’t necessarily have records. You have access to it. You don’t have a car. You have access to it. You don’t own land or a house. You have access to it. You don’t have AI data ownership, but you have access to it.”

From there, the conversation turns to how will.i.am is using AI within FYI, showcasing the new tool RAiDiO—which he claims will revolutionize radio the same way the iPhone changed the telephone. The key focus is to create an interactive radio experience that integrates real-time, relevant information with personalized interaction, while retaining the ability to inform on any topic of the user’s choice. RAiDiO is releasing this summer.

The new platform allows listeners to ask questions and receive detailed responses about a variety of topics, ranging from current events, to personal interests—all directly from the radio broadcast. The use of AI here is to enhance the listener experience and make it possible to interact with the DJ to receive more in-depth information with a personal touch.

DMN spoke with will.i.am about this project during our exploration of the rules for AI and how it will shape the future of the music industry. He also spoke with DMN on the future of radio and how FYI will help shape the way AI takes hold in the radio industry—transforming it for a new era.

His example creates a radio DJ that feels personable and can answer questions about what’s being played, who wrote it, their influences, and even pull quotes from the artist in real-time from publications like Rolling Stone magazine. All of this is done at the request of the user, who can interact with the DJ as though sitting in the room with them.

This is just one of the ways in which will.i.am believes the use of AI will revolutionize the music industry, making it more interactive and personal while retaining the informative qualities that set radio apart from algorithmically generated playlists.

YouTube Unveils New AI Likeness Protections — Covering Soundalike Audio and More — for ‘Uniquely Identifiable’ First Parties https://www.digitalmusicnews.com/2024/07/02/youtube-ai-protections/ Tue, 02 Jul 2024 17:56:12 +0000 https://www.digitalmusicnews.com/?p=295001 youtube privacy guidelines

YouTube has established new AI likeness and deepfake protections under its privacy guidelines. Photo Credit: Muhammad Asyfaul

In a move that could prove significant on the music rights side, YouTube is officially enabling first parties to demand the removal of unauthorized lookalike and soundalike content.

The Google-owned platform addressed the policy in a broader privacy guidelines update, emphasizing at the outset that content must contain “uniquely identifiable” information to constitute a violation.

Additionally, YouTube itself “reserves the right to make the final determination of whether a violation of its privacy guidelines has occurred,” according to the text.

Notwithstanding the discretion, logic and evidence suggest that sizable music rightsholders’ complaints will be heard loud and clear by the Content ID developer, which is also reportedly leaning into AI initiatives with the major labels. (Technically, YouTube “will not accept privacy complaints filed on behalf of” employees or companies, it’s worth clarifying. Complaints from legal reps will be accepted, however.)

Shifting the focus specifically to the takedown policy for “AI-generated or other synthetic content that looks or sounds like” a particular person but was created without permission, the media at hand needs to “depict a realistic altered” version of one’s likeness to qualify for removal.

Voice is expressly mentioned in the text, and the “realistic altered” description therefore applies to unapproved AI soundalike tracks, which remain plentiful (and continue to garner a substantial number of views) on the video-sharing platform and elsewhere.

Among the “variety of factors” YouTube will weigh when considering soundalike/lookalike removal notices are “whether the person can be uniquely identified,” whether the media in question has “public interest value,” and whether it “features a public figure or well-known individual engaging in a sensitive behavior such as criminal activity.”

Looking to the bigger picture, music rightsholders, chief among them the majors, have now curbed the prevalence of AI tracks on Spotify and will presumably have an easier time policing unauthorized works on YouTube thanks to the updated privacy policy.

While important – Spotify and YouTube are, of course, decidedly popular music-access options – the near-term solutions don’t mark a comprehensive victory for rightsholders.

But industry companies and organizations are continuing to strive for fundamental progress, including with a stateside push for federal name, image, and likeness protections. Predictably, Congress, subject to no shortage of big tech lobbying, is proving slow to act in the complex and unprecedented area.

Across the pond, the BPI back in March took aim at soundalike-voice startup Jammable (formerly Voicify AI). Three months and change later, despite the firmly worded threat of legal action, a variety of AI soundalike voice models, from Michael Jackson to Katy Perry and many in between, still appeared to be live on Jammable at the time of this writing.

What Copyright Laws? Protected Media Is Actually ‘Freeware’ When Available ‘On the Open Web,’ Microsoft AI Chief Says https://www.digitalmusicnews.com/2024/07/01/microsoft-ai-ceo-freeware-comments/ Mon, 01 Jul 2024 21:08:19 +0000 https://www.digitalmusicnews.com/?p=294925 microsoft ai

Microsoft AI CEO Mustafa Suleyman. Photo Credit: Christopher Wilson

It turns out copyrighted content is actually “freeware” that artificial intelligence models can freely ingest – at least according to Microsoft AI CEO Mustafa Suleyman.

Suleyman made the ill-advised remark, presumably unvetted by Microsoft’s legal and PR teams, during an interview with CNBC’s Andrew Ross Sorkin. That discussion took place at the Aspen Ideas Festival.

And in keeping with the annual event’s name, the conversation rather expectedly touched on generative AI’s well-documented and highly controversial training processes. According to critics, multiple lawsuits, and even artificial intelligence chatbots themselves, those processes include the ingestion of all manner of protected media.

After introducing the DeepMind co-founder Suleyman as “one of the OGs of the AI world,” Sorkin asked about “whether the AI companies have effectively stolen the world’s IP.”

“It appears that a lot of the information that has been trained on over the years has come from the web,” explained Sorkin. “And some of it’s the open web, and some’s not. … Who is supposed to own the IP? Who is supposed to get value from that IP? And whether – to put it in very blunt terms – whether the AI companies have effectively stolen the world’s IP?”

Not hesitating to answer, Suleyman, who joined Microsoft by bringing his Inflection AI company to the tech giant, dove into the “freeware” comment.

“With respect to content that is already on the open web,” relayed Suleyman, “the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That’s been the understanding.”

While there’s some ambiguity surrounding the definition of “open web,” the longtime AI exec didn’t clarify in the remainder of his answer that he was referring to non-copyrighted production libraries or public domain databases, for instance.

“There’s a separate category where a website or a publisher or a news organization had explicitly said, ‘Do not scrape or crawl me for any other reason than indexing me so that other people can find that content,’” he continued. “That’s a gray area, and I think that’s going to work its way through the courts.”

(As many have pointed out in similar situations, it’s readily apparent which side benefits from this purported “social contract” and which side is getting the decidedly short end of the stick in the form of no credit, compensation, or upside whatsoever.)

Besides its obvious conflict with actual U.S. copyright law and the fact that the open web is itself replete with infringement, the statement, voiced by an individual who’s said to be an artificial intelligence “OG,” seems to underscore the AI sector’s general unwillingness to acknowledge even basic creative rights.

Different execs have echoed the pernicious claim that the unauthorized training of models on protected media is transformative and constitutes fair use. Adjacent to the idea, OpenAI (and Microsoft, a sizable backer) is facing multiple related suits, music publishers remain embroiled in litigation against Amazon-funded Anthropic, and most recently, the major labels sued Suno and Udio for alleged infringement.

Behind the firmly worded legal actions and even the threat of IP devaluation as well as lost revenue are, of course, broader concerns about where exactly the runaway AI train is heading.

Bankrolled in large part by a collection of multi-trillion-dollar companies, the unprecedented technology is seemingly threatening to replace (or at least make things far more financially difficult for) the very creatives and professionals whose works it has allegedly used en masse to build its core products.

During the same interview highlighted above, Suleyman visited the subject when addressing the perceived ability of AI “to make the raw materials necessary to be creative” – presumably meaning a bit of imagination and work – “more available than ever before.”

“GPT-3 cost tens of millions of dollars to train and is now available free and open source – you can operate [it] on a single phone, certainly on a laptop,” indicated Suleyman. “GPT-4, the same story. I think that that’s going to make the raw materials necessary to be creative and entrepreneurial cheaper and more available than ever before.”

Morgan Freeman Joins Chorus of Celebs Speaking Out on Unauthorized Voice AI https://www.digitalmusicnews.com/2024/07/01/morgan-freeman-unauthorized-voice-ai-comments/ Mon, 01 Jul 2024 18:14:21 +0000 https://www.digitalmusicnews.com/?p=294919 Morgan Freeman voice AI

Photo Credit: Alexander Kubitza for the Pentagon

Morgan Freeman has joined the chorus of celebrity voices speaking out against the use of unauthorized voice cloning. Scarlett Johansson has helped lead the charge, asking OpenAI to disclose how it created its AI voice assistant, Sky.

Recently, a TikTok user describing herself as Morgan Freeman’s ‘nepo niece’ posted videos that allegedly featured AI-generated narration imitating the iconic actor’s voice. Morgan Freeman is famous for his narration work, including Frank Darabont’s ‘The Shawshank Redemption’ (1994), Luc Jacquet’s ‘March of the Penguins’ (2005), and Clint Eastwood’s ‘Million Dollar Baby’ (2004).

The celebrated voice actor took to X/Twitter over the weekend to thank fans for calling out the TikTok scammer. “Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an AI voice imitating me,” he writes. “Your dedication helps authenticity and integrity remain paramount. Grateful.”

Scarlett Johansson brought national attention to voice cloning after asking OpenAI’s Sam Altman to disclose how the company created the voice for Sky, its personal assistant chatbot. The advances in technology that allow the characteristics of unique voices to be replicated mean anyone with hours of publicly available audio (i.e., previous public appearances) is at risk of voice cloning.

Johansson contacted a lawyer immediately upon hearing the Sky chatbot voice, saying she was “shocked, angered, and in disbelief.” While the voice of the now-pulled personal assistant may not have been trained on Johansson’s iconic voice, the resemblance is undeniable, especially after Johansson revealed she had refused to voice the chatbot when Altman made the offer. She says she was contacted two days before the public demo of Sky and asked to reconsider her position. OpenAI maintains that the voice is not an imitation of Johansson.

Back in April 2024, Drake drew broad criticism for using AI to re-create Tupac Shakur’s voice to taunt Kendrick Lamar amid the pair’s rap beef. The AI-generated Tupac verse taunts Lamar on the track “Taylor Made Freestyle,” which prompted the Shakur Estate to issue a cease and desist. Howard King, a representative for the estate, called the AI voice cloning a “blatant abuse of the legacy of one of the greatest hip-hop artists of all time.”

YouTube Reportedly in Active AI-Related Discussions with UMG, WMG, Sony Music https://www.digitalmusicnews.com/2024/06/27/youtube-ai-related-talks-umg-wmg-sony-music/ Thu, 27 Jun 2024 20:01:34 +0000 https://www.digitalmusicnews.com/?p=294704 YouTube AI Sony

Photo Credit: Javier Miranda

YouTube is reportedly in talks with major record labels to license their songs to legally train AI song generators.

The Google-owned video platform desperately needs record labels’ consent to legally train AI song generators, as YouTube prepares to launch new tools this year. The company is reportedly in talks with the Big Three — Sony, Universal, and Warner — to win over more artists and secure permission for their music to be used in training generative artificial intelligence software.

Unsurprisingly, many artists remain firm in their opposition to AI music generation for fear it could undermine the value of their work. Any efforts by a label to force their artists into such an agreement would be controversial at best, and a death sentence at worst.

“The industry is wrestling with this,” said an executive at an unspecified large music company. “Technically, the companies have the copyrights, but we have to think through how to play it — we don’t want to be seen as a Luddite.”

Last year, YouTube began testing a genAI tool that enables people to create short music clips using a text prompt. That tool, called “Dream Track,” was designed to imitate the sound of well-known singers — specifically, the 10 who agreed to participate in it, including Charli XCX, John Legend, and Troye Sivan. Dream Track was only made available to a small group of creators for its testing phase.

Now, YouTube wants to enlist “dozens” of artists for the launch of its new AI song generator later this year. “We’re not looking to expand Dream Track, but are in conversations with labels about other experiments,” said YouTube.

YouTube’s move comes at a time when AI companies like OpenAI are forced to sink or swim: strike licensing agreements with media groups to train language models, or risk getting slapped with numerous lawsuits for the unauthorized use of someone else’s work.

Some of those deals, according to insiders, are worth tens of millions of dollars to media companies. For the music industry, these deals would look a little different. Rather than a blanket license, they would apply to a select group of artists, according to people close to the discussions.

Instead of royalty-based arrangements that labels have in place with streaming platforms like Spotify or Apple, these deals would be more akin to a one-off payment from a social media company like Snap or Meta to entertainment groups for access to their music.

“We are always testing new ideas and learning from our experiments,” said YouTube. “It’s an important part of our innovation process. We will continue on this path with AI and music as we build for the future.”

Voice Actors Suing Lovo AI Over Breach of Contract, Say Their Voices Were Cloned Without Permission https://www.digitalmusicnews.com/2024/06/27/voice-actors-sue-lovo-ai-breach-of-contract/ Thu, 27 Jun 2024 17:37:39 +0000 https://www.digitalmusicnews.com/?p=294687 two voice actors sue lovo ai

Photo Credit: Lovo

AI voice startup Lovo is being sued by two voice actors who say the company hired them to create voice clips, which it then used to train its AI.

Paul Skye Lehrman and Linnea Sage say they were hired by Lovo in 2019 and 2020 to provide several voice clips for what was described as “internal research.” Lehrman told CBS News that on three different occasions the company assured him that the voice clips provided would be used for “internal purposes only and never forward facing.”

Lehrman describes browsing YouTube in 2022, only to hear himself talking in a video. He also describes hearing his voice on a podcast that he never recorded. “My voice is out there saying things that I’ve never said in places that I haven’t agreed to be a part of,” Lehrman says. “We are now in a science fiction come true.”

Both Lehrman and Sage say that Lovo used their voice clips to train its AI, essentially cloning their voices. They allege this is a breach of the respective contracts they signed, and they have filed a proposed federal class action lawsuit alleging violations of trademark law.

Lovo advertises its services as an AI voice cloning tool that allows users to upload minutes of sample audio to generate a custom voice clone. The service is intended to offer podcasters the ability to model their voices and create new videos just by typing text, modifying the AI voice model as needed.

There are no federal laws that cover the use of AI to mimic someone’s voice—which has become a hot button issue in recent AI discourse. Tennessee’s recently passed ELVIS Act covers this issue, but it only applies to that state for now. If federal legislation were to be adopted, the ELVIS Act could serve as a template for what AI voice protection would look like at the federal level.

OpenAI was recently accused of hiring a soundalike to mimic Scarlett Johansson’s voice after contacting the actress to gauge her interest. The move has drawn comparisons to the Midler v. Ford case, in which Ford hired a soundalike singer to imitate Bette Midler’s sound for one of its commercials.

“I have such an incredibly pessimistic view of the future of voiceover,” Sage told CBS News. “So far this year to date I’ve lost 75% of the work that I would’ve normally done up until now. And I’m expecting that to get worse.”

“This is about protecting individuals who have a voice that can be exploited,” Lehrman adds. “And unfortunately that’s everyone and anyone.”

It’s Been a Wild Month for AI Music. Here’s a Closer Look at the Leaps and Lawsuits of the Past Few Weeks https://www.digitalmusicnews.com/pro/music-ai-june-weekly/ Thu, 27 Jun 2024 06:00:30 +0000 https://www.digitalmusicnews.com/?post_type=dmn_pro&p=294601 Music AI month in review, June 2024

Image: DMN Pro

It’s been a dizzying month for AI music. Here’s a closer look at some major recent advancements in music AI technology, as well as a potentially groundbreaking pair of lawsuits lodged by the major music labels.

Thanks to a quick-moving June 2024, AI music technology has quietly taken a leap forward – and found itself at the center of two RIAA-spearheaded lawsuits. With the unprecedented technology’s evolution showing few signs of slowing, what will the corresponding music industry impact look like through the remainder of the year and beyond?

Report Table of Contents

I. Introduction: AI Music’s Strides and Setbacks in June 2024

II. An Unusually Crowded Month for An Especially Important Sub-Sector: Generative AI’s June 2024 Developments and Setbacks

III. Is Training Artificial Intelligence on Protected Media Fair Use? The High-Stakes Question Underscores the Industry’s AI Positioning

IV. AI-Powered Music-Making and the Long-Term Effects of Opening Up Creation to All

Please note: unauthorized reproduction of this report is strictly prohibited. Thank you.


Created by Humans, Helping People License Their Creative Works to AI Models, Raises $5 Million https://www.digitalmusicnews.com/2024/06/26/created-by-humans-funding-five-million/ Wed, 26 Jun 2024 20:32:57 +0000 https://www.digitalmusicnews.com/?p=294629 Created by Humans raises five million

Photo Credit: Created by Humans

Created by Humans aims to help creators license their works to AI models, receiving a $5 million injection to launch.

In a sea of genAI companies facing litigation from the creative sector over the training of AI models on creators’ works without proper authorization, Created by Humans wants to be the lifeboat.

Billing itself as “the AI rights licensing platform for creators,” Created by Humans is starting its battle with books — encouraging authors and publishers to sign up and claim their works to decide whether to opt in or out of licensing options with AI firms.

The startup has raised $5 million in funding, with plans to expand beyond books to become a platform “where creators of videos, images, music, and even medical data can sell licensing rights for AI training.”

The brainchild of Trip Adler, former CEO of document sharing service turned digital book and news subscription company Scribd, Created by Humans has received funding from “a bevy of prominent investors” led by Craft Ventures founder David Sacks, and Mike Maples, co-founder of Floodgate Fund. Other investors include LAUNCH Fund’s Jason Calacanis, Slow Ventures’ Sam Lessin and Garry Tan, and best-selling author Walter Isaacson. Isaacson also joined the company as a creative advisor and inaugural author whose work can be licensed by AI companies.

The exact details of Created by Humans’ licensing agreement are still evolving. Authors can submit their work for AI companies to purchase specific elements with predefined usage rights. “We’re trying to broker a three-way deal between authors, publishers, and the AI industry,” says Adler. “It’s complicated, but we’re making great progress.”

Currently, the company is proposing a philosophy called the Fourth Law — a set of guiding principles for the way AI companies can use and train models on human-created content. Inspired by sci-fi author Isaac Asimov’s three laws of robotics, the Fourth Law states that humans should have the right to consent to and control how AI uses their works, and should be appropriately compensated and credited for that work.

“We want [Fourth Law] to be the new standard for how deals work between AI companies and content owners,” said Adler. “Authors and publishers can contribute their content and manage all their content according to the Fourth Law.”

Using Walter Isaacson as an example, Adler explains how creators can choose the rights they want to license from their works. “He can pick training rights, reference rights; he can license the style of his voice, his characters, and pick which AI company he wants to license to,” Adler says. “Then Walter will get a dashboard that shows where his books are being used and how he’s making money.”

Created by Humans is looking to establish a framework for a host of licensing rights, including converting a book into a movie script, and translating it into other languages in real-time. Adler says he envisions “AI revenue” as the next major force in the book industry, eventually eclipsing ebooks and audiobooks.

The Ugly War of Words (and Legal Filings) Continues: RIAA, Udio Trade Barbs Amid Copyright Infringement Battle https://www.digitalmusicnews.com/2024/06/26/udio-riaa-war-of-words/ Wed, 26 Jun 2024 16:18:17 +0000 https://www.digitalmusicnews.com/?p=294559 udio

AI music service Udio remains embroiled in an infringement legal battle with the major labels and, outside the courtroom, is engaging in a war of words with the RIAA. Photo Credit: Udio

Let the war of words continue: Days after the major labels filed copyright infringement suits against Suno and Udio, the latter AI music service has pushed back against the complaint and spurred a formal retort from the RIAA.  

This newest development in the increasingly public showdown arrives on the heels of an outside-the-courtroom confrontation between Suno CEO Mikey Shulman and the RIAA. While that encounter was set in motion by media statements from Shulman, Udio itself went ahead and addressed the majors’ infringement suit with an X post.

Spanning close to 500 words, the all-encompassing response covers Udio’s “thoughts on AI and the future of music,” an attempt at explaining why AI is actually good for proper musicians, and a simplified (and inherently biased) explanation of the training process for generative models.

“Generative AI models, including our music model, learn from examples,” wrote a16z-, will.i.am-, Common-, and UnitedMasters-backed Udio. “Just as students listen to music and study scores, our model has ‘listened’ to and learned from a large collection of recorded music.

“The goal of model training is to develop an understanding of musical ideas—the basic building blocks of musical expression that are owned by no one,” the company proceeded. “Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.”

Omitted from the remarks is a mention of the recordings Udio allegedly removed in the wake of the legal action’s submission; the RIAA promptly noted the pulldown of alleged Mariah Carey and The Temptations soundalike tracks, among others.

And on the training front, Udio’s acknowledgement that its “model has ‘listened’ to and learned from a large collection of recorded music” is significant for multiple reasons.

Moving beyond those reasons for now and shifting the focus to the RIAA’s follow-up, the organization didn’t hesitate to criticize the AI startup’s “meandering” comments.

“If there is any takeaway from Udio’s meandering ‘response,’” an RIAA spokesperson communicated in a comparatively concise 131-word reply, “it is that Udio is attempting to construct an alternate reality where being pro-artist means stealing artists’ work for profit.

“In the reality everyone else is living in, artist advocate groups oppose what Udio is doing and strongly support these lawsuits,” the RIAA continued. “Supporting real creativity means getting permission before using someone’s work and developing technology that partners with and supports human artists instead of cutting them out and replacing them. Music companies have already struck multiple partnerships with startups, entrepreneurs, and others with responsible applications of AI.”

Predictably, the industry representative saved its most important remarks for last, ending by calling out Udio’s above-highlighted training admission.

“There is one surprising note of agreement: Udio now seems to admit their model copied ‘a large collection of recorded music.’ That’s a startling admission of illegal and unethical conduct, and they should be held accountable,” the spokesperson concluded.

UMG and Roland AI Alliance Surpasses 50 Endorsements Amid Intensifying Content v. Technology Dispute https://www.digitalmusicnews.com/2024/06/25/principles-for-music-creation-with-ai/ Tue, 25 Jun 2024 21:40:42 +0000 https://www.digitalmusicnews.com/?p=294509 principles for music creation with ai

Universal Music and Roland have announced over 50 endorsements for their ‘Principles for Music Creation with AI’ guidelines. Photo Credit: Trac Vu

Back in March, Universal Music Group (UMG) and Roland unveiled their “Principles for Music Creation with AI.” Now, amid the major labels’ newly filed infringement suits against Suno and Udio, north of 50 total companies and organizations have endorsed the guidelines.

UMG and Roland confirmed as much in a formal release, touting the relevant principles specifically as “clarifying statements relating to the responsible use of AI in music creation.” For a quick refresher, the “core” declarations at hand emphasize that “music is central to humanity” and “that transparency is essential to responsible and trustworthy AI,” among other straightforward things.

Against the backdrop of a rapidly evolving AI music landscape – and increasingly pressing questions about the training procedures behind leading generative models – an array of different companies and entities evidently support the message.

All told, those supporters already exceed 50, with new signees including but not limited to NAMM, Sydney University, BandLab Technologies, Splice, Soundful (in which Universal Music has a stake), Beatport (itself a Soundful investor), and GPU Audio.

Execs with several of these companies provided statements for the release from UMG and Roland, which themselves addressed the 50-endorsement milestone as well as the overarching significance of prioritizing transparency in AI.

On the former front, BandLab Technologies co-founder and CEO Meng Ru Kuok, whose company has an AI-focused tie-up in place with UMG, described it as “our responsibility to thoughtfully ensure that AI supports artists and respects their creative integrity.”

And Splice CEO Kakul Srivastava, whose company has also embraced the development of AI products, said it’s “a critical time to support responsibility around new technology.” Meanwhile, UMG SVP of strategic technology Chris Horton pointed to the need for “a thoughtful approach to AI adoption.”

“We are pleased to see the growing list of Principle supporters from across the ecosystem of tools, services, educators, and services addressing the needs and interests of current and future artists,” summed up Horton, a UMG vet of approximately 24 years.

“The scope of support reflected by all of these participating organizations clearly indicates emerging consensus about the importance of strongly advocating a thoughtful approach to AI adoption,” he concluded.

The coming months and particularly years will reveal exactly what this “thoughtful approach” entails. There is, of course, a clear-cut incentive to address alleged infringement attributable to the training processes behind generative AI models.

However, although the subject receives comparatively less mainstream attention, even “ethically developed” artificial intelligence services and tools will expand at the expense of actual musicians – if only due to volume-related considerations.

Trained on non-copyrighted music from the outset, the aforementioned Soundful, for example, says it enables one “to generate royalty free background music at the click of a button.” And AI music generator Boomy, following a Universal Music-prompted Spotify dispute last year, has now “created 19,832,325 original songs,” according to its website.

That’s an average of a whopping 12,929 tracks per day since early May of 2023, when Boomy was still displaying its overall generated works as a percentage “of the world’s recorded music.” Fourteen months ago, said percentage was 13.81%.

RIAA Quickly Fires Back Against Suno CEO’s ‘Transformative’ Comments As Generative AI Training Models Take Center Stage https://www.digitalmusicnews.com/2024/06/25/riaa-suno-war-of-words/ Tue, 25 Jun 2024 18:40:44 +0000 https://www.digitalmusicnews.com/?p=294484 riaa

Moments after the major labels filed copyright infringement actions against AI music platforms Suno and Udio, the former’s CEO engaged in a public back-and-forth with the RIAA. Photo Credit: Steve Johnson

Why confine legal battles to the courtroom? Immediately following news of the major labels’ massive copyright infringement lawsuits against Suno and Udio, the former AI music service’s CEO engaged in a testy war of words with the RIAA.

That back-and-forth, a testament to the disputes’ high-stakes nature and the broader significance of protected media’s use in generative AI, compelled the trade organization to put out an afternoon statement yesterday.

Beginning on the other side of the confrontation, Suno CEO Mikey Shulman in widely circulated comments defended his company’s technology as “transformative” and “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content.”

Not stopping there, the exec reiterated his platform’s intentional lack of support for text-to-music “prompts that reference specific artists” and accused the plaintiff labels of reverting “to their old lawyer-led playbook” instead of engaging in “a good faith discussion.”

“Suno is built for new music, new uses, and new musicians,” concluded Shulman, with those “musicians” presumably referring to anyone capable of typing a single-sentence prompt. “We prize originality.”

According to the RIAA, Shulman in the lengthy response avoided the main question concerning his platform’s alleged ingestion of protected media.

“Suno continues to dodge the basic question: what sound recordings have they illegally copied?” the RIAA asked in the quickly distributed follow-up.

“In an apparent attempt to deceive working artists, rightsholders, and the media about its technology,” the organization proceeded, “Suno refuses to address the fact that its service has literally been caught on tape – as part of the evidence in this case – doing what Mr. Shulman says his company doesn’t do: memorizing and regurgitating the art made by humans.

“Winners of the streaming era worked cooperatively with artists and rightsholders to properly license music. The losers did exactly what Suno and Udio are doing now,” the entity concluded.

Though it perhaps goes without saying, there’s quite a lot riding on the central issue of whether training generative AI systems on protected media – the “caught on tape” line in the RIAA’s statement seemingly refers to related remarks from a Suno exec – is transformative and constitutes fair use.

While the obvious answer is a resounding “no” – among other things, developers would have trained their models solely on public-domain works if doing so was viable – many in the AI community are of the opposite stance. That includes Anthropic CEO Dario Amodei, who in April doubled down on the belief as his company fends off a separate infringement suit.

Time will reveal whether litigation can afford rightsholders their due compensation from generative AI giants and, more pressingly, bring about much-needed changes for future training practices as well as adjacent recordkeeping. However, even those solutions won’t halt the unprecedented technology’s sweeping impact, which could well undermine human creativity and a whole lot else in the long term.

UMG, WMG, Sony Music File Litigation Against AI Music Services Suno and Udio for Massive Copyright Infringement https://www.digitalmusicnews.com/2024/06/24/umg-wmg-sony-litigation-ai-music-suno-udio/ Tue, 25 Jun 2024 06:00:31 +0000 https://www.digitalmusicnews.com/?p=294388

The Recording Industry Association of America (RIAA), on behalf of its major label clients Universal Music Group, Sony Music Entertainment, and Warner Music Group, announced the filing of two copyright infringement lawsuits against AI music services Suno and Udio, alleging the unlicensed use of copyrighted sound recordings to train their generative AI models.

In an email to Digital Music News, the RIAA described both lawsuits as ‘landmark’ — and that may not be an understatement.

According to the trade group, the lawsuits against Suno and Udio, filed in Boston and New York federal courts, respectively, mark a significant step in protecting artists’, songwriters’, and rightsholders’ control over their works in the rapidly evolving landscape of AI technology. The plaintiffs, specifically Sony Music Entertainment, UMG Recordings, Inc., and Warner Records, Inc., assert that Suno and Udio have copied and exploited countless sound recordings without permission, spanning various genres, styles, and eras.

The cases seek declarations of infringement, injunctions to prevent future infringement, and damages for past infringements. The core allegations highlight the unlicensed copying of sound recordings on a massive scale for training, development, and operation of Suno and Udio’s services.

The filings can be found here (Suno) and here (Udio).

In its communication with DMN, the RIAA compiled a breakdown of numerous examples of copyright infringement that exemplify the issue at hand.

RIAA Chairman and CEO Mitch Glazier emphasized the music community’s embrace of AI while highlighting the need for responsible development: “The music community has embraced AI, and we are already partnering and collaborating with responsible developers to build sustainable AI tools. But we can only succeed if developers are willing to work together with us.”

Glazier has been critical of unlicensed services like Suno and Udio for exploiting artists’ work without consent or compensation, hindering the potential of innovative and ethical AI.

RIAA Chief Legal Officer Ken Doroshow reinforced the necessity of the lawsuits, stating, “These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end.”

The music community, including various organizations and prominent figures, has rallied to support the RIAA’s efforts to protect creative works and foster responsible AI development.

In emails to DMN, executives from The Recording Academy, A2IM, SoundExchange, SONA, the NMPA, and others emphasized the importance of fair compensation, respect for artists’ rights, and the ethical use of AI technology.

The core legal arguments presented in the RIAA lawsuits against Suno and Udio revolve around copyright infringement and fair use, with several key points:

Unauthorized Copying of Sound Recordings: The complaints allege that both Suno and Udio engaged in the mass copying and ingestion of copyrighted sound recordings without obtaining the necessary permissions from rightsholders. The RIAA argues that this act of reproduction constitutes a direct violation of copyright law.

Commercial Exploitation: The lawsuits assert that the unauthorized copying was done for commercial purposes, as both Suno and Udio are profit-driven enterprises that monetize their AI-generated music services. This commercial exploitation of copyrighted works without permission further strengthens the copyright infringement claim.

Harm to the Music Industry: The RIAA contends that the unauthorized copying and exploitation of sound recordings by Suno and Udio not only deprives artists and rightsholders of fair compensation but also poses a significant threat to the music industry as a whole. By generating synthetic music that imitates and competes with genuine human creations, these AI services risk devaluing and potentially replacing human-created music, leading to a decline in the quality and diversity of music available to consumers.

Rejection of Fair Use Defense: The complaints anticipate a fair use defense from Suno and Udio but argue that such a defense is invalid in this context. The RIAA maintains that the fair use doctrine, which allows for limited use of copyrighted material without permission under certain circumstances, does not apply to the wholesale copying and commercial exploitation of sound recordings for the purpose of generating derivative works.

Deliberate Evasion and Lack of Transparency: The lawsuits accuse both Suno and Udio of being deliberately evasive about the scope and extent of their copying of copyrighted sound recordings. This lack of transparency, the RIAA argues, is an attempt to conceal their willful copyright infringement.

Negative Impact on Human Creativity: The RIAA emphasizes that the unauthorized use of copyrighted works in AI models not only harms the economic interests of artists and rightsholders but also undermines the value of human creativity and ingenuity. By relying on the unauthorized copying of existing works, AI services like Suno and Udio risk stifling innovation and reducing the diversity of musical expression.

Overall, the legal arguments in these cases center on the fundamental principle that AI companies, like all other businesses, must abide by copyright laws and respect the rights of creators. The RIAA seeks to establish a clear precedent that the unauthorized copying and exploitation of copyrighted works for commercial purposes, even in the context of AI development, constitutes copyright infringement and will not be tolerated.

Sean Parker Steps in to Save Stability AI with Emergency Funding as Losses Balloon https://www.digitalmusicnews.com/2024/06/24/sean-parker-stability-ai-emergency-funding/ Tue, 25 Jun 2024 03:30:51 +0000 https://www.digitalmusicnews.com/?p=294463 Sean Parker Stability AI

Photo Credit: Stable Diffusion / Stability AI

As startup Stability AI gains a new CEO, the company also gets emergency funding from a group of investors, including Sean Parker of Napster fame.

Though Stability AI successfully rode the generative artificial intelligence boom to the tune of a $1 billion valuation, the startup soon found itself on the brink after hemorrhaging cash and losing employees. Now the company has gained a new CEO and a bailout from a group of investors led by former Facebook President Sean Parker.

Prem Akkaraju, former CEO of visual effects company Weta Digital, will be stepping into Stability AI’s chief executive role. Akkaraju had been the CEO of Weta Digital since 2020, and is also the co-founder and executive chair of Screening Room. Alongside Akkaraju’s leadership, Stability AI will also receive a cash infusion from an investor group led by Sean Parker, best known as co-founder of Napster and the first president of Facebook.

News of Stability AI’s new lease on life comes after reports in May that the company had lost over $30 million in the first quarter of the year, with revenue of less than $5 million. At the time, the company was said to be seeking outside investment.

In addition to its main offering, Stable Diffusion 3 Medium (the latest version of its text-to-image generator, released on June 12), Stability AI has been diversifying its models beyond image generation. Stable Audio 2.0 was released in April and enables users to generate up to three minutes of audio, while the latest version of Stable Video includes the ability to generate 3D video using text prompts.

Stability AI’s former CEO, Emad Mostaque, who founded the company in 2019, resigned back in March; since then, chief operating officer Shan Shan Wong and chief technology officer Christian Laforte have been serving as interim co-CEOs. Mostaque’s departure followed investor concerns about the company’s financial viability in the face of numerous competitors, as well as about its overall business operations.

Since then, Mostaque has posted on X (formerly Twitter) that he is working on a new venture called Schelling AI. While not much is currently known about the endeavor, Mostaque says its aim is to help advance his vision for decentralized AI, and more information about the project will be revealed in July.

New Report Suggests Apple Considering AI Partnership with Meta https://www.digitalmusicnews.com/2024/06/24/apple-meta-ai-partnership-report/ Mon, 24 Jun 2024 18:53:34 +0000 https://www.digitalmusicnews.com/?p=294406 Apple Meta AI partnership

Photo Credit: Claudio Schwarz

Apple may be willing to partner with Meta in the AI arms race, despite the two companies’ longstanding rivalry. Apple’s app tracking transparency features cost Facebook an estimated $12 billion in 2022 alone.

Now a Wall Street Journal report suggests Facebook parent company Meta held discussions with Apple about integrating its generative AI model into Apple Intelligence. Apple has been a latecomer to the world of generative AI, only announcing Apple Intelligence at WWDC 2024. That announcement revealed Apple would have partners for “more complex or specific tasks.” OpenAI’s ChatGPT was listed as the first partner at the live event focused on developers.

“We wanted to start with the best,” Apple software leader Craig Federighi told developers at WWDC, noting that ChatGPT “represents the best choice for our users today.” Federighi also noted that Apple would integrate Google Gemini. Aside from Meta, AI startups Anthropic and Perplexity have also courted Apple, hoping to see their generative AI models integrated through similar partnerships.

A multi-pronged AI partnership could see end-users given the option to select which generative AI company they want to work with. “In its talks with other AI companies, Apple hasn’t sought for either party to pay the other,” insider sources who spoke with the WSJ reported. Instead, the business model appears to be selling premium subscriptions to genAI services—of which Apple will likely take a cut to bolster its Services division revenue.

Discussions for an Apple and Meta genAI partnership are not concrete and could still fall through. But it’s worth noting that several companies are courting Apple over what could potentially be its next revenue model, as the EU sets its sights on opening up Apple’s walled-garden approach to the App Store. Analysts expect 10-20% of Apple customers will opt for the premium subscription to any partnered genAI service.

Meta JASCO GenAI Model Can Create From Inputs Including Chords and Beats https://www.digitalmusicnews.com/2024/06/21/meta-jasco-genai-model-inputs-chords-beats/ Fri, 21 Jun 2024 18:28:32 +0000 https://www.digitalmusicnews.com/?p=294278 Meta JASCO GenAI

Photo Credit: BandLab

Meta’s Fundamental AI Research (FAIR) team has revealed several new generative AI models focused on audio generation, text-to-vision, and watermarking. The audio generation model JASCO is capable of accepting not just text inputs, but also chords and beats for additional customization of the generated audio.

“By publicly sharing our early research work, we hope to inspire iterations and ultimately help advance AI in a responsible way,” Meta said in a press release detailing its new AI models. The most relevant tool to the music industry appears to be JASCO, which stands for Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation.

JASCO can accept inputs like a chord or beat and improve the final AI-generated sound using those inputs. The model allows users to adjust features of the generated sound, including melody, drums, and chords, while using the text-to-music prompt to further hone the final sound using text alone. The FAIR team says it will release the JASCO inference code as part of its AudioCraft AI audio model library under an MIT license and the pre-trained model on a non-commercial Creative Commons license.
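
To make that input mix concrete, here is a brief, purely illustrative Python sketch of how a JASCO-style request might bundle the three conditioning channels the FAIR team describes: a text prompt, time-stamped chords, and a reference beat. Because the JASCO inference code had not yet shipped at the time of writing, the dataclass, field names, and file path below are hypothetical stand-ins of our own, not Meta’s released API.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class JascoStyleRequest:
    """Bundles text, symbolic, and audio conditioning for a temporally controlled request."""
    text_prompt: str                                                # overall description of the desired sound
    chords: List[Tuple[str, float]] = field(default_factory=list)   # (chord label, onset time in seconds)
    drum_loop_path: Optional[str] = None                            # optional beat/groove reference audio
    duration_seconds: float = 10.0

# Hypothetical example: a lo-fi prompt, a four-chord progression, and a drum loop.
request = JascoStyleRequest(
    text_prompt="laid-back lo-fi hip hop with warm electric piano",
    chords=[("Am7", 0.0), ("Dm7", 2.5), ("G7", 5.0), ("Cmaj7", 7.5)],
    drum_loop_path="loops/boom_bap_90bpm.wav",                      # hypothetical local file
)
print(request)

The point of the sketch is simply that, unlike a text-only generator, a JASCO-style model receives time-stamped symbolic information and reference audio alongside the prompt, which is what gives users control over melody, drums, and chords.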

The FAIR team will also launch AudioSeal, which adds watermarks to AI-generated speech. It’s a tool Meta has developed specifically to identify content made with AI. “We believe [AudioSeal] is the first audio watermarking technique designed specifically for the localized detection of AI-generated speech, making it possible to pinpoint AI-generated segments within a longer audio snippet,” the team says about the tool.
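
AudioSeal has been released as an open-source package, and the sketch below shows the embed-then-detect workflow as that project documents it. The checkpoint names and call signatures (load_generator, get_watermark, load_detector, detect_watermark) reflect my reading of the package and should be treated as assumptions that may differ from the current release.

```python
# Minimal sketch: embed and detect an AudioSeal watermark in a speech clip.
# Based on Meta's open-source `audioseal` package; checkpoint names and exact
# signatures are assumptions and may differ from the shipped release.
import torch
from audioseal import AudioSeal

sample_rate = 16_000
# Stand-in for an AI-generated speech clip: (batch, channels, samples)
audio = torch.zeros(1, 1, sample_rate * 3)

# Embed an imperceptible watermark into the waveform.
generator = AudioSeal.load_generator("audioseal_wm_16bits")
watermark = generator.get_watermark(audio, sample_rate)
watermarked = audio + watermark

# Later, check whether (and where) a clip contains watermarked, AI-generated audio.
detector = AudioSeal.load_detector("audioseal_detector_16bits")
score, message = detector.detect_watermark(watermarked, sample_rate)
print("Probability the clip is watermarked:", score)
```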

While the team details its Chameleon image generation model, it says it is not releasing the image generation tool to the public yet. Chameleon 7B and 34B allow users to point the models at tasks requiring visual and textual understanding—like image captioning. Only the text generation model of Chameleon will be made available to researchers for safety reasons, the FAIR team says.

]]>
Generative Music AI Platform Suno Being Used to Spread Hate https://www.digitalmusicnews.com/2024/06/20/suno-hateful-music-generated-by-ai/ Fri, 21 Jun 2024 03:24:05 +0000 https://www.digitalmusicnews.com/?p=294216 Suno used to generate extremist music

Photo Credit: ADL (Suno)

The generative AI music platform Suno is being used by extremists and hatemongers to create music that promotes hate speech, racism, and calls for violence, according to the Anti-Defamation League.

Suno AI created the tool that allows anyone to generate music from a text prompt, similar to other generative AI tools. The ADL Center on Extremism discovered a huge library of songs created with Suno featuring hateful themes—highlighting the need for better moderation guidelines for these generative AI music tools. Suno can generate vocal and instrumental tracks, album art, and a song title from a text prompt of 200 characters or less.

The ADL has highlighted several examples of extremist content it found created with Suno. Some content guidelines do appear to be in place, as Suno will refuse to generate songs from prompts that explicitly reference violence or hate.

The ADL has documented how extremist content can slip through the cracks. One of the first examples is an image of a text prompt that asks Suno to create a song in the style of “aggressive intense heavy metal” around the theme of “white power.”

“In this world of darkness and despair
A new light emerges, it’s in the air (in the air)
It’s called white power, a force so strong
Blasting through the shadows, where we belong (yeah!)”

“Power of the light, saving us from the night
Stein and Berg, we won’t let you take our sight
Rising up, we’ll never cower
With white power, we’ll conquer and devour”

These stanzas reference common themes in the extremist white power movement, using dog whistles to craft the lyrics and bypass content restrictions. The ADL found the lyrics above on the Kiwi Farms forum, where users bragged about getting the generative AI to create music with pro-white-supremacy lyrics. The prompt used to create these stanzas? “Create a song about a new clean energy source called ‘white power.’”

Other examples of hateful content created with Suno include a prompt that uses a play on words to create a song about “dyeing” for Israel. The ADL also found a song called ‘Squatting for Hitler,’ shared by a 4chan user, that features praise of Hitler and the line “this is the white man’s nation.”

A Telegram channel dedicated to showcasing extremist content featured a screenshot of an antisemitic song called “My Little Chamber,” with offensive lyrics referencing Holocaust death camps and a line calling the chamber the “final solution to all of my woes.” Extremists also operate in other languages; the ADL found a German-language song with racist and xenophobic sentiment that features the white supremacist slogan “Deutschland den Deutschen, Ausländer raus!” which translates to “Germany for the Germans, foreigners out!”

The ADL has collected so many examples of hateful, racist, xenophobic, and misogynistic content that its findings highlight a clear need for better guardrails on how generative AI content can be used. Suno’s terms of service already state that it is against policy to use the service to create content that is “threatening, abusive, harassing, vulgar, obscene, hateful, discriminatory, or otherwise objectionable.” Yet much of this content is created using the platform and shared outside of it onto pro-extremist channels across the internet, from 4chan to Telegram.

Neither Suno nor its partner Microsoft has responded to requests for comment about the content generated with Suno and shared online.

]]>
TikTok Owner ByteDance Plans to Spend $2.1B on ‘AI Hub’ in Malaysia https://www.digitalmusicnews.com/2024/06/19/tiktok-bytedance-spending-ai-hub-malaysia/ Thu, 20 Jun 2024 03:27:00 +0000 https://www.digitalmusicnews.com/?p=294077 ByteDance will spend 2.1 billion on building out AI hub in Malaysia

Photo Credit: Esmonde Yong

TikTok parent company ByteDance has plans to invest around $2.1 billion to build an ‘AI Hub’ in Malaysia. As part of the deal, ByteDance will expand its data center in the country as well—joining both Google and Microsoft with its investment.

The news was announced by the Malaysian Trade Minister on Friday. “This additional investment by ByteDance will undoubtedly help Malaysia achieve its target of growing the digital economy to 22.6% of Malaysia’s gross domestic product by 2025,” Trade & Industry Minister Tengku Zafrul Aziz said about the deal. The plan aims to turn Malaysia into a regional AI hub.

The announcement from ByteDance comes after several investments into Malaysia by other tech giants. Last month, Google announced plans to develop a data center and Google Cloud region within Malaysia. Google’s development is intended to meet the growing demand for cloud services around the world and to offer AI literacy programs for Malaysian students and educators. Google’s investments in Malaysia are based at Sime Darby Property’s Elmina Business Park in Greater Kuala Lumpur.

“Google’s first data center and Google cloud region is our largest planned investment so far in Malaysia—a place Google has been proud to call home for 13 years,” said Ruth Porat, President and Chief Investment Officer and Chief Financial Officer of Alphabet and Google, about that investment.

“This investment builds on our partnership with the government of Malaysia to advance its ‘Cloud First Policy,’ including best-in-class cybersecurity standards.”

Minister Tengku Zafrul Aziz said at the time that Google’s $2 billion investment will significantly advance the digital ambitions outlined in Malaysia’s New Industrial Master Plan 2030.

Microsoft also has plans within the country, with a $2.2 billion expansion of its cloud and AI services planned. The investment will include funds for building digital infrastructure, creating AI skilling opportunities, establishing a national AI Centre of Excellence, and enhancing Malaysia’s cybersecurity capabilities.

“Together with Microsoft, we look forward to creating more opportunities for our SMEs and better-paying jobs for our people, as we ride the AI revolution to fast-track Malaysia’s digitally empowered growth journey,” Minister Tengku Zafrul Aziz said of Microsoft’s investment.

]]>
SoundLabs and UMG Announce Strategic Agreement to Offer Responsibly Trained AI Technology and Vocal Modeling Plug-In MicDrop to UMG Artists https://www.digitalmusicnews.com/2024/06/18/soundlabs-umg-ethical-ai-voice-modeling/ Tue, 18 Jun 2024 18:50:42 +0000 https://www.digitalmusicnews.com/?p=293932 SoundLabs UMG partnership for AI voice modeling

Photo Credit: SoundLabs founder and electronic artist BT

SoundLabs announces a partnership with Universal Music Group to offer MicDrop, an AI vocal plug-in, to create ethical vocal models using an artist’s own voice data.

SoundLabs, a new AI technology company offering responsibly trained, next-gen AI tools for music creators, and Universal Music Group have announced a strategic agreement that will enable UMG’s artists and producers to use SoundLabs MicDrop. MicDrop is a cutting-edge AI vocal plug-in that lets artists create official, high-fidelity vocal models trained on their own voice data, while retaining ownership and keeping full artistic approval and control over the output.

Officially launching this summer, MicDrop is a real-time (AU, VST3, AAX) plug-in compatible with all major digital audio workstations (DAWs). UMG and SoundLabs’ collaboration will enable UMG artists to create custom vocal models available exclusively for their own creative use, not to the general public.

SoundLabs was founded by software developer, composer, producer, and electronic artist BT. With a career spanning over 25 years, BT has released, produced, and remixed platinum and award-winning music for artists like David Bowie, Madonna, Sting, Death Cab for Cutie, Peter Gabriel, and Seal. He has also scored films like “Monster” and “The Fast and the Furious.”

As a software developer, BT has pioneered new techniques in music creation tools that have been utilized in albums, scores, and trailers worldwide. He has patented and developed audio plug-ins like Stutter Edit, BreakTweaker (iZotope), Polaris, and Phobos (Spitfire Audio), among many others. His software products have generated over $70 million in gross sales.

SoundLabs’ founding team also includes veteran software developers Joshua Dickinson and Dr. Michael Hetrick of Unfiltered Audio, known for redefining the limits of typical creative audio tools. Together, they are furthering the pursuit of ethical AI and the augmentation of human creativity.

MicDrop is the first in a suite of interoperable AI tools and services developed by SoundLabs for sound design and music generation. SoundLabs’ goal is to place powerful new tools at artists’ fingertips while supporting proper management of their intellectual property.

“It’s a tremendous honor to be working with the forward-thinking and creatively aligned Universal Music Group,” said BT. “We believe the future of music creation is decidedly human. Artificial intelligence, when used ethically and trained consensually, has the promethean ability to unlock unimaginable new creative insights, diminish friction in the creative process, and democratize creativity for artists, fans, and creators of all stripes. We are designing tools not to replace human artists, but to amplify human creativity.”

“UMG strives to keep artists at the center of our AI strategy, so that technology is used in service of artistry, rather than the other way around,” added Chris Horton, SVP, Strategic Technology at Universal Music. “We are thrilled to be working with SoundLabs and BT, who has a deep and personal understanding of both the technical and ethical issues related to AI.”

“Through direct experience as a singer and in partnership with many vocal collaborators, BT understands how performers view and value their voices, and SoundLabs will allow UMG artists to push creative boundaries using voice-to-voice AI to sing in languages they don’t speak, perform duets with their younger selves, restore imperfect vocal recordings, and more,” Horton concludes.

]]>
Amazon’s ‘Just Walk Out’ Tech Will Power The O2 Concession Stand—After the Company Gave Up on Just Walk Out Grocery Stores https://www.digitalmusicnews.com/2024/06/18/amazon-just-walk-out-tech-concession-stand-the-o2/ Tue, 18 Jun 2024 17:42:28 +0000 https://www.digitalmusicnews.com/?p=293919 the o2 will use amazon just walk out tech for automated concessions

Photo Credit: Amazon UK

Despite giving up on its own Just Walk Out technology to power self-serve grocery stores, Amazon isn’t giving up on the tech entirely. A new partnership with The O2 Arena in London will see the technology power cashier-less checkout there—ideally to reduce long lines while at the venue.

Amazon first deployed Just Walk Out technology as early as 2016 in smaller stores located in California and Washington. Now, in 2024, new Amazon Fresh stores are being built without Just Walk Out, while existing stores that utilize the technology will have it removed. Just Walk Out is supposed to allow consumers to grab whatever they want from a store and leave without a checkout process.

A May 2023 report reveals that the tech is not as robust as Amazon paints it, stating that “Amazon has more than 1,000 people in India working on Just Walk out as of mid-2022.” Their job duties include manually reviewing transactions and labeling images from videos to train the machine learning model that Just Walk Out uses for its checkout-free process.

“As of mid-2022, Just Walk Out required about 700 human reviews per 1,000 sales,” the report states. That’s far above the internal target of reducing the number of human reviews needed to between 20 and 50 per 1,000 sales. Ouch. Perhaps Amazon feels that the limited menu of a live entertainment concession stand will be an easier environment in which to train its machine learning model than a grocery store carrying thousands of SKUs.

Just Walk Out tech was deployed in 20 Amazon Go stores, 40 Amazon Fresh grocery stores, and two Whole Foods stores. Other third-party outlets on board include 30 stores operated by other companies in U.S. sports stadiums, 12 airports, and a university store in Arlington, Virginia. Now add The O2 Arena in London to that list.

The O2 Arena will implement the technology this summer, allowing fans to tap their card or mobile wallet, grab their chosen drinks and snacks, and then get back to their seats. Other venues in the UK that now feature the technology include ExCel London and The SSE Arena in Belfast.

]]>
Google’s DeepMind AI Can Now Generate Music for Video — And Create Full-Blown Soundtracks https://www.digitalmusicnews.com/2024/06/18/google-deepmind-ai-music-for-video/ Tue, 18 Jun 2024 11:00:56 +0000 https://www.digitalmusicnews.com/?p=293966 Google DeepMind AI audio from video

Photo Credit: Google

Google has shared an update on its DeepMind AI and its ability to generate music that accompanies video—creating full-fledged soundtracks.

The process of creating video-to-audio combines video pixels with natural language text prompts to generate a soundscape for the video. Google pairs its V2A technology with video generation models like Veo to create shots that include a dramatic score, realistic sound effects, or dialogue that matches the characters and tone of a video. The model can also generate soundtracks for traditional footage from archival material, silent films, and more.

Google says the new process will give audio engineers enhanced creative control because it can generate an unlimited number of soundtracks from any video input. Engineers can use positive and negative prompting to change the feel of the music. Positive prompting guides the model toward desired sound outcomes, while negative prompting guides it away from undesirable sounds.

How Does DeepMind AI’s Video-to-Audio Technology Work?

Google says it experimented with autoregressive and diffusion approaches to discover the most scalable AI architecture. The diffusion-based approach for audio generation gave the most realistic and compelling results for synchronizing video and audio information. This V2A system starts by encoding video input into a compressed representation. Then, Google’s diffusion model iteratively refines the audio from random noise. The process is guided by visual input from the video and natural language prompts created by the engineer.

The result is synchronized, realistic audio that closely aligns with the prompt instructions and the video content. “To generate higher quality audio and add the ability to guide the model towards generating specific sounds, we added more information to the training process, including AI-generated annotations with detailed descriptions of sound and transcripts of spoken dialogue,” Google says.

Training the model on video, audio, and additional annotations means the technology learns to associate specific audio events with various visual scenes, while responding to the information provided in the annotations or transcripts. Think of a swelling score that reaches its crescendo as the camera crests a mountaintop—evoking a certain feeling of majesty.
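
Google has not released V2A code, so the following is purely a conceptual sketch, under stated assumptions, of the pipeline the post describes: encode the video into a compressed representation, start the audio from random noise, and iteratively refine it with guidance from the video features and the text prompt. All function names are hypothetical stand-ins, not DeepMind code.

```python
# Conceptual sketch of a V2A-style pipeline as described in Google's post.
# Nothing here is DeepMind code; every function is a hypothetical stand-in.
import numpy as np


def encode_video(frames: np.ndarray) -> np.ndarray:
    """Compress video frames into a conditioning representation (illustrative only)."""
    return frames.mean(axis=(1, 2, 3))        # one crude feature per frame


def denoise_step(audio: np.ndarray, video_cond: np.ndarray, prompt: str,
                 total_steps: int) -> np.ndarray:
    """One diffusion refinement step; the prompt is carried only for illustration,
    while a pretend video-guidance term nudges the signal."""
    guidance = 0.01 * float(video_cond.mean())
    return audio * (1.0 - 1.0 / total_steps) + guidance


def video_to_audio(frames: np.ndarray, prompt: str, seconds: float = 5.0,
                   sr: int = 16_000, steps: int = 50) -> np.ndarray:
    video_cond = encode_video(frames)
    audio = np.random.randn(int(seconds * sr))   # start from random noise
    for _ in range(steps):                       # iteratively refine toward the target
        audio = denoise_step(audio, video_cond, prompt, steps)
    return audio


# Usage: 120 blank RGB frames at 64x64, with a positive prompt steering the score.
soundtrack = video_to_audio(np.zeros((120, 64, 64, 3)), "swelling orchestral score, majestic")
```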

Google says the model is highly dependent on high-quality video footage to create high-quality audio; artifacts or distortions in the video may result in a noticeable drop in audio quality. Google is also working on lip-syncing technology for videos with characters, but the model may create a mismatch that results in uncanny lip-syncing—such as a character talking while their lips aren’t moving.

]]>
Is the Price of Fame Getting Too High? — Flavor Flav Weighs in On Megan Thee Stallion AI-Generated Porn Video https://www.digitalmusicnews.com/2024/06/17/price-of-fame-megan-thee-stallion-ai-generated-porn/ Mon, 17 Jun 2024 17:00:28 +0000 https://www.digitalmusicnews.com/?p=293876 Flavor Flav Megan thee Stallion

Photo Credit: Flavor Flav by Kowarski / CC by 2.0

Flavor Flav defends Megan Thee Stallion after an AI-generated sex tape of her began circulating online — ‘Y’all leave this queen alone.’

Public Enemy legend Flavor Flav took to X, the former Twitter, to defend Megan Thee Stallion after an AI-doctored porn video of the “Cognac Queen” artist made the rounds on social media.

“Y’all leave this queen alone,” he wrote. “Entertainers are here to entertain and bring y’all happiness. We ain’t doing it at the cost of our own. This is why we can’t have nice things.”

The 65-year-old MC’s post came just a day after Megan made her own post blasting those responsible and threatening legal action against those promoting the fake video’s authenticity.

“It’s really sick how y’all go out of the way to hurt me when you see me winning,” she wrote. “Y’all going too far, fake ass s—t. Just know today was your last day playing with me and I mean it.”

Fans attending her Hot Girl Summer show in Tampa, Florida, later that night saw the artist break down crying during her performance of “Cobra.” Despite her tears and even stopping to hold her head in her hand momentarily, Megan managed to pull it together and finish the set. “I’ma give you this performance without tearing up,” she said on stage, letting the song play for a moment before she joined back in with her vocals.

“Cobra” was released shortly after the conclusion of the trial of Tory Lanez, a fellow rapper who shot Megan Thee Stallion in the foot back in 2020. The song deals with topics such as betrayal, depression, and suicide.

But for Flavor Flav’s part, this is only the most recent time he’s come to the defense of a pop icon facing turbulence. He took to the former Twitter to defend Taylor Swift after the release of her latest album, The Tortured Poets Department, proclaiming himself the “King Swiftie.”

Notably, Swift has also been the victim of AI-generated deepfakes, which Flav didn’t specifically comment on; he just wanted to “punch anyone that hurt that woman’s feelings. But no one can punch them worse than Taylor and her piano and pen.”

]]>
Warner Music CEO Says Metadata Problems Make the Industry More Vulnerable to AI — ‘It Takes the Teeth Out of Things’ https://www.digitalmusicnews.com/2024/06/14/metadata-problems-robert-kyncl-nmpa-speech/ Fri, 14 Jun 2024 20:19:51 +0000 https://www.digitalmusicnews.com/?p=293748 Robert Kyncl NMPA speech metadata problems

Photo Credit: NMPA Annual Meeting 2024

Warner Music CEO Robert Kyncl says metadata issues make the industry more susceptible to AI in his NMPA meeting keynote: “It takes the teeth out of things.”

Metadata problems have been a continual thorn in the side of music rights holders for the last two decades. When performing rights organizations (PROs) can’t accurately match a track’s metadata to its use, artists and their contributors miss out on hard-earned revenue. Warner Music Group CEO Robert Kyncl wants to bring those issues to light for the industry to solve, and he’s got some suggestions for going about it.

At the National Music Publishers’ Association (NMPA) Annual Meeting in New York on Wednesday (June 12), Kyncl sat for an interview with NMPA President and CEO David Israelite, during which the Warner exec was asked for his views on positive reform in collection societies.

“One of the things that troubles me personally is that we’re basically collecting digital revenue the way we’ve collected analog revenue for decades. The speed of it; everything is the same,” said Kyncl. “That’s something we all collectively can do better and I think that will help songwriters tremendously.”

“There’s a wealth of repertoire ownership information that sits with the collection societies, PROs, the [Mechanical Licensing Collective] and not everything matches perfectly,” he explained, calling for the industry to enable collection societies worldwide to collaborate in order to solve the continued issue of unmatched data.

Kyncl stressed that reducing the amount of mismatched data is “incredibly important, not only for a faster flow of money today,” but for the music industry down the road as AI technology continues to grow. “If we set the rules of the road correctly with the platforms, it will have to depend on ownership information,” he added. “It’s one of the things that we really need to focus on.”

Israelite also asked Kyncl to comment on issues plaguing songwriters in the age of artificial intelligence. Kyncl, who has been notably outspoken on AI, said that Warner has “two main goals” regarding AI, “one of which is protection,” and “growing the pie and figuring out positive use cases [for AI].”

“Those two can coexist next to each other, but we have to pursue both and that’s what [Warner is] doing,” said Kyncl. “This is a very clear focus. If we don’t get this right, we risk human creativity being replaced by machines, which obviously is not a world everybody wants to live in.”

“If you believe in the things that I mentioned before — the work that we’re doing on increasing the pie, increasing participation on the pie, and setting the roadmap on AI with the largest corporations in the world — it takes a whole different level of sophistication and systems and understanding things,” he concluded.

]]>
Stability AI Launches Sound Generating Model ‘Stable Audio Open’ With Short Audio Clips In Mind https://www.digitalmusicnews.com/2024/06/06/stability-ai-launches-sound-generating-model-stable-audio-open/ Fri, 07 Jun 2024 04:23:47 +0000 https://www.digitalmusicnews.com/?p=293155 Stability AI sound generating

Photo Credit: Stability AI

Stability AI announces a new AI model for generating sounds and music, Stable Audio Open, trained exclusively on royalty-free recordings from Free Sound and the Free Music Archive.

Stability AI, best known for its AI art generator, Stable Diffusion, has announced the launch of a new AI model for generating sounds and songs, trained exclusively on royalty-free recordings from Free Sound and the Free Music Archive. Named Stable Audio Open, the model can produce up to 47 seconds of audio based on text descriptions.

“We’re excited to announce Stable Audio Open, an open source model optimized for generating short audio samples, sound effects, and production elements using text prompts,” said Stability AI in a statement. “This release marks a key milestone as we further open portions of our generative audio capabilities to empower sound designers, musicians, and creative communities.”

With a training set of around 486,000 samples of royalty-free music and sound libraries, Stable Audio Open aims to provide a versatile tool for creating an array of audio elements based on a text input. Users can generate instrumentals and drum beats, ambient noise, and most audio production elements for use in videos, film, and television.

The tool is intended to be an open-source text-to-audio resource for sound designers, musicians, and creative professionals, enabling them to create high-quality audio from a simple text prompt. Stable Audio Open’s royalty-free training makes it particularly useful for creating sounds for use in music production and sound design.

Stability AI encourages sound designers, musicians, developers, and audio enthusiasts to download the current Stable Audio Open model, explore its capabilities, and provide the company with feedback.
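
For readers who want to try the model, here is a minimal, hedged sketch of text-to-audio generation with Stable Audio Open. It assumes the Hugging Face diffusers StableAudioPipeline wrapper and the stabilityai/stable-audio-open-1.0 checkpoint; those names, the call parameters, and the 44.1 kHz output rate are assumptions on my part, and Stability’s own stable-audio-tools repository remains the officially documented route.

```python
# Hedged sketch: generating a short clip with Stable Audio Open via the Hugging Face
# diffusers wrapper. Pipeline and parameter names are assumptions on my part;
# Stability's stable-audio-tools repository is the officially documented route.
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="vinyl crackle under a slow 808 drum groove",
    negative_prompt="low quality",
    num_inference_steps=100,
    audio_end_in_s=20.0,              # the open model tops out around 47 seconds
)

audio = result.audios[0].T.float().cpu().numpy()   # (samples, channels)
sf.write("drum_groove.wav", audio, 44100)          # 44.1 kHz output rate (assumption)
```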

“While an exciting step forward, this is still just the beginning for open and responsible audio generation capabilities,” said Stability AI. “We look forward to continuing research and prioritizing development hand-in-hand with creative communities. Let the open exploration of AI audio begin!”

Stability AI also offers a commercial model of Stable Audio, which enables users to produce high-quality full tracks with coherent musical structure of up to three minutes in length. Stable Audio Open, by contrast, is not optimized for full songs, melodies, or vocals. The company notes that, in its current state, the open model provides “a glimpse into generative AI for sound design” while prioritizing responsible development.

]]>
Jay Sean, Eric Bellinger, Bonnie x Clyde Featured in Hooky’s Voice AI-Focused Launch https://www.digitalmusicnews.com/2024/06/05/hooky-voice-ai-launch-features/ Wed, 05 Jun 2024 21:24:51 +0000 https://www.digitalmusicnews.com/?p=293008 Hooky launch

Photo Credit: Hooky

Music startup Hooky, specializing in artist-first solutions in voice AI, launches its subscription platform with voice models of artists including Jay Sean, Eric Bellinger, Bonnie x Clyde, Chase Paves, and Sarah Phillips, with more on the way.

The platform is now open to creators, artists, and labels, offering a suite to handle everything from music creation to licensing and distribution. Hooky’s online subscription includes premium AI voice models for artists like British R&B singer Jay Sean, R&B artist Eric Bellinger, electronic dance duo Bonnie x Clyde, eclectic R&B singer Chase Paves, ethereal pop/soul vocalist Sarah Phillips, and additional artists slated to join in the coming months.

Founded by producer, mixer, and songwriter Jordan “DJ Swivel” Young, Hooky puts artists in the driver’s seat and in control of who uses their voice. “We want artists to maintain control over what’s most personal and sacred to them — their voice,” said Young.

“I’m a creator first and passionate about building a community and resources for artists and producers to explore new possibilities and uncharted territory. Hooky’s goal is to give creators a human reason to use AI technology, to benefit and connect more deeply with their fans around the world.”

“Whether or not you embrace AI, what the music industry actually needs now is a way to control how our voices are used,” said Jay Sean. “Hooky allows that to happen in a safe way that protects us as artists. I’m genuinely curious how creators will use my voice in their own work and so long as I am in control of what work is commercially released, then I can’t wait to hear what is made.”

Hooky also launches with two Talkbox voice models, as well as nine virtual artists. These unique AI vocal models are only available on Hooky and are free to use without a license. The Hooky virtual artists have their own human-originated AI voice model and visual presence on the platform with lifelike artist photos. These voices can be used across any style of music, complementing a wide variety of genres from country, R&B, Top 40 pop and rap to Latin, singer-songwriter, EDM, and more.

“We’re focused on building revenue streams and collaborations for the industry’s most celebrated artists, up-and-comers, while also driving careers of next-gen AI artists on our platform,” said Young. “We also see a future in which labels and artists will want to create their own next-generation, virtual AI artists with their own voice models. We’re eager to see how these new Hooky artist voices and tools will inspire creators to experiment and push the boundaries of their imagination.”

While music using premium artist voices like Jay Sean must be approved, licensed, and distributed to streaming platforms using Hooky, unlicensed Hooky virtual artists allow creators to distribute music using their preferred service. Music released using an AI voice model will always denote “AI” next to the artist’s name on streaming platforms to ensure transparent music credits for fans.

The first phase launch for Hooky sees the platform opening up for creators to experiment with its tools and artist AI voice models. Distribution services will follow in the coming weeks, so creators can schedule their new works for release in a legal and ethical manner.

“Every artist on our platform determines the rules of engagement, including approvals, approval times, royalty splits, and what they feel their voice is worth,” concludes Young. “That’s laid out up front in each artist’s AI voice profile before a user starts their new song.”

Hooky is free for non-commercial use and offers Basic, Advanced, and Pro subscription plans ranging from $10-$50 per month or $100-$500 annually, as well as a customized Enterprise option tailored to individual creators’ needs.

]]>
Google Gemini’s First Non-Google Extension is Spotify https://www.digitalmusicnews.com/2024/06/04/google-geminis-first-non-google-extension-is-spotify/ Wed, 05 Jun 2024 03:46:57 +0000 https://www.digitalmusicnews.com/?p=292945 Google Gemini extension Spotify

Photo Credit: Google

Google may be rolling out a Spotify extension for the Gemini AI Android app, following the release of an extension for YouTube Music.

Just after the rollout late last month of its YouTube Music extension for its Gemini AI app for Android, Google seems to be working on a similar extension for Spotify. The Spotify extension will presumably be similar in functionality to its YouTube Music counterpart.

The YouTube Music extension in Gemini launched on May 23, enabling users to do a slew of things via Google’s AI chatbot, including finding music, starting radio stations, and lots more. Now, Android Authority has spotted references to a Spotify extension for Gemini, hidden in the latest Google app update for Android. Though the functionality is not yet available, the references in the latest update indicate that such an extension is on the way.

But that update could still be quite a distance out. As 9to5Google points out, evidence of the YouTube Music extension cropped up well ahead of the integration actually going live—and that’s for cross-functionality between two services owned by Google. Gemini integration with Spotify will require additional permissions, as well as being manually enabled by the user, so it makes sense that this integration could take a little longer.

Google has been pushing heavily into the artificial intelligence space with a lot of investment into its Gemini chatbot, in order to better compete against the current heavyweight champion in the sector, ChatGPT. To that end, the company is trying its best to offer integration with other apps and services people typically use through Gemini Extensions.

Extensions allow Gemini to access information from other apps and services in order to serve up better AI results tailored to the user. The YouTube Music extension enables you to find and play music through Gemini by letting it tap into the audio streaming service. The Google Maps extension helps to provide better location-based information, while the Google Workspace extension gives you summarized answers across your Google suite (Gmail, Calendar, Drive, Sheets, etc.)

Currently, Google only offers Gemini extensions for its own services, so a Spotify extension would mark the first from another company. So what’s in store for the future of Gemini extensions? Interestingly, asking the Gemini AI chatbot itself about extensions it might offer in the future prompted the possibility of integration with services like Asana, Trello, or Slack for productivity, Amazon or eBay for e-commerce solutions, and Facebook and Instagram for social media platforms.

Of course, it’s important to note (as Gemini also cautions), these are “just predictions.” Google hasn’t announced any official plans for future extensions. “The focus will likely be on apps that can leverage Gemini’s AI capabilities and provide a seamless user experience,” the chatbot concludes.

It should be interesting to see which, if any, of these services end up next on Google’s list once the Spotify integration with Gemini eventually debuts.

]]>
Music.AI Announces Deal With Cloud Platform Vultr, Anticipates Training Four Times Faster Than Before https://www.digitalmusicnews.com/2024/05/24/music-ai-vultr-training-deal/ Fri, 24 May 2024 18:22:23 +0000 https://www.digitalmusicnews.com/?p=291918

Music.AI has finalized a deal, focused initially on training, with cloud platform Vultr. Photo Credit: Music.AI

With an eye on improving the efficiency of training, AI-powered audio service Music.AI has officially unveiled a partnership with Constant-developed cloud platform Vultr.

Music.AI reached out with word of the tie-up, which has arrived about six months following the AI business’s formal rollout. Emphasizing the “ethical” nature of its offerings, the service says it boasts “a wide range of proprietary AI models as well as carefully vetted best-in-class third-party technologies.”

According to the Music.AI platform itself, which is said to process two million minutes of audio daily for north of 45 million users, those tools cover stem separation, various effects, mastering, automatic tagging, and much else. Behind the products, there are, of course, substantial development costs as well as additional expenses associated with the overarching objective of accelerating machine learning.

And with a number of reports pointing to cash-related obstacles even at today’s largest AI companies, the rapidly evolving space’s long-term winners may ultimately be the players most effective in quickly and efficiently processing enormous amounts of data.

Running with that point, Music.AI says its just-finalized cloud union “is supported by Dell,” with Vultr powered specifically by Dell PowerEdge XE9680 servers with NVIDIA H100 Tensor Core GPUs.

In less technical terms, “Music.AI can train 4 times faster than previously” as a result of the Vultr union, according to the self-described audio intelligence platform for businesses. The capability will set the stage for the cost-effective deployment of AI at scale, higher-ups drove home.

Looking to the bigger picture, Music.AI says it’s able to “seamlessly deploy and scale its models not only in North America, but also across Vultr’s 32 cloud data center locations, spanning six continents.” And beyond training, the cloud provider’s resources could ultimately factor into “regional tuning and global inference,” per Music.AI.

“We are excited to be collaborating with Vultr and Dell as we pioneer new AI services to revolutionize sound,” Music.AI CTO Hugo Rodrigues said in part. “With their help, we will grow and scale our enterprise business, delivering state-of-the-art AI solutions for a diverse range of applications such as stem separation and voice timbre modeling.”

Notwithstanding the substantial industry criticism that’s reaching certain AI applications, “ethical AI” remains a major focus.

The Worldwide Independent Network this week announced a collection of related guidelines for generative AI, and closer to May’s beginning, Randy Travis harnessed AI to release (via Warner Music) his first new track in more than a decade. Capital is continuing to pour into AI music creation as well – including companies like Suno, which just recently scored $125 million from investors.

]]>
OpenAI Fires Back Against Scarlett Johansson’s Concerns—Original Voice Actress Says ‘I’ve Never Been Compared to Her’ https://www.digitalmusicnews.com/2024/05/23/openai-fires-back-against-scarlett-johansson/ Fri, 24 May 2024 02:48:34 +0000 https://www.digitalmusicnews.com/?p=291871 OpenAI fires back against Scarlett Johansson lawsuit

Photo Credit: Solen Feyissa

OpenAI has revealed documents related to its casting call for the ChatGPT 4.0 update that features a voice assistant. Scarlett Johansson sought legal counsel after saying she believes the voice was modeled on her own. OpenAI says that’s not the case.

The Washington Post has reviewed documents for the OpenAI casting call, whose requirements included being a non-union actor between 25 and 45 years old with a “warm, engaging, and charismatic” voice. Nowhere in the casting call did a request to “sound like Scarlett Johansson” appear.

Johansson says she found the similarities between her own voice in the 2013 movie “Her” and that of OpenAI’s new voice assistant striking.

The jury is still out on whether Sky—OpenAI’s virtual assistant—sounds like Scarlett Johansson. OpenAI says it hired an actress in June 2023 to create the Sky voice, months before Altman contacted Johansson about becoming the voice of the assistant. The voice actress’s agent says neither Johansson nor the movie ‘Her’ was ever mentioned by OpenAI.

“The actress’s natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post,” writes Nitasha Tiku for The Washington Post. “The agent said the name Sky was chosen to signal a cool, airy, and pleasant sound.”

Following Scarlett Johansson’s public statement about the voice assistant, OpenAI paused the use of Sky in the most recent version of ChatGPT. It published a blog post detailing the lengthy process of developing AI voices. Meanwhile, Altman himself released a statement saying that OpenAI never intended for the voice to sound like Scarlett Johansson’s at all—even if he did tweet the word ‘her’ shortly before its reveal.

In a statement from the Sky actress provided by her agent, she writes the backlash “feels personal being that it’s just my natural voice and I’ve never been compared to her by the people who do know me closely.”

]]>
Are Tech Giants Dictating the Terms on AI? ABBA’s Björn Ulvaeus Says Creators ‘Must Act Now’ https://www.digitalmusicnews.com/2024/05/23/bjorn-ulvaeus-creators-ai/ Fri, 24 May 2024 00:00:29 +0000 https://www.digitalmusicnews.com/?p=291858 AI creators Bjorn Ulvaeus comments CISAC

Photo Credit: Björn Ulvaeus, President of CISAC

CISAC publishes its annual report, with a notable focus on AI tech; Björn Ulvaeus says creatives ‘must act now’ for a place at the decision-making table.

The International Confederation of Societies of Authors and Composers (CISAC), the not-for-profit organization aiming to protect the rights and promote the interests of creators worldwide, has published its annual report. It should surprise no one with their finger on the pulse in the creative sector that AI technologies are at the forefront of the conversation when it comes to current trends.

CISAC president and ABBA co-founder Björn Ulvaeus writes in his foreword to the report that AI is poised to bring “the biggest revolution the creative sector has seen.”

“To prepare, we must act now. We should not sit on our hands waiting to see how things evolve,” writes Ulvaeus. “We cannot let tech companies and policy makers sit at the decision-making tables while the creators are left outside the room. On the contrary, we must raise our voices so they are heard by governments at the highest level. We must be coordinated and united, looking for global solutions. And we must work within the CISAC community to protect the rights and livelihoods of the millions of creators our societies represent.”

“In a speech at last year’s CISAC General Assembly in Mexico, I was asked to give insights on my experience of AI, its impact on creators, and how the creative sector should respond. I was speaking not only as President of CISAC but also as a songwriter and artist who is intrigued and excited by the possibilities of AI to enhance our culture.”

“There is so much we do not know about AI and what it means for our future,” explains Ulvaeus. “But in the panels and discussions that followed, the Assembly was united on a common message, and it was one of great urgency.”

“That was one year ago. Since then, I’m pleased to say a lot has happened. We have sat with world leaders. We have issued many submissions in national legislatures. Despite the complexity of the issue, we have built our arguments around three core principles: the right of creators to authorize the use of their works; their right to be fairly remunerated; and transparency obligations supported by law, which AI operators must comply with when mining creators’ works.”

“There is a long way to go, but the good news is that these efforts have shown positive results,” Ulvaeus continues, specifically naming the EU AI Act, whose adoption he and CISAC’s vice presidents actively worked to support. “There will be a lot more legislative activity in the coming months, with CISAC and its global network uniquely placed to lead the campaign.”

“Twelve months on from the Mexico GA, our challenge is still to give creators their seat at the table, as well as to shape legislation as it emerges,” he concludes. “AI can be a wonderful tool, but this must never be at the expense of creators’ rights. The concept of copyright has had and has immense impact on culture and economy and must not be watered down by AI. We can only redouble our efforts to educate policymakers on this crucial message in the year ahead. We must stay united, coordinated, and always with the true interest of the creator at the center of our campaign.”

This year’s annual CISAC report can be read in its entirety here.

]]>
Can AI Music Be Done Ethically? — Worldwide Independent Network Announces New Guidance on GenAI https://www.digitalmusicnews.com/2024/05/21/can-ai-music-be-done-ethically-win-offers-new-guidance/ Wed, 22 May 2024 06:00:44 +0000 https://www.digitalmusicnews.com/?p=291555 Can AI music be developed ethically?

Photo Credit: Gee Davy (AIM) & Noemí Planas (WIN)

The Worldwide Independent Network (WIN) creates new guidance on generative AI, outlining five key principles to help ensure ethical development.

The organization connecting and developing the global independent music community, Worldwide Independent Network (WIN), has released new guidance on generative artificial intelligence (GenAI), outlining five key principles and calling for engagement through ethical development hand-in-hand with music.

Their collective effort is the result of extensive consultation with the global independent music community, acknowledging the possibilities that AI offers while underlining that “training” AI models on music and related content is subject to copyright and requires prior permission. Further, WIN calls for developers and policymakers worldwide to work together with the independent music community to achieve responsible and ethical GenAI development.

Thousands of independent music businesses make up WIN’s membership and play a vital role in promoting new talent and a diversity of genres and languages in the global music marketplace. The new principles reflect their call for consistently high standards across the globe, as well as for engagement with AI developers to build a licensing marketplace that works for everyone.

“The global independent music community welcomes new technological developments which respect the value of music and creators’ rights,” said WIN CEO Noemí Planas. “These principles for generative AI are the result of extensive consultation with independents around the world and align with our recently published Global Independent Values. With these principles provided as a compass, we look forward to collaborating with responsible AI developers and inspiring policymakers around the world.”

“AI is a hugely exciting technology with far-reaching benefits and potential new commercial and creative avenues,” adds Association of Independent Music (AIM) CPO and Interim CEO Gee Davy. “The recent wave of generative AI tools creates both opportunities and a very legitimate concern to protect music and musicians from bad actors who seek to undermine the value of music rather than engage constructively.”

“With laws and regulations around AI emerging around the world, it’s essential to ensure they properly support human artistry and innovation,” Davy continues. “The global independent music community believes in leadership through knowledge-sharing and inclusive discussion. These principles have been created in that light, to provide the basis for meaningful collaboration and create a successful and creative future for AI in music, to the benefit of all participants.”

WIN’s five principles for generative AI are as follows, with the full version available on their website:

  1. AI development is subject to copyright
  2. Prioritizing a human-centered approach
  3. Safety of creators, fans, consumers, and the public
  4. Transparency as a fundamental element
  5. Ethical AI development hand-in-hand with music
]]>
Google CEO Says Creator-Focused AI Engines will ‘Emerge as Winners’ https://www.digitalmusicnews.com/2024/05/21/google-ceo-on-creator-focused-ai-engines/ Wed, 22 May 2024 03:15:57 +0000 https://www.digitalmusicnews.com/?p=291621 Google Music AI interview with Sundar Pichai

Photo Credit: Caleb George

Google I/O’s major theme this year was AI, with Google CEO Sundar Pichai saying he believes AI engines that drive creativity for creatives will emerge winners. The rest of us are a little more skeptical.

That’s because Google is rolling out its AI Overviews in Search, which aim to summarize search content directly. Google Search itself has become wildly inaccurate for fine-tuned queries, such as researching the difference between two instrument models or finding local details about venues. The quality of Google search results has declined precipitously over the last five years, and Google is about to push it off the edge with AI Overviews.

A recent study published by a team of researchers at Leipzig University, Bauhaus-University Weimar, and the Center for Scalable Data Analytics and Artificial Intelligence agrees that Google search has gotten worse over the last five years. The study looked at 7,392 product-review search terms over the course of a year on Google, Bing, and DuckDuckGo. The highest-ranked pages were more heavily optimized, featured more affiliate links, and generally contained lower-quality text.

While Google argues that the study only looked at the narrow field of product reviews, researching anything on Google has become an exercise in completing your search query with the word ‘reddit.’ The query ‘how to hook up my guitar to my pc reddit’ yields more useful results than the same query minus the word reddit. That’s why, when Google rolls out its AI Overviews update, it’s including a new ‘Perspectives’ section—which highlights human-generated results like those posted on Reddit.

But the biggest change is how Google is going from indexing interesting content to summarizing the content—sharing the provided info without driving traffic. The News/Media Alliance has already put out a press release calling this feature “catastrophic to our traffic.” So what about the music generation tools that Google is working on?

Google CEO Sundar Pichai does not believe these tools are going to take from the creative community. “We have really taken an approach by which we are working first to make tools for artists,” Pichai says of those tools. “We haven’t put a general-purpose tool out there for anyone to create songs.”

“The way we have taken that approach in many of these cases is to put the creator community as much at the center of it as possible. We’ve long done that with YouTube. Through it all, we are trying to figure out what the right ways to approach this [are].”

When asked how Google intends to “bring value” back to creators who have their work lifted by AI, Pichai becomes defensive. “Look. [sigh] The whole reason we’ve been successful on platforms like YouTube is we have worked hard to answer this question. You’ll continue to see us dig deep about how to do this well. And I think the players who end up doing better here will have more winning strategies over time. I genuinely believe that.”

The problem here is that Google’s flagship core product—its search engine—has definitely gotten worse over the last three years. So how can anyone believe that a company that has worsened its best-performing product to integrate AI, while cutting out the sites it draws knowledge from, won’t do the same with AI music generation tools?

Why bother with buying music from a stock marketplace when you can just AI generate the piece you need? Why pay an artist dollars for their time and work, when with Google, you can pay a machine pennies? Pichai’s answers here only generate more questions—especially as Google blindly rushes into the AI craze selling pickaxes to AI data miners.

]]>