Using Artificial Intelligence in Developing Broadcast Programming – Watch for Legal Issues

Broadcast Law Blog 2024-09-05

It seems like virtually every panel at every broadcast and media convention, at some point, ends up involving a discussion of Artificial Intelligence. Sessions on AI are filled to capacity, and sessions unrelated to the topic seem to have to mention AI to appear relevant.  Whenever a topic so thoroughly takes over the conversation in the industry, we lawyers tend to consider the legal implications.  We’ve written several times about AI in political ads (see, for instance, our articles here, here and here).  We will, no doubt, write more about that subject, including further action in the FCC’s proceeding on this subject (about which we wrote here), the Federal Election Commission’s pending action in its separate AI proceeding (consideration of which was again postponed at its meeting last week), and bills pending in Congress to address AI in political advertising. 

We’ve also written about concerns when AI is used to impersonate celebrities or to create music that too closely resembles copyrighted recordings (see, for instance, our articles here and here).  When looking for new creative ways to entertain your audience, you may be tempted to use AI to have a celebrity “say” something on your station by generating their voice.  As we noted in our previous articles, celebrities have protected interests in their identity in many states, and the advent of easily accessible generative AI that can impersonate anyone has prompted much recent activity to broaden the protections for the voice, image, and other recognizable traits of celebrities.  A federal NO FAKES Act has also been introduced to give individuals more rights in their voice and likeness.  So being too creative with the use of AI can clearly cause concerns.

Today, we’ll offer some more words of caution about the various uses of AI to create and deliver programming to broadcast stations.  This is already happening, and its use will, no doubt, become even more common in the future.  Two years ago, most of us had only considered artificial intelligence when watching science fiction movies, even though technology that could be termed AI has been around for over half a century.  In broadcasting, AI is being used for purposes that mirror its use in other industries, including making sense of big data files and offering suggestions for client prospecting.  Legal issues with privacy and data security abound with those uses, and we may address them in the future.  But AI has also made its way into programming, the subject of today’s article.  We’ve seen services introduced to provide human-sounding DJs for radio stations, and others designed to provide news and information reports localized for a station’s service area.

Using AI to provide compelling local content that distinguishes a broadcaster from its digital competition promises to help broadcasters maintain the relationships with their local communities that they need to survive, even at a time when broadcast budgets are stretched thin, making it difficult to hire staff to provide local service.  If AI can deliver content relevant to audiences at a lower cost, it might seem like a winning proposition for broadcasters.  But, as always, broadcasters need to recognize the risks that arise from such services and ensure that the services they are buying have safeguards to minimize those risks.  Plus, they should look for indemnifications if the risks are not properly managed. 

We note first that broadcasters should approach go-it-alone uses of AI cautiously.  Using free AI tools to create content carries special risks, starting with the privacy and security of any information provided to publicly available free AI technologies.  Because most of these technologies “learn” from the users of the platform, using them for proprietary purposes creates the risk that information you input into the AI program can later be surfaced to other users of the same program.  So inputting all your sales data into a free AI program, for example, or using these systems to optimize your programming, may well make some of that information discoverable to subsequent users of the platform – including, potentially, your competitors.  One basic safeguard is sketched below.
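To make the point concrete, here is a minimal sketch, in Python, of the kind of redaction step a station might run before any text leaves the building for a third-party AI service.  The patterns and placeholders are purely illustrative assumptions on our part, not features of any particular AI product, and a real deployment would need rules tuned to the data a station actually handles:

```python
import re

# Illustrative patterns only -- a real deployment would need rules
# tuned to the kinds of proprietary data a station actually handles.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "dollar_amount": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    """Replace sensitive values with placeholders before the text
    is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = (
    "Summarize this account note: Acme Motors (buyer jdoe@acmemotors.com, "
    "555-867-5309) renewed at $42,500.00 for Q4."
)
print(redact(prompt))
# Summarize this account note: Acme Motors (buyer [EMAIL REDACTED],
# [PHONE REDACTED]) renewed at [DOLLAR_AMOUNT REDACTED] for Q4.
```

Even a crude filter like this reduces the chance that account details or personal information end up in someone else’s training data – though it is no substitute for reading the AI vendor’s terms on how your inputs are stored and used.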

AI training on existing digitized content raises its own set of issues about whether the owners of the content being used to train the AI are entitled to compensation from the owners of the AI systems.  This area has spawned dozens of lawsuits and prompted legislative proposals – topics that are unsettled and worthy of their own article, which we will tackle at some point in the future.

But in considering AI for programming purposes, be aware that free AI services also may not be updated with new information regularly.  So, if you are trying to use these services to identify relevant news in your market, the product may be immediately outdated or, worse, it may make things up.  That happened to me when I asked a free AI program to write a blog post summarizing recently introduced legislation – the result sounded very real, except that all the factual information was wrong.  The AI “hallucinated” the details of the legislation.  I recognized the issue quickly because the AI article did not match information that I knew to be true, but in a fast-paced broadcast environment, you could end up putting false or even legally dubious information on the air if the output is not reviewed by real people.

But no matter who creates the AI products that provide programming services, there are traps if the human element is removed.  I have already heard local television station operators report that their news stories have begun to surface on local radio stations (and other media outlets using AI-delivered local news services) in almost word-for-word recitations of the TV station’s report.  While factual information itself cannot be copyrighted, the expression of that information can be.  Where an AI system has many sources of information about a local event, so that it can “learn” the facts from multiple perspectives, it may be able to synthesize a story about that event in its own unique words.  But, particularly in smaller markets where there are few sources of information about local events, the machine learning may not have enough sources to create a unique take on the event.  If the AI spits back a story using many of the same words as the original story from which it gathered its information, and you broadcast that story on your station, you could be held responsible for a copyright violation.  A simple screening step, sketched below, can at least flag the problem before air.
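As an illustration, here is a rough Python sketch of the kind of automated check a newsroom might run before airing AI-drafted copy.  It assumes the station keeps a file of local source stories to compare against; the threshold is arbitrary, and raw text similarity is only a crude proxy for the fact/expression distinction a court would actually apply:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two stories, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Arbitrary threshold for illustration -- it has no legal significance.
REVIEW_THRESHOLD = 0.6

def needs_human_review(ai_story: str, source_stories: list[str]) -> bool:
    """Flag AI copy that tracks any known source story too closely."""
    return any(similarity(ai_story, s) >= REVIEW_THRESHOLD for s in source_stories)

tv_report = "The city council voted 5-2 Tuesday to approve the new downtown parking garage."
ai_copy = "The city council voted 5-2 Tuesday to approve the new parking garage downtown."

if needs_human_review(ai_copy, [tv_report]):
    print("Hold for review: AI copy closely matches a known source story.")
```

A hit on a check like this does not mean a story infringes – it means a human should rewrite or verify the copy before it goes on the air.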

Issues can arise, too, if AI is unleashed as an on-air “host” or “talent” without well-defined guidelines on content.  We have all heard of AI “hallucinations,” where it generates false statements when asked a question – one of the reasons that Google’s AI search assistant was quickly scaled back a few months ago after it provided inaccurate (and in some cases potentially dangerous) answers to user inquiries.  Similarly, if an AI program is set up to give radio listeners a simulated live DJ experience, commenting not just on the music being played but also on the local events of the day, there is always the potential, depending on how the system is set up, that some hallucination could arise.  If broadcast over the air, that could create liability issues – whether an FCC violation or a defamation lawsuit should the AI accuse a local individual of something they did not do.  There have already been lawsuits (though none that I know of against a radio operator) based on AI hallucinations accusing individuals of crimes they did not commit.  One crude guardrail is sketched below.
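There is no foolproof filter for hallucinations, but a station can at least keep an unreviewed AI “host” from ad-libbing about real people.  Here is a minimal Python sketch of that kind of gate – the approved-name list and the crude proper-name pattern are hypothetical stand-ins for whatever controls a real vendor should offer:

```python
import re

# Hypothetical allowlist: names the AI host is cleared to mention on air
# (e.g., artists on the playlist, station staff).
APPROVED_NAMES = {"Taylor Swift", "Bruce Springsteen"}

def on_air_safe(script: str) -> bool:
    """Crude gate: hold any script that names a person who is not on
    the approved list, so a human can review it before broadcast."""
    # Naive proper-name detection -- two capitalized words in a row.
    for match in re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", script):
        if match not in APPROVED_NAMES:
            return False
    return True

script = "Up next, Taylor Swift. Word is John Smith was arrested downtown last night."
if not on_air_safe(script):
    print("Script held for human review before air.")
```

The point is not this particular check, which a real system would need to do far better, but the architecture: anything the AI wants to say about an identifiable person should pass through a hold-for-review step rather than going straight to the transmitter.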

As with any technical product that you are buying, get references for any AI programs that will be used to create content for your station.  Make sure that the company providing the tools knows what it is doing and has taken steps to minimize the risks.  Review the contracts to make sure that the company providing the services will indemnify you if its software ends up taking your programming in directions it should not go.  And check your broadcast insurance policies to make sure that they protect you not only when a human announcer says something wrong on the air, but also when a non-human AI program makes one of those errors.

AI offers broadcasters many opportunities but, as with any technology, you need to understand the cautions about its use.  And watch for legal developments in this area, as issues are developing quickly.