ChatGPT. Stable Diffusion. DALL-E. Chances are, you’ve experienced the recent hype around generative AI. Every company, regardless of industry, needs to devise an AI strategy. Dev Patnaik provides a framework with which to start.
Given the deafening buzz in the six months since the public debut of ChatGPT, it can feel like we’ve entered an AI bubble. It’s pretty much all my clients and contacts want to talk about.
A normal business instinct when frenzied hype develops — as it has around AI right now — is to sit back and wait until the dust settles. And that’s a view I’ve heard at least one company express in describing its AI strategy. It’s usually a good instinct, but in the case of the remarkably fast emergence of AI, it’s dead wrong.
AI is so transformative that the dust isn’t going to settle. This is a rare case where leaders need to believe the hype and get in the game or risk facing some very unpleasant consequences.
The Three Time Horizons for AI Response
Companies can achieve more clarity and direction around AI by planning their response across three time horizons — the next 12 months, the next three years, and the next decade — all of which leaders need to start on now.
Horizon 1: Rapid Response
In the short term, the main focus needs to be on actively defending your business from the profound disruptive impact of AI. It’s about litigation, legislation and public relations. Companies need to take legal steps to ensure that AI models aren’t stealing their intellectual property. They have to educate political representatives about the technology and the need for protective regulations. And they have to get their message out to start owning the narrative about AI.
Many companies, particularly in the music and publishing fields, are starting litigation against AI firms to stop them from pilfering their content. Getty Images has launched a lawsuit accusing Stability AI, the company behind the image generator Stable Diffusion, of using more than 12 million of its photos to train its model. Universal Music Group had to threaten legal action to get Spotify and other streaming services to take down a viral AI-generated song that cloned the voices of artists Drake and The Weeknd. As the song climbed the Spotify and YouTube rankings, the “Fake Drake” track provided an eerie example of recommendation AI blithely promoting generative AI.
Any business that shares information needs to be ready to defend itself by including contract language that specifies how its data can be used by or exposed to AI systems. Waiting for Congress to act is a losing game. These are the same lawmakers who still haven’t figured out how to regulate crypto and who aren’t exactly renowned for their deep understanding of technology.
Read full article on Forbes.