It has been 10 years since Theodore, a thirtysomething writer in a dead-end job, fell for his AI assistant, Samantha, in the Spike Jonze film “Her.” The offbeat love story and its stars, actors Joaquin Phoenix and Scarlett Johansson, charmed the critics and made a not-too-distant, hyper-connected future, where extreme loneliness drives people to seek affection from their technology, feel believable. (Actually, it’s not all that fictional. This is already happening.)
Where the film’s premise fell apart was the tech. Samantha was capable of expressing or at least mimicking warmth, empathy and creativity. It was a stretch to think artificial intelligence could become that good anytime soon.
What a difference a decade makes.
Consider that when “Her” opened in 2013, Apple’s Siri was just two years old, and Amazon’s Alexa and Google Assistant hadn’t even arrived yet. But soon, the public’s principal touchpoints with AI would come via imperfect voice assistants and customer service chatbots that couldn’t do much, vexed easily and produced more unintentional comedy, or frustration, than customer good will.
As a matter of direct experience, consumers would look for ways to avoid bots rather than engage them. Then OpenAI quietly released ChatGPT in November 2022 and let the public try it out for free. Once word got out, the tech became a viral hit.
The difference between this and earlier bots is a generational shift, especially with generative AI, a version of the technology capable of impressive language fluency, broad comprehension and the creative chops to generate art, music and written works. Built on large language models, which train on massive volumes of data to sand off the rough edges, bots like ChatGPT appear to substantially narrow the gap between machine and human effort.
Although it seems like this type of breakthrough took a long time to come, it couldn’t have happened at any other time.
Advances in modeling, better hardware and more powerful processing capability, along with the availability of mammoth, high-quality data sets, came together to elevate and accelerate what's possible. They aren't perfect: as an emerging technology, this new class of bot can form odd or errant conclusions, get stumped or churn out gaffes. But compared to previous generations, the difference is as stark as a 2040 Ferrari sitting beside a used 2001 Kia.
It's tempting to dismiss it as yet another tech fad, but experts have been clear: AI is here to stay, and companies that don't get on board now risk getting left behind. Analysts, scientists, business leaders, elected officials and many others expect that there will be virtually no aspect of modern life that this tech won't touch, from health care and pharmaceuticals to manufacturing, agriculture, shopping, personal relationships and workplace productivity. This is transformative technology, they say, and it's ready to drive a tectonic shift on par with the Industrial Revolution.
But that's precisely why critics are sounding alarms. The tech became so advanced, so quickly, that there are few, if any, guardrails in place. That's chilling, considering AI is poised to go everywhere. If there's bias in the training data, or in the people who provide human reinforcement (a necessary part of the development process), if private data and ownership rights aren't protected, or if tools are freely distributed without vetting, correcting that later could be almost impossible.
Already, the dangers are in view. Take deep fakes, for instance. These AI-created photos, videos and audio can digitally mimic a real person's likeness or voice with increasing realism. It's one thing to marvel at the Pope wearing Balenciaga, Robert Downey Jr.'s younger self in a commercial or the AI voices of Drake and The Weeknd in a viral song. It's another to realize how easy it is to digitally clone a politician to stoke disinformation or violence. Prompt an image generator like Midjourney or OpenAI's DALL-E, and it will crank out realistic-looking images of just about anything — like the phony arrest pictures of Donald Trump and Vladimir Putin, which went viral in March. These tools are publicly available to anyone, including criminals.
Arizona mother Jennifer DeStefano found that out when scammers faked her teenager’s voice to demand ransom. “It’s my daughter’s voice crying and sobbing, um, saying, ‘Mom,'” DeStefano recounted to ABC News. “And I’m like, ‘OK, what happened?’ She’s like, ‘Mom, these bad men have me. Help me, help me.’”
Scenarios like that are unnerving, and that's when the tech works as designed. Obviously, flaws have consequences as well. AI that draws the wrong conclusions because of bias or outdated data can cause real harm to individuals, specific groups and entire regions. (An earlier version of ChatGPT had no info on events after 2021.)
In May, the Biden administration met with the chief executive officers of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to press the matter of ethics and responsibility, and the U.K. is reportedly planning an AI summit this fall. But until there are regulations, fast-moving AI development is something of a wild west.
Brands need to carefully weigh potential partners and learn how their models were trained, where the data came from and its quality, the platform’s approach to preventing bias and whether the tech is being developed in an intentional and ethically sound way. Some due diligence now could safeguard businesses in the face of future court decisions, laws and regulations that could force changes later.
Careful consideration needs to go into business decisions around AI-related projects as well. The backlash against Levi Strauss & Co. this spring is a perfect example. In March, the iconic denim brand revealed a partnership with Lalaland.ai to test AI-generated fashion models as a way to add more diversity to its marketing campaigns.
“While AI will likely never fully replace human models for us, we are excited for the potential capabilities this may afford us for the consumer experience,” Dr. Amy Gershkoff Bolles, a lead for emerging tech at Levi’s, explained in a statement. It makes sense on paper, given the brand’s focus on new technologies, penchant for pushing envelopes and cause-based culture. But not to everyone.
Models, artists, diversity advocates and others quickly swooped in to criticize the brand for going with “fake diversity” instead of hiring real-life diverse human models. Data analyst Tulsa Rice, who goes by @FlyIngenuity on Twitter, didn’t hold back in a tweet calling it “digital blackface.”
New technologies tend to bring unforeseen pitfalls as well. Take the metaverse, for example. Fashion tested the limits of intellectual property law earlier this year, when Hermès sued Web 3.0 designer Mason Rothschild over his MetaBirkins NFTs. The jury didn't buy the defense's argument, which framed the digital goods as artworks and therefore protected speech. So in February, when the luxury brand prevailed, it wound up extending the real world's IP protections to the virtual world.
With generative AI and its knack for creative work, things look even messier. It’s already triggering fears and debates across industries.
The music industry's wake-up call came in April, when the deep fake song "Heart on My Sleeve," featuring the voices of Drake and The Weeknd, went viral. Neither artist actually performed on the track. The voices were fake, but the panic was real, sending Universal Music Group, the singers' label, scrambling to pull the song from every social media platform it could find.
Generative AI looms large in Hollywood too, even factoring into the writers strike that began in May.
The Writers Guild of America views the tech as a helpful tool for members, but wants guidelines on how studios can use it, plus assurances that the use of AI won’t compromise the writers’ rights or ownership of their work. The Alliance of Motion Picture and Television Producers, which represents the studios, doesn’t want to hobble a potentially powerful means of cost-cutting.
As the group told WWD’s sibling publication, The Hollywood Reporter, “Writers want to be able to use this technology as part of their creative process, without changing how credits are determined, which is complicated given AI material can’t be copyrighted.”
But the reality is not so definitive. According to the Harvard Business Review, rightful ownership of AI-generated works hasn’t been decided yet, and it’s not a simple or straightforward matter.
The journal cited Andersen v. Stability AI et al. from late 2022, when artists sued “multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works.”
In other words, the case argues that using someone else’s data or images to train an AI model essentially teaches it to mimic someone else’s style and may lead to knock-offs. That could happen, even if copying isn’t the intention. AI platforms that scrape training data from the web — and many of them do — could easily rake in copyrighted material that impacts the final product.
Who’s liable in that case? Is it the tech platform that created and trained the AI model? The brand that enabled the infringement? Perhaps a shopper used a generative AI tool to personalize a problematic garment. Is the customer on the hook?
Go deeper, and the questions compound. Take the customization tool, for example: Who owns the final design, the shopper who prompted the AI or the brand? What about the instructions given to the bot — can someone own the rights to, or even trademark, AI prompts? If a bot ends up creating a design similar to another designer's signature look, is the bot at fault or the user?
These are thorny issues, and they may point to a new reality in the AI era. When it’s possible, even easy, to create just about anything, then just about everything will be created. Like knockoffs.
Machine learning, with its impressive ability to recognize and identify patterns, is proving to be an effective weapon in the battle against counterfeits. That's why e-commerce juggernauts like Amazon feature ML in their fight against fakes. But the tech can cut both ways, zeroing in on copycats in some scenarios and creating them in others.
It all points to one simple fact: All technologies are fundamentally tools, with no soul or agenda of their own. As generative AI evolves further, it may be harder to remember that, because it can already act impressively human in so much of its speech and creativity. But the real value, or the detriment, of bots like ChatGPT comes down to the will of the people who make or wield them. At least for now.
Like Scarlett Johansson’s Samantha reaching self-awareness in “Her,” machine sentience could be on the menu in real life one day. At least Demis Hassabis, CEO of AI research lab DeepMind Technologies, doesn’t rule it out.
"Philosophers haven't really settled on a definition of consciousness yet, but if we mean self-awareness… I think there's a possibility that AI one day could be," he said in an interview with "60 Minutes." Google acquired DeepMind in 2014 as part of its years-long research and investment in AI. The tech giant once built a bot with a surprising capacity to convey, or rather imitate, emotion, one good enough to fool a Google engineer. And since ChatGPT exploded and accelerated the space, Google has been locked in an AI arms race with Microsoft, a major backer of OpenAI.
One of these giants could even be the first to reach the Singularity, the critical threshold when machine intelligence surpasses human intelligence. Johansson's Samantha eventually got there in the film. Some data scientists and experts say it can happen in real life too, and given the breakneck speed of development now, they believe it could come within the next seven years.
Translation: There’s a lot of work to do and not much time.