Big Tech took your data to train AI. We're suing them for it

Americans didn't consent to having their personhoods mined in the name of "training" artificial intelligence machines. Big Tech needs to understand that.

On a crisp autumn day in 1992, President George H.W. Bush’s reelection campaign arrived at my hometown of Wixom, Michigan. Speaking from the rear of a train, President Bush deservedly extolled his achievement of cementing the end of the Cold War through his Strategic Arms Reduction Treaty (START), which eased people’s fear of nuclear war after an unnerving decades-long arms race. 

The nuclear narrative traces back to 1945 when J. Robert Oppenheimer’s Manhattan Project yielded the world’s first atomic bomb. It took more than a decade for the world to come together to create the International Atomic Energy Agency to address nuclear safety and security, but by that time, it was too late. President Truman had already detonated two atom bombs over Hiroshima and Nagasaki, killing hundreds of thousands of people, and fueling a nuclear arms race with the USSR that brought the world to the brink of annihilation.

A generation removed from President Bush’s visit, I’m now reminded of the challenges of nuclear arms as we uncover more about the most powerful and perilous technology humanity has ever created: artificial intelligence.

FILE (Jakub Porzycki/NurPhoto via Getty Images)

Leading AI experts recognize its astonishing potential, like curing diseases and tackling climate change, but the dangers are all too real. Even the leaders of companies driving the charge, like OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Microsoft’s former CEO Bill Gates, openly acknowledge them: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Over 1,000 technology leaders and experts signed an open letter calling for a six-month moratorium because AI poses "profound risks to society and humanity." 


Unless safeguards are implemented, leading experts believe AI poses serious civilizational risks like AI-driven autonomous weapons systems that distort the incentives for starting wars, and creation of an artificial general intelligence that works against humanity’s interests. But immediate dangers are beginning to manifest, like the loss of privacy and trust through widespread scams, lies, and deepfakes that sow civil unrest.

To build the world’s most transformative technology ever, companies like OpenAI and Google have "scraped," or stolen, an almost inconceivable amount of our personal information—essentially the entire internet—including our creative expressions, professional teachings, copyrighted works, photographs, and our conversations and comments, all of which we’ve communicated to unique communities for specific purposes. 

By consolidating all this data into one place to "train" the AI, they now know everything about us. They can create our digital clones, replicate our voice and likeness, predict and manipulate our next move, and misappropriate our skill sets. They are mining our personhoods to create artificial ones, bringing about our obsolescence, which is at the heart of the strike against the big Hollywood studios. SAG-AFTRA President Fran Drescher recently predicted, "We are all going to be in jeopardy of being replaced by machines!" In reality, screenwriters and actors, like all labor, are already facing widespread job loss now.

 


SAG-AFTRA President Fran Drescher, with National Executive Director & Chief Negotiator Duncan Crabtree-Ireland (R), joins Writers Guild members at a picket line outside Netflix in Los Angeles on July 14, 2023. Tens of thousands of Hollywood actors went on strike at midnight July 13, 2023, effectively bringing the giant movie and television business to a halt as they joined writers in the first industry-wide walkout in 63 years. (Photo by VALERIE MACON/AFP via Getty Images)

We are a progress-driven civilization that all too often asks whether we "can" and not whether we "should." Why should Big Tech have asked "should we" this time? Because it is illegal. State and federal laws governing personal property, privacy, copyrights, and consumer protection prohibit this type of theft and commercial misappropriation of our personal information. 


Big Tech may claim they legally sourced "public" information available on the internet. But "publicly available" has never meant "free to use for any purpose." None of us consented to the use of our personal information to train unpredictable, dangerous technologies that put real people’s safety, privacy, and livelihoods at risk. And the Sophie’s Choice we’ve been given of either using the internet but sacrificing all our rights, or simply not using the internet at all, is a false choice. We do not waive our privacy and property rights simply by using the internet to transact business and communicate with family and friends.

That’s why last month, we filed suit against OpenAI and Microsoft on behalf of all Americans, including children of all ages, whose personal information was stolen and misappropriated. Three days later, Google updated its privacy policy, trying to clarify that it may take our information from anywhere on the internet, whether sourced from its products or not. But that was too little, too late—and just plain wrong. So last week, we filed a similar lawsuit against Google. 

FILE (AP Photo/Marcio Jose Sanchez, File)

The lawsuits have two primary goals. First, we argue that until OpenAI, Microsoft, Google, and other tech companies ushering in the Age of AI can demonstrate the safety of their products in the form of effective privacy and property protections, a temporary pause on commercial use and development is necessary. As a society, the price we would pay for negative outcomes with AI, as we have paid with social media and nuclear weapons, is far too steep.

Second, Big Tech must understand that our information has value, and it belongs to us. OpenAI is already valued at $30 billion, and since Google and Microsoft debuted their AI products, their market caps have each increased by hundreds of billions of dollars. That value is almost entirely attributable to the taking of our information, without which their products would be worthless. 

People are entitled to payment for what was stolen from them, and these companies should be forced to pay the victims "data dividends," or a percentage of revenues, for as long as these AI products generate money from our stolen information. The alternative to data dividends would be algorithmic destruction of the theft-created AI products.


Sign for the technology brand Microsoft on 1st June 2022 in London, United Kingdom. Microsoft Corporation is an American multinational technology corporation which produces computer software, consumer electronics, personal computers, and related services. (Photo by Mike Kemp/In Pictures via Getty Images)

Big Tech concedes it does not fully understand the technology spurring the arms race it has ignited, yet it is rapidly entangling that technology into every aspect of our lives anyway. Big Tech is seizing upon the slow-moving nature of the executive and legislative branches, which were not designed to move as fast as this technology, and trying to take advantage of it before the "free data" window closes. That is why the courts play such a critical role in applying established law to these unprecedented circumstances.

I realize that all of this is a lot to process and may sound to some alarmist. But when I think about the first atom bomb in 1945, I’m reminded of the numerous lives lost. I think about the precious decade we allowed to pass before creating the IAEA. I reflect on the Cold War that nearly ended human civilization before our retreat from the nuclear arms race. And I think about the experts who tried to warn us. Maybe it would have been different if we’d heeded their warnings and taken steps to protect people from such a dangerous technology.

This is our opportunity to learn from our mistakes, to view the opportunities and risks clearly, to balance the desire for innovation with the need for individual liberty, privacy, and security. 

This is our chance to come together on AI.

Author’s note: To learn more, speak up, and take action, go to TogetherOn.AI.

Ryan J. Clarkson is the Managing Partner at Clarkson Law Firm, P.C. based in Malibu, Calif.

Authored by Ryan Clarkson via Fox News, July 18, 2023