How Big Tech Companies Really Think About AI - New York Magazine

If you’re a big generative AI company, you’re currently in the process of getting sued, vigorously and by multiple parties at once.

Multiple groups of authors, one organized by the Authors Guild, have filed suit against OpenAI and Meta for training their models on copyrighted material. Alphabet has been accused in a class action of “mass theft” and of scraping “everything ever created and shared on the internet by hundreds of millions of Americans.” Visual artists have filed suit against Midjourney and Stability AI; Stability AI is also the defendant in a lawsuit filed by Getty Images, which claims the company’s model was trained, without permission, on millions of its photos. Anthropic is being sued by music publishers over song lyrics. Microsoft is getting sued by anonymous software developers over a coding tool on GitHub.

These lawsuits vary in scope and seriousness, and collectively they touch not just on copyright but, from various legal and ethical angles, on questions of privacy and consent. They’re sort of all over the place, but with good reason: This is new territory, and it’s not yet clear how existing laws and legal precedents apply to technologies that, to the plaintiffs and lots of other people, seem like they might be built on a foundation of theft, for the purpose of something like copying.

A lot of their claims will likely be dismissed, and some of these suits have already been pared down. Mauricio Uribe, a partner at law firm Knobbe Martens, describes this early round of suits filed against OpenAI, Google, and Microsoft as akin to “seeing the undercard of the prize fight” — the type of suit that will eventually help settle what are still emerging as the core legal questions about generative AI, like whether training models on millions of pieces of copyrighted material is protected, as companies like OpenAI claim, by fair use.

The courts can have fun with questions like that. In the meantime, we can have fun learning something else. Tech companies have responded to most of these lawsuits with filings of their own — mostly motions to dismiss — that contain not just legal arguments but full-throated defenses of generative AI as a project and as an industry. For the most part, Uribe says, these defenses are “completely extraneous to legal questions.” But they’re also kind of wild, revealing how these companies talk about AI when threatened or when there’s money on the line.

Take Google, whose lawyers set up a motion to dismiss like this:

Generative artificial intelligence (“AI”) holds perhaps unprecedented promise to advance the human condition. It is already beginning to revolutionize the way we use technology, serving as a companion that can help research, summarize, and synthesize information; brainstorm ideas; write original creative or factual text and software code; and create images, videos, and music. It will open doors to new insights and forms of expression, as well as better, personalized help and advice in areas such as education, health care, government services, and business productivity.

The plaintiff’s “383-paragraph anti-AI polemic,” Google’s lawyers say, “would take a sledgehammer not just to Google’s services but to the very idea of generative AI,” i.e., that thing that we’ve just been told will advance the human condition. “To realize the promise of this technology,” they say, “generative AI models must learn a great deal,” and like “a human mind” they require “a great deal of training” to do so, concluding that “using publicly available information to learn is not stealing.”

Google routinely makes pretty bold public claims about AI. In a September letter, CEO Sundar Pichai said it would be “the biggest technological shift we see in our lifetimes.” But the company tends to make these claims in the passive voice, hedged with caveats about safety, caution, and its “collaborative” openness to regulation, plus an obligatory nod to not getting things wrong and minimizing harm. In legal filings, by contrast, we get an unqualified argument: AI is important, maybe the most important thing in the world, and Google must be allowed to do what it’s doing to help AI realize its potential.

Stability AI, in its response to a lawsuit by visual artists, takes a similar approach and suggests that it is at the forefront of an industry that is “rapidly expanding the boundaries of human creativity and capability.” Who would want to get in the way of something like that? OpenAI’s lawyers open a motion to dismiss with a litany of other people’s words. “While the technology is still in its early days, some commentators believe that in the future, it may help to remedy ‘some of the world’s worst inequities,’ from unequal access to health care, to global educational disparities, and beyond,” the lawyers write. (The aforementioned “commentator” is Bill Gates.) “Others suggest that ChatGPT, in particular, ‘Heralds an Intellectual Revolution,’ representing an innovation whose significance may ultimately prove comparable to ‘the invention of printing.’” (These “others” are Henry Kissinger and Eric Schmidt.) Microsoft’s lawyers begin by mounting an argument that using AI is part of “GitHub and Microsoft’s ongoing dedication and commitment to the profound human project” of open-source software.

Meta’s lawyers are a bit less dramatic, but they’re also up to something interesting. In their motion to dismiss a copyright case, they describe LLaMA, the company’s large language model, in humanizing terms. “Just as a child learns language (words, grammar, syntax, sentence structure) by hearing everyday speech, bedtime stories, songs on the radio, and so on,” the lawyers write, “LLaMA ‘learned’ language by being exposed — through ‘training’ — to ‘massive amounts of text from various sources,’ such as code, webpages, and books, in 20 languages.” This is, again, not legally relevant to the lawsuit. But, in addition to being sort of funny — I’m not sure that “just as” really carries us from a child’s “bedtime stories” to “massive amounts of text” in “20 languages” — this frames the debate over AI in a specific and perhaps useful way, casting models as innocent, curious, independent beings that simply want to learn, and positioning their creators as mere helpers in an intuitive, inevitable process of apprehension — as parents who want the best for their young … entities? Not as software and advertising companies fighting over what they’re allowed to do in service of creating and monetizing new software. You wouldn’t sue a child for humming a song, would you? Would you?

None of this tells us much about the legal questions at hand, and judges will know to ignore it. These setups are followed mostly by aggressive legal argumentation calling into question every single premise of the plaintiffs’ claims, which is what the tech companies’ lawyers were hired to do: Of course training AI is fair use! Of course its outputs are transformative! Who are you to even take issue, here? Etc.

What these arguments do provide is a glimpse into the future of how AI companies will talk about themselves. Leaders at Google, Microsoft, Meta, and especially OpenAI have enjoyed, over the last couple of years, the benefit of speaking theoretically. Most people in the world don’t have much, if any, direct experience with state-of-the-art AI tools; those who do have encountered them mostly in the context of demonstration, or as small features in software they already use. Figures like Sam Altman and Sundar Pichai have been relatively free to pontificate about what AI is and what it can do; they’re quite comfortable conceding potential harms or talking about responsibility and stewardship in the present and future tenses. They go out of their way to sound not just optimistic but cautious, generous, and humble about the future of AI and their parts in it. They do this because it’s good marketing. But they also do this because it’s easy. They’re not answering for specific, urgent grievances, but rather posing and responding to questions about how they plan to prevent the apocalypse. They’re not responding to public outcry. They’re not dealing with criticism — or even regulations — that cause them much worry, yet.

But they will. And when they do, they’ll probably sound more like they already do in court: self-important, indignant, and shrill, whether they’re answering for alleged past harms (copyright violation, theft, indiscriminate scraping) or for concerns about specific future ones. They’ll have to defend what they’re doing, not just what they say they want to do, to regulators and eventually to a collective plaintiff, i.e., the public. And they might sound a little bit ridiculous.
