Don't Let Mistrust of Tech Companies Blind You to the Power of AI

It seems evident to me that almost 70 years after the first conference on artificial intelligence—where the nascent field’s leaders suggested the task would be completed within a decade—the field is now poised to make a transformational impact on our lives. We don’t need to reach artificial general intelligence, or AGI, whatever that means, for this to happen. I wrote as much in this column three weeks ago, citing evidence that after the astonishing leap of large language models that gave us ChatGPT, the advancements had not “plateaued” as some critics were charging. I also disagreed with the wave of skeptics claiming that what looked amazing in OpenAI’s GPT-4, Anthropic’s Claude 3, Meta’s Llama 3, and an armada of Microsoft Copilots was merely a linguistic variation of a card trick. The hype, I insisted, is justified.

It turns out that conclusion is anything but evident to lots of people. The pushback was immediate and furious. My rather neutral tweet about the column was viewed over 29 million times, and lots of those eyeballs were shooting death lasers at me. I received hundreds of comments, and though a good number expressed agreement, the vast majority disagreed, and not politely.

The attacks came from several camps. First were those disparaging the advance of AI itself, claiming I was a lousy journalist for blindly accepting the fake narrative of the tech companies pushing AI. “This is a shill, nothing more,” said one commenter. Another said, “You’re parroting the lies put forth by those scam artists.” After Google released its AI Overview search feature, which was prone to jaw-dropping errors, my responders seized on its mistakes as proof that there was no there there in generative AI. “Enjoy your pizza with extra glue,” someone advised me.

Others used the opportunity to decry the dangers of AI, though this stance affirms my observation that AI is a big deal. “So was the Atom Bomb,” said one tweeter. “How did that work out?” One contingent condemned LLMs because they trained on copyrighted material. This is a valid criticism but doesn’t diminish what these models can do.

My favorite response was from someone who cited the example I used of an LLM passing the bar exam with high marks. “Passing the bar exam is something DeepMind could do back when it did well at Jeapordy [sic],” said this detractor. The Jeopardy! computer was actually IBM’s Watson—DeepMind was just an embryonic startup then—and was carefully optimized to play that TV game. Since the bar exam isn’t conducted in a format that requires candidates to provide the questions to go with supplied answers, it’s ridiculous to think that Watson could have passed. The wrongness of that sentence is something that even the most hallucinogenic LLM would be hard-pressed to match! When I asked several LLMs whether the Watson computer could have passed the bar exam, all of them carefully and correctly explained why not. Chalk one up to the robots.

Putting aside the disrespectful tone of the responses—that’s just the way things roll on X—I find the reaction understandable but misguided. We’re in a period of latency, where users are only beginning to figure out how to exploit the extraordinary products coming out of the AI companies. Forget about the dumb answers that AI Overviews and other LLMs can produce (but remember that Google has no monopoly on hallucinations). The big tech companies have made a conscious decision to push less-than-fully-baked products into the marketplace, in part because it’s the best way to find out how to improve them and in part because the competition is so intense that none of the companies can afford to slow down.

Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

What fuels a lot of the rage against AI is mistrust of the companies building and promoting it. By coincidence I had a breakfast scheduled this week with Ali Farhadi, the CEO of the Allen Institute for AI, a nonprofit research effort. He’s 100 percent convinced that the hype is justified but also empathizes with those who don’t accept it—because, he says, the companies that are trying to dominate the field are viewed with suspicion by the public. “AI has been treated as this black box thing that no one knows about, and it’s so expensive only four companies can do it,” Farhadi says. The fact that AI developers are moving so quickly fuels the distrust even more. “We collectively don’t understand this, yet we’re deploying it,” he says. “I’m not against that, but we should expect these systems will behave in unpredictable ways, and people will react to that.” Farhadi, who is a proponent of open source AI, says that, at the very least, the big companies should publicly disclose what materials they use to train their models.

Compounding the issue is that many people involved in building AI also pledge their devotion to producing AGI. While many key researchers believe this will be a boon to humanity—it’s the founding principle of OpenAI—they have not made the case to the public. “People are frustrated with the notion that this AGI thing is going to come tomorrow or one year or in six months,” says Farhadi, who is not a fan of the concept. He says AGI is not a scientific term but a fuzzy notion that’s mucking up the adoption of AI. “In my lab when a student uses those three letters, it just delays their graduation by six months,” he says.

Personally I’m agnostic on the AGI issue—I don’t think we’re on the cusp of it but simply don’t know what will happen in the long run. When you talk to people on the front lines of AI, it turns out that they don’t know, either.

Some things do seem clear to me, and I think that these will eventually become apparent to all—even those pitching spitballs at me on X. AI will get more powerful. People will find ways to use it to make their jobs and personal lives easier. Also, many folks are going to lose their jobs, and entire companies will be disrupted. It will be small consolation that new jobs and firms might emerge from an AI boom, because some of the displaced people will still be stuck in unemployment lines or cashiering at Walmart. In the meantime, everyone in the AI world—including columnists like me—would do well to understand why people are so enraged, and respect their justifiable discontent.

Time Travel

Invoking the 1956 AI conference at Dartmouth brings to mind Marvin Minsky, an unforgettable human mind. Upon his death in 2016, I wondered whether even the most advanced AI could ever match the meat inside his head. It’s a scary thought.

There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable, and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening. But maybe Marvin could imagine it. His imagination respected no borders …

I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement. He’d been a professor at MIT since 1958, had invented stuff like the head-mounted display, and, besides AI, had done pioneering work in neural nets and robotics. But even had he done nothing, the blinding brilliance of his conversation, leavened by the humor of a lighthearted Borscht Belt comic, would have cemented a legacy. He questioned everything, and his observations were quirky, innovative, and made such perfect sense that you wondered why no one else had thought of them. After a couple of hours with him, your own vision of the world was altered. Only years later did I realize that his everyday Minsky-ness imparted a basic lesson: if you saw the world the way everybody else did, how smart could you really be?

Ask Me One Thing

Mark asks, “What does tech have to worry about in another Trump term?”

Thanks for asking, Mark. I’ll avoid making general remarks about what everyone has to worry about in another Trump term and concentrate on the question at hand. The climate for tech after a Trump victory is more complicated now that a number of super-rich Silicon Valley tech figures are supporting the former president—felony conviction notwithstanding. This week, tech billionaires Chamath Palihapitiya and David Sacks hosted a sold-out Trump fundraiser, which charged $300,000 to join the “host committee” and stay for dinner, and $50,000 to attend just the reception. Elon Musk is reportedly angling to be Trump’s tech adviser in a second term.

Clearly some tech people aren’t worried about Trump. Indeed, his return to the White House might actually be a short-term boon for some of the biggest companies. Trump would almost certainly reverse the Biden administration’s hard line toward regulation and antitrust prosecution. (Bye-bye, net neutrality. Hello, giant acquisitions by tech companies.)

But there would be plenty for tech to worry about, too. Trump has a well-documented history of rewarding his supporters and punishing those who don’t bend the knee. Remember how he tried to steer TikTok to Oracle, run by his booster Larry Ellison? Tech works best as a meritocracy—crony capitalism would be counterproductive for the industry.

The first Trump administration never got around to big infrastructure investments—would it now roll back Biden’s big grants in chip manufacturing? We might also see a drift in tech policy: The Biden White House has issued a detailed order on artificial intelligence that includes close scrutiny of the technology’s potential downsides and security risks. Would Trump unwind it all? (He hasn’t talked much about AI on the campaign trail.) Ultimately the smartest tech executives in big companies would figure out how to appease Trump. But long-term, a dwindling of public investment and research and the rise of a crony-based system might well weaken the US tech industry.

Oh, and expect Trump to mandate that all government communications be conducted on Truth Social. Just kidding. I think.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

It’s not even summer yet, and the highs in India are topping 120 degrees Fahrenheit. So maybe it’s not so bad that it’s 110 degrees in Phoenix.

Last but Not Least

AI Overviews aren’t always wrong. But here’s one case where a correct answer was suspiciously close to language in a WIRED story.

How one California town sent drones to answer 911 calls—at a possible cost of the privacy of those living in poorer neighborhoods.

Inside the biggest sting in FBI history.

If you were going to write a sci-fi novel, who would be the ideal collaborator? Yep, Keanu Reeves.
