
Regulate AI? Musk-Zuckerberg Stand Astride the Divide 


The future of AI – heck, the future of human civilization – is the hot tech industry topic this week, not only for the topic itself but for who’s talking about it, namely the world’s fifth and 80th richest people: Mark Zuckerberg and Elon Musk, respectively. The two tech titans stand astride the dividing line between those who view the AI of the future in a glowing light and those alarmed by it.

Yet for all the attention the Zuckerberg-Musk debate has received, much of the discussion and media coverage oversimplifies the issues at hand and at times misses the mark entirely.

Start with Musk, who has for years raised the apocalyptic alarm that AI poses “an existential threat to human civilization.” His views are back in the news this week because Zuckerberg took issue after Musk restated his long-held opinions before a meeting of U.S. governors.

The other reason Musk’s comments made news is that, in speaking to state governors, he called for governmental regulation of AI, which naturally drew the ire of many in the business community, along with AI and robotics innovators. Though Musk admitted he's not sure what an AI regulatory regime would look like, the notion of regulating a technology, or a group of companies, is anathema to most business owners.

It turns out that Musk isn't the only prominent voice interested in regulatory restrictions on two FANG companies heavily invested in AI - Google and, yes, Zuckerberg's own Facebook.


No less a figure than President Trump's chief strategist, Steve Bannon, according to press reports, has told people close to him that the two companies "have become effectively a necessity in contemporary life" and therefore should come under government oversight like other public utilities. Though Bannon's views apparently are not directly related to AI, the growing power of the FANG companies as a result of data accumulation - the fountainhead of AI - is coming under increasing scrutiny. A recent article in The Economist declared data the world's most valuable resource and argued that M&A activity among the FANG companies should be viewed through the prism of data acquisition, not simply market share.

Musk’s comments were made more newsworthy by the elevation of his stature with this week’s release of the relatively inexpensive Model 3 all-electric car from his Tesla car company, a step toward the vision of mass-produced electric cars he laid out when Tesla was founded, amid skepticism, in 2003.

According to National Public Radio’s coverage of Musk at the governors’ conference, a hush fell over the audience when Musk uttered his dystopian statement about the existential threat of AI – “you could have heard a pin drop,” one governor was heard to say.

Zuckerberg called Musk’s statements “irresponsible” and “negative” – opinions he expressed during a live chat session on Facebook while he turned meat on a backyard grill (doesn’t some machine do that for him?).

The shortcoming in this week’s debate between the two tech visionaries, and in the coverage of it, is a lack of specificity. As happens in political debates, the two adversaries were – or seem to have been – talking past each other about different forms of AI at different stages of development. There’s the AI that puts ads on our Facebook pages tailored to our unique profiles and interests. There also is, as we know, AI that automates certain defined tasks, and this form of AI and robotics will continue to develop, handling increasingly complex – yet defined – tasks with increasing competence.

At some future time, there may emerge another form of AI – “general” or “strong” AI – that broadens the range of tasks a single system can automate while also incorporating intuitive, aesthetic and emotional comprehension. Many tech thinkers doubt this will ever happen. But for others, such as Musk, it will definitely happen – and whether it will be a good thing or a threat is another point of divergence.

Though Musk’s statements about “robots going down the street killing people” drew the most media attention, his call for AI regulation seemed to come in the context of job loss caused by AI. In fact, a recent survey of AI experts from around the world found that, if AI development continues unchecked, they expect many job categories to be automated within a few decades and all human work to be automated within 120 years. Musk specifically cited the oncoming automation of transport, which comprises 12 percent of American jobs and which the surveyed AI experts expect to be machine-driven within 12 years.

“There will certainly be a lot of job disruption,” Musk told the governors. “Because what’s going to happen is that robots will be able to do everything better than us… I mean all of us. I’m not sure exactly what to do about this. It’s really about the scariest problem to me. So I really think we need government regulation here ensuring the public good. You’ve got companies that have to race to build AI because they’re going to be made uncompetitive. If your competitor is racing to build AI and you don’t, they will crush you. So they’re saying, ‘We need to build it too….’ Transport will be one of the first things to go fully autonomous. But when I say everything, the robots will do everything, bar nothing.”

Zuckerberg countered this with an optimistic view of AI.

“I think people who are naysayers and kind of try to drum up these doomsday scenarios, it’s really negative and in some ways I actually think it’s pretty irresponsible,” Zuckerberg said in a livestream now on YouTube. “If you’re arguing against AI then you’re arguing against safer cars that aren’t going to have accidents, and you’re arguing against being able to better diagnose people when they’re sick.”

While it’s true Musk said self-driving vehicles will eliminate jobs, he does not, so far as we know, object to advances in medical care brought about by AI.

Zuckerberg wasn’t alone in taking issue with Musk, or in talking past him. AI experts and tech journalists also rejected his alarmist vision. Writing in Slate magazine, Nick Thieme accused both Zuckerberg and Musk of being wrong about AI, or right about AI but for the wrong reasons, as the case may be. He grouped Musk in the extreme anti-AI camp and ignored his concerns about job loss – which Thieme agrees will happen.

“…for many, A.I. will deliver little more than unemployment checks,” Thieme wrote. “Reports have placed the coming unemployment due to A.I. as high as 50 percent, and while that is almost certainly alarmist, more reasonable estimates are no more comforting, reaching as high as 25 percent. It’s a different hell from the one Musk envisions.”

Actually, it isn’t. The difference between Musk’s and Thieme’s views lies in the degree of anticipated job automation.

While Musk accused Zuckerberg of failing to understand AI, robotics pioneer Rodney Brooks, founding director of MIT’s Computer Science and Artificial Intelligence Lab and the cofounder of both iRobot and Rethink Robotics, made the same accusation against Musk. In a story in TechCrunch, Brooks maintained that AI machines have limited capabilities – that it is extremely difficult to program a machine to truly master a task.

“There are quite a few people out there who’ve said that AI is an existential threat: Stephen Hawking, astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that: they don’t work in AI themselves,” Brooks said. “For those who do work in AI, we know how hard it is to get anything to actually work through product level.”

Brooks also criticized Musk's vague call for regulation of AI, and pointed out that Tesla itself is developing autonomous vehicles.

“If you’re going to have a regulation now, either it applies to something and changes something in the world, or it doesn’t apply to anything,” Brooks said. “If it doesn’t apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let’s talk about regulation on self-driving Teslas, because that’s a real issue.”

 

EnterpriseAI