Friday, November 22

Four artificial intelligence experts have expressed concern after their work was cited in an open letter, co-signed by Elon Musk, calling for an immediate pause on AI research.

The letter, dated March 22 and with over 1,800 signatures as of Friday, demanded a six-month moratorium on the development of systems “more powerful” than Microsoft-backed (MSFT.O) OpenAI’s new GPT-4, which can hold human-like conversations, compose songs, and summarize lengthy documents.

Since the release of GPT-4’s predecessor, ChatGPT, last year, competitors have rushed to release similar products.

According to the open letter, AI systems with “human-competitive intelligence” pose grave risks to humanity, citing 12 pieces of research from experts such as university academics and current and former employees of OpenAI, Google (GOOGL.O), and its subsidiary DeepMind.

Since then, civil society groups in the United States and the European Union have urged lawmakers to limit OpenAI’s research. Requests for comment were not immediately returned by OpenAI.

Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation and is behind the letter, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

“On the Dangers of Stochastic Parrots,” a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google, was cited.

Mitchell, now the chief ethical scientist at Hugging Face, criticized the letter, telling Reuters that it was unclear what constituted “more powerful than GPT-4”.

“By taking a lot of dubious ideas for granted, the letter asserts a set of priorities and a narrative on AI that benefits FLI supporters,” she explained. “Ignoring current harms is a privilege that some of us do not have.”

On Twitter, her co-authors Timnit Gebru and Emily M. Bender also took aim at the letter, with the latter calling some of its claims “unhinged.”

FLI president Max Tegmark told Reuters that the campaign was not intended to undermine OpenAI’s competitive advantage.

“It’s quite amusing. I’ve heard it said that Elon Musk is attempting to slow down the competition,” he said, adding that Musk had no involvement in the letter’s creation. “This isn’t about a single company.”

RISKS NOW

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, took issue with the letter mentioning her work. She co-authored a research paper last year arguing that the widespread use of AI already posed serious risks.

Her research claimed that the current use of AI systems could influence decision-making in the face of climate change, nuclear war, and other existential threats.

“AI does not need to reach human-level intelligence to exacerbate those risks,” she told Reuters.

“There are non-existential risks that are extremely important but don’t get the same level of Hollywood attention.”

When asked about the criticism, FLI’s Tegmark stated that both the short-term and long-term risks of AI should be taken seriously.

“If we cite someone, it simply means we claim they agree with that sentence. It doesn’t mean they’re endorsing the letter or that we agree with everything they say,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, defended its contents, telling Reuters that it was prudent to consider black swan events – those that appear unlikely but have catastrophic consequences.

According to the open letter, generative AI tools could be used to flood the internet with “propaganda and untruth.”

Dori-Hacohen called Musk’s signature “pretty rich,” citing a reported increase in misinformation on Twitter following his acquisition of the platform, as documented by the civil society group Common Cause and others.

Twitter will soon introduce a new fee structure for access to its research data, which could stymie future research on the subject.

“That has had a direct impact on my lab’s work, as well as the work of others studying misinformation and disinformation,” Dori-Hacohen said. “We’re doing our work with one hand tied behind our back.”

Musk and Twitter did not respond immediately to requests for comment.
