{"id":22982,"date":"2023-05-22T11:54:16","date_gmt":"2023-05-22T11:54:16","guid":{"rendered":"http:\/\/107240711"},"modified":"2023-05-22T11:54:16","modified_gmt":"2023-05-22T11:54:16","slug":"parrots-paper-clips-and-safety-vs-ethics-why-the-artificial-intelligence-debate-sounds-like-a-foreign-language","status":"publish","type":"post","link":"https:\/\/wp.worldtechguide.net\/parrots-paper-clips-and-safety-vs-ethics-why-the-artificial-intelligence-debate-sounds-like-a-foreign-language\/","title":{"rendered":"Parrots, paper clips and safety vs. ethics: Why the artificial intelligence debate sounds like a foreign language"},"content":{"rendered":"


Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.

Eric Lee | Bloomberg | Getty Images


This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared with smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety, a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity, an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict the use of AI in areas subject to anti-discrimination law, such as housing and employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM