
Biden and Harris have talked with CEOs about the dangers of artificial intelligence.

Microsoft's Satya Nadella and Google's Sundar Pichai visit the White House to discuss concerns about Artificial Intelligence.

Image Credit: KGNS

On Thursday, President Joe Biden met with the chief executives of major artificial intelligence companies, including Microsoft and Alphabet's Google, and stressed that they must ensure their products are safe before releasing them.

Generative artificial intelligence has drawn significant attention this year, with applications like ChatGPT capturing the public's imagination and prompting a rush among companies to launch similar products they believe will transform the way we work.

Several million users have started to test these tools, which advocates claim can help with tasks like medical diagnoses, screenplay writing, legal brief creation, and software debugging. However, there is mounting anxiety that the technology could lead to privacy breaches, skew employment decisions, and enable scams and misinformation campaigns.

According to the White House, President Biden, who has used ChatGPT and experimented with it himself, instructed the officials to mitigate the current and potential risks AI poses to individuals, society, and national security.

The two-hour meeting, which began at 11:45 a.m. ET (1545 GMT) on Thursday, included Google's Sundar Pichai, Microsoft's Satya Nadella, OpenAI's Sam Altman, and Anthropic's Dario Amodei, along with Vice President Kamala Harris and administration officials including Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, National Economic Council Director Lael Brainard, and Secretary of Commerce Gina Raimondo.

The White House stated that the discussion was "open and constructive" and covered the necessity for companies to be more transparent with policymakers about their AI systems, the importance of assessing the safety of these products, and the requirement to safeguard them from malicious attacks.

In a statement, Vice President Harris mentioned that the technology has the potential to enhance people's lives, but it also carries safety, privacy, and civil rights concerns. She informed the chief executives that they have a "legal responsibility" to ensure the safety of their AI products and that the government is willing to explore new regulations and support new legislation on artificial intelligence.

When asked if the companies were in agreement about regulations, Altman told reporters after the meeting that "surprisingly, we are on the same page on what needs to happen."

The White House also disclosed that the National Science Foundation would invest $140 million to launch seven new AI research institutes, and the Office of Management and Budget would publish policy guidance on the application of AI by the federal government.

Prominent AI developers such as Anthropic, Google, Hugging Face, NVIDIA Corp, OpenAI, and Stability AI will engage in a public evaluation of their AI systems.

Not long after President Biden announced his campaign for reelection, the Republican National Committee created a video featuring an apocalyptic vision of the future under a second Biden term, entirely constructed using AI imagery.

As AI technology becomes more widespread, political advertisements created with AI are expected to become increasingly common. Compared with European governments, regulators in the United States have taken a less aggressive approach to the technology and have not established robust rules on deepfakes and misinformation.

"We do not view this as a competition," a senior administration official remarked, stating that the government is collaborating closely with the U.S.-EU Trade & Technology Council to address the issue.

In February, President Biden issued an executive order instructing federal agencies to eliminate bias in their AI use. The Biden administration has also issued an AI Bill of Rights and a risk management framework. The Federal Trade Commission and the Department of Justice's Civil Rights Division announced last week that they would use their legal powers to combat AI-related harm.

Tech giants have repeatedly vowed to combat propaganda around elections, fake news about COVID-19 vaccines, pornography and child exploitation, and hateful messaging targeting ethnic groups. But research and news events show they have fallen short.
