White House unveils Blueprint for an AI Bill of Rights

The blueprint aims to make AI less harmful to Americans by holding its creators accountable.

On Tuesday, the Biden-Harris Administration’s Office of Science and Technology Policy (OSTP) unveiled a new Blueprint for an AI Bill of Rights, which lists five principles to guide the design, use, and development of intelligence-based automated systems "to protect the American public in the age of artificial intelligence".

These principles focus on things that matter to Internet users: Protection from risky systems, protection from discrimination, data privacy, notice and explanation of AI use, and the option to opt out.

“Automated technologies are increasingly used to make everyday decisions affecting people’s rights, opportunities, and access in everything from hiring and housing, to healthcare, education, and financial services,” the White House said in a press release. It continued:

While these technologies can drive great innovations, like enabling early cancer detection or helping farmers grow food more efficiently, studies have shown how AI can display opportunities unequally or embed bias and discrimination in decision-making processes. As a result, automated systems can replicate or deepen inequalities already present in society against ordinary people, underscoring the need for greater transparency, accountability, and privacy.

While the blueprint is for big tech companies, Dr. Alondra Nelson, deputy director for science and society in the OSTP, made clear it’s also for every American who interacts with AI or whose life is affected by "unaccountable algorithms".

In mid-September, the White House conducted a listening session on tech platform accountability wherein experts identified six concerns, each paired with a core principle for reform.

AI prejudice

Perhaps the most significant source of AI harm is algorithmic discrimination. The discrimination stems from the fact that AI systems are trained on data sets rather than explicitly programmed: gaps or biases in the training data shape the way the resulting model evaluates data in the real world.

As a result, the human prejudices some hoped AI would eliminate are sometimes baked right in. There are AIs that can’t understand certain accents, others that have prevented African Americans from getting kidney transplants, and some that simply don’t think women can be computer programmers.

Although failings in AI are generally unintentional, their effects on marginalized populations can be real and severe.
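
To make the mechanism concrete, here is a deliberately simplified Python sketch. The data, group labels, and hire records are all invented for illustration; real systems are far more complex. The point is that a "model" which learns nothing more than historical hire rates will score equally qualified candidates differently, simply because the records it learned from were unbalanced.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# The imbalance below is deliberate: group_b candidates were hired
# less often than equally qualified group_a candidates.
training_data = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

# "Training": estimate the historical hire rate per group for
# qualified candidates. Nothing here is an explicit rule about groups.
hires = defaultdict(int)
totals = defaultdict(int)
for group, qualified, hired in training_data:
    if qualified:
        totals[group] += 1
        hires[group] += int(hired)

def predicted_hire_probability(group: str) -> float:
    """The learned "model": just the historical hire rate for that group."""
    return hires[group] / totals[group] if totals[group] else 0.0

# Equally qualified candidates get different scores because the
# historical data was skewed, not because anyone wrote a biased rule.
print(predicted_hire_probability("group_a"))  # ~0.67
print(predicted_hire_probability("group_b"))  # ~0.33
```

Real models don’t memorize rates this crudely, but the principle carries over: if the training data encodes a disparity, the model will reproduce it unless someone actively measures and corrects for it.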

Just the first step

While many organizations, such as the Center for Democracy and Technology (CDT), the American Civil Liberties Union (ACLU), and Access Now, have welcomed the government’s Blueprint for an AI Bill of Rights, some say the work shouldn’t end here.

“This is clearly a starting point. That doesn’t end the discussion over how the US implements human-centric and trustworthy AI,” Marc Rotenberg, head of the Center for AI and Digital Policy (CAIDP), told Technology Review. “But it is a very good starting point to move the US to a place where it can carry forward on that commitment.”

He also wants to see the US implement "checks and balances to AI uses that have the most potential to cause harm to humans", such as those in the EU’s upcoming AI Act.

“We’d like to see some clear prohibitions on AI deployments that have been most controversial, which include, for example, the use of facial recognition for mass surveillance,” Rotenberg said.

Russell Wald, Director of Policy for the Stanford Institute for Human-Centered AI, thinks the blueprint lacks details and enforcement mechanisms. “It is disheartening to see the lack of coherent federal policy to tackle desperately needed challenges posed by AI, such as federally coordinated monitoring, auditing, and reviewing actions to mitigate the risks and harm brought by deployed or open-source foundation models,” he said.

Sneha Revanur, founder and president of Encode Justice, an organization focused on youth and AI, also sees that flaw but has high hopes: “Though it is limited in its ability to address the harms of the private sector, the AI Bill of Rights can live up to its promise if it is enforced meaningfully, and we hope that regulation with real teeth will follow suit,” she said.
