
DHS Establishes AI Safety Board with Tech Titans and Experts

By Waqas

The Department of Homeland Security (DHS) has formed an AI Safety Board to ensure the secure use of AI in critical infrastructure.

The board also includes OpenAI CEO Sam Altman and Alphabet CEO Sundar Pichai. Some experts argue that this presents a conflict of interest, given that both companies are deeply engaged in developing proprietary AI models.

The Department of Homeland Security (DHS) announced the formation of the Artificial Intelligence Safety and Security Board (the Board) on Friday, April 26th, 2024. This move comes amidst growing concerns about security vulnerabilities and the potential risks of artificial intelligence (AI) systems, particularly within the nation’s critical infrastructure sectors.

The Board will serve in an advisory capacity, providing recommendations to the Secretary of Homeland Security, critical infrastructure stakeholders, and the public on the responsible development and deployment of AI technologies.

Its focus, according to the DHS announcement, will be on ensuring these technologies are deployed securely and effectively within sectors such as energy, transportation, finance, and communications.

This initiative aligns with President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed in October 2023. The order called for a national strategy to promote responsible AI development and mitigate potential risks, including those related to security and safety in critical infrastructure.

“The Board will bring together prominent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government,” the DHS stated in a press release.

The Board’s membership reportedly includes tech leaders such as OpenAI CEO Sam Altman and Alphabet CEO Sundar Pichai, alongside experts in AI and civil rights and representatives from critical infrastructure companies.

This announcement follows the release of DHS’s first “Artificial Intelligence Roadmap” in March 2024. The roadmap outlines the department’s plans to leverage AI responsibly for homeland security missions while safeguarding individual privacy, civil rights, and liberties. It also emphasizes national leadership in AI through collaborative partnerships.

“The establishment of the Artificial Intelligence Safety and Security Board by the DHS is a good step towards making sure the development and deployment of AI technology in critical infrastructure is done safely and securely,” stated Joseph Thacker, principal AI engineer and security researcher at AppOmni, a SaaS security pioneer.

“By bringing together a wide array of experts, the board will provide valuable insights and recommendations which will hopefully mitigate the risks associated with AI while still reaping the benefits,” he added.

However, Thacker pointed to a potential conflict of interest, noting that some board members are founders and top executives of companies heavily invested in developing closed-source AI models.

“One major concern I have is that there is a large conflict of interest by bringing in the companies that are developing the closed source AI models. They are incentivized to recommend against open source models for ‘safety reasons’ when it would massively help their business models and positively affect their bottom line.”

Nevertheless, the Board’s formation is a significant step towards ensuring the safe and secure integration of AI within the nation’s critical infrastructure. Its recommendations will be crucial for developing best practices and mitigating potential risks associated with AI deployment in these sensitive sectors.

