
Unfounded Fears: AI Extinction-Level Threats & the AI Arms Race

There is an extreme lack of evidence of AI-related danger, and proposing or implementing limits on technological advancement isn’t the answer.

DARKReading

Aaron Rosenmund, Senior Director of Content Strategy and Curriculum, Pluralsight

June 28, 2024

4 Min Read


COMMENTARY

Skynet becoming self-aware was the stuff of fiction. Despite this, reports that stoke apocalyptic fears about artificial intelligence (AI) appear to be on the uptick. Published insights on AI need to be reported responsibly; it is a disservice to us all to showcase survey findings in a way that invokes the doomsday endgame brought on by the non-human antagonists of the Terminator films.

Earlier this year, a governmentwide action plan and report were released based on an analysis suggesting that AI may bring with it catastrophic risks, stating that AI poses “an extinction-level threat to the human species.” The report also finds that the national security threat AI poses will likely grow if tech companies fail to self-regulate and/or work with the government to rein in the power of AI.

Considering these findings, it’s important to note that survey results are not grounded in scientific analysis, and published reports are not always backed by a thorough comprehension of AI’s underlying technology. Reports focusing on AI that lack tangible evidence to back up AI-related concerns can be viewed as inflammatory rather than informative. Furthermore, such reporting can be particularly damaging when it’s provided to governmental organizations that are responsible for AI regulation.

Beyond conjecture, there is an extreme lack of evidence of AI-related danger, and proposing or implementing limits on technological advancement is not the answer.

In that report, comments like “80% of people feel AI could be dangerous if unregulated” prey upon our cultural tendency to fear what we don’t understand, stoking the flames of concern. This kind of doom-speak may gain attention and garner headlines, but in the absence of supporting evidence, it serves no positive goal.

Currently, there is no evidence that future AI models will develop autonomous capabilities, let alone pair them with catastrophic intent aimed at humans. While it’s no secret that AI will continue to be a highly disruptive technology, disruptive does not necessarily mean destructive to humanity. Furthermore, AI as an assistive tool for developing advanced biological, chemical, and/or cyber weaponry is not a problem that new US policies or laws will solve. If anything, such steps are more likely to guarantee that we end up on the losing side of an AI arms race.

The AI That Generates a Threat Is the Same AI That Defends Against It

Other countries or independent entities that intend harm can develop destructive AI-based capabilities outside the reach of the US. If forces beyond our borders plan to use AI against us, it’s important to remember that the AI that can, for example, create bioweapons is the same AI that provides our best defense against that threat. Additionally, the development of treatments for diseases, antidotes to toxins, and the advancement of our own cyber-industry capabilities are equal outcomes of advancing AI technology and will be prerequisites to combating malicious use of AI tools in the future.

Business leaders and organizations need to proactively monitor the implementation of regulations related to both the development and use of AI. It’s also critical to focus on the ethical application of AI across the industries where it’s prevalent, not just on how models are advancing. For example, in the EU, there are restrictions on using AI tools for home underwriting to address concerns that inherent biases in datasets could allow for inequitable decision-making. In other fields, “human in the loop” requirements create safeguards around how AI analysis and decision-making are applied to job recruitment and hiring.
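
To make the “human in the loop” pattern concrete, here is a minimal sketch in Python of how an AI recommendation can be gated behind mandatory human sign-off before any hiring action fires. The names (Recommendation, ReviewQueue, and the illustrative score) are hypothetical, not drawn from any specific regulation or product:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    """An AI-generated hiring recommendation (hypothetical structure)."""
    candidate_id: str
    model_score: float               # advisory score from the model, 0.0-1.0
    decision: Optional[str] = None   # set only by a human reviewer

@dataclass
class ReviewQueue:
    """Human-in-the-loop gate: the model may rank candidates,
    but only a named human reviewer may issue a decision."""
    pending: list = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        # The AI output is advisory; nothing downstream fires yet.
        self.pending.append(rec)

    def approve(self, rec: Recommendation, reviewer: str, decision: str) -> Recommendation:
        # A human decision is recorded before the record leaves the queue.
        rec.decision = f"{decision} (reviewed by {reviewer})"
        self.pending.remove(rec)
        return rec

# Usage: the model scores, a person decides.
queue = ReviewQueue()
rec = Recommendation(candidate_id="c-123", model_score=0.87)
queue.submit(rec)
final = queue.approve(rec, reviewer="hiring-manager", decision="advance to interview")
print(final.decision)  # advance to interview (reviewed by hiring-manager)
```

The design point is that the model’s output is advisory: nothing downstream happens until a named reviewer records a decision, which is the safeguard such requirements aim to mandate.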

No Way to Predict What Level of Computing Generates Unsafe AI

As reported by Time, the aforementioned report, Gladstone’s study, recommended that Congress make it illegal “to train AI models using more than a certain level of computing power” and that the threshold “should be set by a federal AI agency.” As an example, the report suggested that the agency could set the threshold “just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini.”

However, while it is clear that the US needs to create a road map for how AI should be regulated, there is no way to predict what level of computing would be required to generate potentially unsafe AI models. Setting any computing limit to put a threshold on AI advancement would be both arbitrary and based on limited knowledge of the tech industry.
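
For context on what such a threshold would actually measure, a common rule of thumb estimates total training compute as roughly six floating-point operations per model parameter per training token. The sketch below uses publicly reported GPT-3 figures (~175 billion parameters, ~300 billion tokens) and a purely hypothetical regulatory ceiling; neither OpenAI nor Google has disclosed the training compute of GPT-4 or Gemini:

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute:
    roughly 6 floating-point operations per parameter per token."""
    return 6.0 * n_params * n_tokens

# Publicly reported GPT-3 figures: ~175B parameters, ~300B training tokens.
flops = estimate_training_flops(175e9, 300e9)
print(f"~{flops:.2e} FLOPs")  # on the order of 3e23

# A regulator could cap training runs at some ceiling, but the number is
# arbitrary: the same FLOP budget buys very different capabilities depending
# on data quality, architecture, and algorithmic efficiency.
HYPOTHETICAL_CAP = 1e26  # illustrative only; not an actual regulation
print("exceeds hypothetical cap:", flops > HYPOTHETICAL_CAP)
```

The same FLOP budget can yield very different capabilities depending on data quality, architecture, and algorithmic efficiency, which is why a fixed compute ceiling is a coarse and quickly outdated proxy for model risk.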

More importantly, a drastic step to stifle change in the absence of evidence to support such a step is harmful. Industries shift and transform over time, and as AI continues to evolve, we are simply witnessing this transformation in real time. That being the case, for now, Terminator’s Sarah and John Connor can stand down.

About the Author(s)

Aaron Rosenmund, Senior Director of Content Strategy and Curriculum, Pluralsight

Aaron Rosenmund is a cybersecurity operations expert with a background in federal and business defensive and offensive cyber operations and system automation. Leveraging his administration and automation experience, Aaron actively contributes to multiple open source and closed source security operations platform projects and continues to create tools and content to benefit the community. As an educator and cybersecurity researcher at Pluralsight, he is focused on advancing the cybersecurity workforce and technologies for business and national enterprises alike. In support of the Air National Guard, he contributes those skills part time to various initiatives to defend the nation in cyberspace.
