Security
Headlines
HeadlinesLatestCVEs

Headline

AI Is Being Used to ‘Turbocharge’ Scams

Plus: Amazon’s Ring was ordered to delete algorithms, North Korea’s failed spy satellite, and a rogue drone “attack” isn’t what it seems.

Wired
#vulnerability#ios#mac#apple#google#microsoft#amazon#intel#backdoor#perl

Code hidden inside PC motherboards left millions of machines vulnerable to malicious updates, researchers revealed this week. Staff at security firm Eclypsium found code within hundreds of models of motherboards created by Taiwanese manufacturer Gigabyte that allowed an updater program to download and run another piece of software. While the system was intended to keep the motherboard updated, the researchers found that the mechanism was implemented insecurely, potentially allowing attackers to hijack the backdoor and install malware.

Elsewhere, Moscow-based cybersecurity firm Kaspersky revealed that its staff had been targeted by newly discovered zero-click malware impacting iPhones. Victims were sent a malicious message, including an attachment, on Apple’s iMessage. The attack automatically started exploiting multiple vulnerabilities to give the attackers access to devices, before the message deleted itself. Kaspersky says it believes the attack impacted more people than just its own staff. On the same day as Kaspersky revealed the iOS attack, Russia’s Federal Security Service, also known as the FSB, claimed thousands of Russians had been targeted by new iOS malware and accused the US National Security Agency (NSA) of conducting the attack. The Russian intelligence agency also claimed Apple had helped the NSA. The FSB did not publish technical details to support its claims, and Apple said it has never inserted a backdoor into its devices.

If that’s not enough encouragement to keep your devices updated, we’ve rounded up all the security patches issued in May. Apple, Google, and Microsoft all released important patches last month—go and make sure you’re up to date.

And there’s more. Each week we round up the security stories we didn’t cover in depth ourselves. Click on the headlines to read the full stories. And stay safe out there.

Lina Khan, the chair of the US Federal Trade Commission, warned this week that the agency is seeing criminals using artificial intelligence tools to “turbocharge” fraud and scams. The comments, which were made in New York and first reported by Bloomberg, cited examples of voice-cloning technology where AI was being used to trick people into thinking they were hearing a family member’s voice.

Recent machine-learning advances have made it possible for people’s voices to be imitated with only a few short clips of training data—although experts say AI-generated voice clips can vary widely in quality. In recent months, however, there has been a reported rise in the number of scam attempts apparently involving generated audio clips. Khan said that officials and lawmakers “need to be vigilant early” and that while new laws governing AI are being considered, existing laws still apply to many cases.

In a rare admission of failure, North Korean leaders said that the hermit nation’s attempt to put a spy satellite into orbit didn’t go as planned this week. They also said the country would attempt another launch in the future. On May 31, the Chollima-1 rocket, which was carrying the satellite, launched successfully, but its second stage failed to operate, causing the rocket to plunge into the sea. The launch triggered an emergency evacuation alert in South Korea, but this was later retracted by officials.

The satellite would have been North Korea’s first official spy satellite, which experts say would give it the ability to monitor the Korean Peninsula. The country has previously launched satellites, but experts believe they have not sent images back to North Korea. The failed launch comes at a time of high tensions on the peninsula, as North Korea continues to try to develop high-tech weapons and rockets. In response to the launch, South Korea announced new sanctions against the Kimsuky hacking group, which is linked to North Korea and is said to have stolen secret information linked to space development.

In recent years, Amazon has come under scrutiny for lax controls on people’s data. This week the US Federal Trade Commission, with the support of the Department of Justice, hit the tech giant with two settlements for a litany of failings concerning children’s data and its Ring smart home cameras.

In one instance, officials say, a former Ring employee spied on female customers in 2017—Amazon purchased Ring in 2018—viewing videos of them in their bedrooms and bathrooms. The FTC says Ring had given staff “dangerously overbroad access” to videos and had a “lax attitude toward privacy and security.” In a separate statement, the FTC said Amazon kept recordings of kids using its voice assistant Alexa and did not delete data when parents requested it.

The FTC ordered Amazon to pay around $30 million in response to the two settlements and introduce some new privacy measures. Perhaps more consequentially, the FTC said that Amazon should delete or destroy Ring recordings from before March 2018 as well as any “models or algorithms” that were developed from the data that was improperly collected. The order has to be approved by a judge before it is implemented. Amazon has said it disagrees with the FTC, and it denies “violating the law,” but it added that the “settlements put these matters behind us.”

As companies around the world race to build generative AI systems into their products, the cybersecurity industry is getting in on the action. This week OpenAI, the creator of text- and image-generating systems ChatGPT and Dall-E, opened a new program to work out how AI can best be used by cybersecurity professionals. The project is offering grants to those developing new systems.

OpenAI has proposed a number of potential projects, ranging from using machine learning to detect social engineering efforts and producing threat intelligence to inspecting source code for vulnerabilities and developing honeypots to trap hackers. While recent AI developments have been faster than many experts predicted, AI has been used in the cybersecurity industry for several years—although many claims don’t necessarily live up to the hype.

The US Air Force is moving quickly on testing artificial intelligence in flying machines—in January, it tested a tactical aircraft being flown by AI. However, this week, a new claim started circulating: that during a simulated test, a drone controlled by AI started to “attack” and “killed” a human operator overseeing it, because they were stopping it from accomplishing its objectives.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Colnel Tucker Hamilton, according to a summary of an event at the Royal Aeronautical Society, in London. Hamilton continued to say that when the system was trained to not kill the operator, it started to target the communications tower the operator was using to communicate with the drone, stopping its messages from being sent.

However, the US Air Force says the simulation never took place. Spokesperson Ann Stefanek said the comments were “taken out of context and were meant to be anecdotal.” Hamilton has also clarified that he “misspoke” and he was talking about a “thought experiment.”

Despite this, the described scenario highlights the unintended ways that automated systems could bend rules imposed on them to achieve the goals they have been set to achieve. Called specification gaming by researchers, other instances have seen a simulated version of Tetris pause the game to avoid losing, and an AI game character killed itself on level one to avoid dying on the next level.

Wired: Latest News

More Spyware, Fewer Rules: What Trump’s Return Means for US Cybersecurity