Source: Wired
Think US health data is automatically kept private? Think again.
Soon after Russian troops invaded Ukraine in February 2022, sensors in the Chernobyl Exclusion Zone reported radiation spikes. A researcher now believes he’s found evidence the data was manipulated.
Since 2018, a dedicated team within Microsoft has attacked machine learning systems to make them safer. But with the public release of new generative AI tools, the field is already evolving.
Cybercriminals are touting large language models that could help them with phishing or creating malware. But the AI chatbots could just be their own kind of scam.
Here’s one simple way to reduce your security risk while logging in.
Plus: A framework for encrypting social media, Russia-backed hacking through Microsoft Teams, and the Bitfinex Crypto Couple pleads guilty.
The US Congress is trying to tame the rapid rise of artificial intelligence. But senators’ failure to tackle privacy reform is making the task a nightmare.
Flaws in the Points.com platform, which is used to manage dozens of major travel rewards programs, exposed user data—and could have let an attacker snag some extra perks.
Researchers found a simple way to make ChatGPT, Bard, and other chatbots misbehave, proving that AI is hard to tame.
Generative AI won't just flood the internet with more lies—it may also create convincing disinformation that's targeted at groups or even individuals.