CVE-2019-17192: Signal: Incoming call can be connected without user interaction
** DISPUTED ** The WebRTC component in the Signal Private Messenger application through 4.47.7 for Android processes videoconferencing RTP packets before a callee chooses to answer a call, which might make it easier for remote attackers to cause a denial of service or possibly have unspecified other impact via malformed packets. NOTE: the vendor plans to continue this behavior for performance reasons unless a WebRTC design change occurs.
Déjà vu.
> you can send the callee the message the caller gets when the callee answers
This is the exact same type of bug that was in libssh: https://www.nccgroup.trust/uk/our-research/technical-advisor…
“possible to bypass authentication by presenting to the server an SSH2_MSG_USERAUTH_SUCCESS message in place of the SSH2_MSG_USERAUTH_REQUEST message which the server would expect to initiate authentication”
Also, Apple had a FaceTime bug of very similar nature:
https://www.theverge.com/2019/1/28/18201383/apple-facetime-b…
“you begin calling somebody via FaceTime Video from within the Phone app. Before that person picks up, you can swipe up to add your own phone number to the call. Once you’ve added yourself, FaceTime immediately seems to assume it’s an active conference call and begins sending the audio of the person you’re calling”
And what’s the common theme here? Naive switch statement logic instead of a real state machine with fully mapped transitions.
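To make the contrast concrete, here’s a minimal illustrative sketch of the “fully mapped transitions” idea. The enum and method names are invented for illustration; this is not code from Signal, libssh, or FaceTime:

    // Illustrative sketch only; CallState and onRemoteConnected are
    // invented names, not any real project's code.
    enum CallState { IDLE, RINGING, LOCAL_ACCEPTED, CONNECTED }

    final class CallStateMachine {
      private CallState state = CallState.IDLE;

      // Handle a remote "connected" message. Every (state, event) pair
      // is mapped explicitly; anything unlisted is a protocol violation.
      void onRemoteConnected() {
        switch (state) {
          case LOCAL_ACCEPTED:
            state = CallState.CONNECTED; // the only legal transition
            break;
          default:
            // A "connected" message before the local user accepted is
            // out of order: refuse to proceed instead of acting on it.
            throw new IllegalStateException(
                "unexpected 'connected' in state " + state);
        }
      }
    }

The point isn’t switch versus something fancier; it’s that every event handler is written against the full set of states it can fire in, with an explicit default that refuses to proceed.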
> State machine bug in Signal app
Exploitable in the Android Signal app in particular; not the iOS one.
It’s potentially exploitable on iOS, but a UI issue has so far prevented the exploit from being useful. That’s not to say it couldn’t be exploited in a useful manner, and the vulnerability is still present. Continuing to use an unpatched version on iOS would be high-risk.
The bug tracker requires JavaScript; here’s a transcript for anyone who doesn’t want to enable it:
There is a logic error in Signal that can cause an incoming call to be answered even if the callee does not pick it up.
In the Android client, there is a method handleCallConnected that causes the call to finish connecting. During normal use, it is called in two situations: on the callee device when the user selects ‘accept’, and on the caller device when it receives an incoming “connect” message indicating that the callee has accepted the call. Using a modified client, it is possible to send the “connect” message to a callee device while an incoming call is in progress but has not yet been accepted by the user. This causes the call to be answered even though the user has not interacted with the device. The connected call will only be an audio call, as the user needs to manually enable video in all calls.
The iOS client has a similar logic problem, but the call is not completed due to an error in the UI caused by the unexpected sequence of states. I would recommend improving the logic in both clients, as it is possible the UI problem doesn’t occur in all situations.
To reproduce this problem on the Android client, replace the method handleSetMuteAudio in the file WebRtcCallService.java with the following method.
private void handleSetMuteAudio(Intent intent) {
  Log.e(TAG, "SENDING MESSAGE");
  // Send the "connected" message to the remote device over the data
  // channel; the callee treats it as the signal to finish connecting.
  this.dataChannel.send(new DataChannel.Buffer(
      ByteBuffer.wrap(Data.newBuilder()
          .setConnected(Connected.newBuilder().setId(this.callId))
          .build().toByteArray()),
      false));
  // Drive the local (attacker) client into the connected state as well.
  intent.putExtra(EXTRA_CALL_ID, this.callId);
  intent.putExtra(EXTRA_REMOTE_ADDRESS, recipient.getAddress());
  handleCallConnected(intent);
}
Then build and install the client, and make a call. While the call is ringing, press the audio mute button to force the callee device to connect; audio from the callee device will then be audible.
This bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available (whichever is earlier), the bug report will become visible to the public.
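For what it’s worth, the obvious shape of a fix is to gate the incoming “connect” handling on the local call state before ever reaching handleCallConnected. A minimal sketch; the callState field, CallState constant, and handleReceivedConnected method name are assumptions for illustration, not Signal’s actual patch:

    // Hypothetical guard, not Signal's actual fix; identifier names
    // are assumptions.
    private void handleReceivedConnected(Intent intent) {
      // A remote "connected" message is only meaningful on the caller
      // side, while we are waiting for the callee to accept.
      if (callState != CallState.STATE_REMOTE_RINGING) {
        Log.w(TAG, "Ignoring 'connected' message in state: " + callState);
        return;
      }
      handleCallConnected(intent);
    }

With a check like that, a “connect” message sent at a callee that is still ringing would simply be dropped instead of answering the call.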
Thank you. Why absolutely no text can be displayed without JS enabled is beyond me. The page source is filled to the brim with trackers.
> The page source is filled to the brim with trackers.
I would expect nothing less from a site owned by Google.
On the contrary; on a site owned by Google (or any other tracking company), I expect there to be only their own tracker(s), unlike on most other commercial sites, where there are often dozens.
And the requests I see seem to indicate just that: it only tried to load Google Analytics.
It looks like this was shortly preceded by https://bugs.chromium.org/p/project-zero/issues/detail?id=19… which was an exploration of the fact that making a call induces RTP data processing on a recipient device during a call, prior to the recipient answering.
That ‘seems’ innocuous (and both Signal and WebRTC had reasonable arguments around expecting that behaviour) but this follow-up exploit looks more serious, and the researcher is correct to note how an expanded attack surface can lead to problems like this :/
It’s fixed and updated.
But this demonstrates how much of an advantage open-source software has; this transparency pays off with users, who are gradually paying more attention to the technology and security landscapes.
What I don’t understand is:
I’m a big fan of maintaining a forced 90-day disclosure period to pressure companies that do not address relevant security bugs.
But why, when a security issue is fixed, do whitehats tend to disclose immediately? Since it is fixed, it is no longer relevant, and disclosing now only increases the likelihood of a hacker abusing the bug. Instead, wouldn’t it be better if the targeted entity just disclosed that “a critical security vulnerability was found” and that “users should upgrade immediately”?
I don’t see the point of disclosing the specifics of a fixed security vulnerability soon after the fix. I understand that recognition is an important factor, but isn’t it more logical to delay the recognition step for, e.g., 6 months?
Because blackhats look through updates to determine what has been fixed by reversing the change, and try to capitalize on the window between an update being available and it being widely deployed. The more you raise awareness among people who might be susceptible to attack in that window, getting them to update sooner than automated systems would, the fewer victims there are to exploit.
I imagine there’s probably a short window after an update’s release, almost certainly in the single- or double-digit hours range, where you might be helping the blackhat who would reverse it do so quicker, but it’s probably hard to do more harm than good by releasing the details sooner rather than later.
Well, even if the bug report wasn’t disclosed, there’s a decent chance it could have been reverse-engineered out of the patch released a week ago by anyone with enough determination. It seems like the act of disclosing it soon after the patch is available allows information to propagate through the security community, which in theory helps accelerate the spread of the update.
People need to know when to update. Disclosure provides an incentive to do so, especially in corporate environments.
Hackers, on the other hand, don’t need to be informed; they can always look for juicy bugs the moment fixes are rolled out.
It thrills me to no end that there aren’t a bunch of snarky comments about “OH THIS PROVES SIGNAL IS A TOOL OF THE NSA!!”.
Open-source software is great because you can find bugs like this by inspecting the software. Anything that is related to personal communications should be open-source.
I think open-source software is great as well, but assuming these kinds of bugs are found because you can inspect the code is very wishful thinking that doesn’t always hold up.
This specific example required a Google department to find it. Who would have found it if Google were restrained by the NSA? Other notable examples include OpenSSL.
On top of that, here is a great talk about how easy it would be to infiltrate open source projects: https://www.youtube.com/watch?v=fwcl17Q0bpk
The fact that it’s open source is what enabled someone outside the project to find it in practice. While that’s also possible with closed-source software, if you think the bar is high with an open-source project, it is an order of magnitude higher with closed source.
Also, please don’t say "Google". A bunch of hackers (on Google’s payroll) found it, not Google. We can’t tell what would’ve happened in a counterfactual universe where Google was not financing Project Zero.
I’m shocked at how cynical your perspective is that you don’t grant credit here.
Like if I said, “The police didn’t save me from the hostage situation; some hero who happened to be working for the police saved me. In an alternate universe where this guy isn’t employed by the police, we don’t know that he wouldn’t have saved me anyway.”
Can’t you just say, “Thanks, police, you saved me”?
I just prefer congratulating the actual people that did this instead of the relatively arbitrary money supplier. You could say I’m equally shocked that credit is propagated as “Google” instead of the names of the researchers.
The reason I said Google is that I think if the NSA had pulled some strings at Google, this exploit would not have been published. As such, this was all in Google’s hands.
Do you know what your app’s library dependencies are doing? They talked about advertising and that got people up in a roar… but what if a dependency is doing reconnaissance? Mapping out your build and deploy infrastructure, because you fetch externally from 12 different locations along your pipeline. Then one day they target a specific company in a patch release and fix it later. In a project with a high release cadence, you would never know.
Who’s got your back on that?
What it does mean is that the community can fix it, and you don’t have to divert time from a product-approved sprint to fix security bugs.
It’s also ironic that the community fixing a security issue won’t help much in practice, since almost all users rely on app stores (the Play Store in this case, as this seems to be Android-specific) for updates, and won’t get any fixes unless they’re tech-savvy, or until the developer (Signal) pushes the update to the Play Store and Google approves it.
> It thrills me to no end that there aren’t a bunch of snarky comments about “OH THIS PROVES SIGNAL IS A TOOL OF THE NSA!!”.
There are no such comments here as of this post, or are you lost?