Cyberweekly #210 - MFA is "simple"

Published on Sunday, October 02, 2022

For years, the security community has said that it's simple to roll out MFA.

The Uber hack shows that this is simply not true. Ignoring the swirling rumours around the hack, what is pretty clear is that Uber had deployed MFA across its staff, and not SMS MFA but, from the sounds of it, proper Microsoft Authenticator or Okta-style MFA.

But MFA that can send you hundreds of notifications will eventually tire or exhaust the operator, to the point that they'll either make a mistake, or press "accept" out of frustration simply to make it stop.

I'm sure there are lots of talking heads out there who are saying any number of quotable sayings, probably starting with "If only Uber had simply...".

The reality is that this stuff isn't simple, it's not easy, and in most cases it's a deliberate cost that brings no competitive advantage.

If you want to launch a new product or start a new startup, then you need to get the business up and running at the lowest cost possible. However, as we know, security has to be threaded through everything the business does. If you create service accounts with no second factor and hardcoded passwords in order to automate your infrastructure, that will let you deliver faster, be easier to maintain, and probably be more useful than doing it "the proper way" with secret managers and authenticated machine accounts. However, three years down the line, when your organisation is a critical enterprise SaaS service, it's really hard to take all of those workarounds out.

And of course, if you didn't have a SOC or even a security team in your first few months, then you won't have a record of these workarounds, and when a hacker finds that account, they'll be able to bypass all of your controls to get into your system. As an attacker, when you find one of these things, you tend to find a whole trail of them, which makes it easier to pivot or move around the infrastructure.

We have to stop declaring that these security fixes are simple, and start working out how to actually make them simple. Only when the easiest possible way is also the secure way will people do it the secure way when under pressure to deliver.

Upcoming events

I'll be speaking at the International Security Conference on 28th September in a talk titled "Looking to the Future: Emerging Security Capabilities and a Journey to Access Control".

I'll be keynoting at Lean Agile Scotland on the 6th October, where I'll be giving a talk titled "Security at the speed of Agile", mostly about how agile makes us more secure, and just what has changed in security over the last 5 years or so. tl;dr: I probably won't be saying "this is simple"!

    Security update | Uber Newsroom

    https://www.uber.com/newsroom/security-update

    An Uber EXT contractor had their account compromised by an attacker. It is likely that the attacker purchased the contractor’s Uber corporate password on the dark web, after the contractor’s personal device had been infected with malware, exposing those credentials. The attacker then repeatedly tried to log in to the contractor’s Uber account. Each time, the contractor received a two-factor login approval request, which initially blocked access. Eventually, however, the contractor accepted one, and the attacker successfully logged in.

    From there, the attacker accessed several other employee accounts which ultimately gave the attacker elevated permissions to a number of tools, including G-Suite and Slack. The attacker then posted a message to a company-wide Slack channel, which many of you saw, and reconfigured Uber’s OpenDNS to display a graphic image to employees on some internal sites.

    This one still seems to be running, so there’s likely more news coming over time. But from the sounds of it, the hacker relied on MFA fatigue, which is to say that when a normal user sees notifications over and over, they are far more likely to accidentally accept one, or be tricked into doing so by the attacker (as was claimed on Twitter).

    What you can learn from this is that anything with a simple accept function, such as pressing any key on the phone or simply tapping accept, is not as secure as something that verifies that the user was responding to the right MFA prompt. Microsoft enables you to turn on number matching in the Authenticator app, which requires the user to type the number shown on the sign-in screen before the approval goes through.
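    To see why that kind of verification defeats MFA fatigue, here is a toy Python sketch of number-matching-style approval. The MfaPrompt class and its flow are hypothetical, illustrative assumptions rather than Microsoft’s implementation; the point is only that approving requires information that is visible solely on the genuine sign-in screen.

```python
import secrets

# Toy sketch of number-matching MFA (hypothetical, for illustration only).
# The sign-in screen displays a random number; the approval is only valid if
# the user types that same number into their authenticator app, proving they
# are responding to *this* prompt rather than blindly tapping "accept".

class MfaPrompt:
    def __init__(self) -> None:
        # Two-digit challenge shown on the sign-in screen.
        self.challenge = secrets.randbelow(90) + 10
        self.approved = False

    def approve(self, number_entered_in_app: int) -> bool:
        # A bare "accept" button skips this check entirely; requiring the number
        # means a victim being spammed with prompts never sees the number shown
        # on the attacker's sign-in screen, so they cannot approve it by mistake.
        if number_entered_in_app == self.challenge:
            self.approved = True
        return self.approved


if __name__ == "__main__":
    prompt = MfaPrompt()
    print(f"Sign-in screen shows: {prompt.challenge}")
    # The legitimate user copies the number they can see into the app.
    print("Approved:", prompt.approve(prompt.challenge))
```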

    The Attackers Guide to Azure AD Conditional Access – Daniel Chronlund Cloud Tech Blog

    https://danielchronlund.com/2022/01/07/the-attackers-guide-to-azure-ad-conditional-access/

    Some Common Weak Spots

    Of course, the best way of attacking Conditional Access is to never trigger it at all, to avoid it. There are some common weak spots in almost every organisation that can be abused.

    Exclusion Group Abuse

    It’s best practice to always exclude one security group from all Conditional Access policies. This group should contain two break glass accounts for you to use during an emergency. This group should not include any other accounts, but in reality it almost always does. Admins get sloppy and add themselves, service accounts, or complaining users to this group. By stealing the password from one of these accounts, we can skip Conditional Access altogether. Game over!

    Missing Block Policies

    Admins tend to create policies to enforce MFA for certain, or all, applications in a tenant. But they often include conditions in these policies, like allowed platforms or allowed locations. What they don’t understand is that if we don’t block the unwanted scenarios with a corresponding block policy, an attacker can simply spoof the location or the platform to bypass the policy and sign right in. This is very much what the rest of this guide will showcase.

    This is a good guide to one of the most common controls that organisations put in place to control access to their corporate resources. It’s well worth noting that Conditional Access doesn’t allow access, it denies access, so if you don’t set a policy, then everything is allowed.

    Secondly, take note of that first example weak spot. Exclusion groups are often used by admins fed up with obeying the security policies, but they really open your system up to compromise. Once someone is in, they can easily find out who is in that exclusion group, so do yourself a favour and audit those exclusion groups on a regular basis.
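    That audit is easy to script. Below is a rough Python sketch against the Microsoft Graph v1.0 API, assuming you already hold an access token with the Policy.Read.All and GroupMember.Read.All permissions; paging and error handling are left out, so treat it as a starting point rather than a finished tool.

```python
import requests

# Rough sketch: list the members of every group excluded from a Conditional
# Access policy, so anything beyond the break glass accounts stands out.
GRAPH = "https://graph.microsoft.com/v1.0"


def audit_exclusion_groups(token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    policies = requests.get(
        f"{GRAPH}/identity/conditionalAccess/policies", headers=headers
    ).json().get("value", [])

    for policy in policies:
        excluded = policy.get("conditions", {}).get("users", {}).get("excludeGroups", [])
        for group_id in excluded:
            members = requests.get(
                f"{GRAPH}/groups/{group_id}/members", headers=headers
            ).json().get("value", [])
            print(f"Policy '{policy.get('displayName')}' excludes group {group_id}:")
            for member in members:
                # Service principals won't have a UPN, so fall back to displayName.
                print("  -", member.get("userPrincipalName") or member.get("displayName"))
```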

    Privacy and Threat Modelling — JMP Blog

    https://blog.jmp.chat/b/2022-privacy-threat-modelling

    You cannot protect your privacy unless you know what you are protecting and what you are defending against. Otherwise you may take extra steps to secure something not worth protecting, omit something you were unaware needed protecting, or even protect something to the detriment of something you would have cared more about. Privacy is not a slider from zero to infinity, you cannot be simply “more” or “less” private in some general abstract way.

    For example, someone may be a part of a group of insurgents in a small country.  They wish the contents of their communication to be kept a secret from the current government if any one of them is found out, so they choose to use an end-to-end encrypted messaging app.  They have prevented their mobile carrier and government from logging their messages!  They also secure their devices with biometrics so they cannot be stolen.  However, due to the unpopularity of this app in their country, when asked the carrier can immediately identify the current location of anyone using it.  When any of these people are brought in for questioning, the investigator forces the biometric (face or fingerprint) onto the device from the person in custody, unlocks it, gets access to all the decrypted messages, and let’s just say the insurgency is over.

    So did the insurgents make “un-private” choices?  No!  For other people with different vulnerabilities, their choices may have been ideal.  But when their identity and current location is more at risk than the content of their messages, sending messages less-encrypted over a more-popular app or protocol (which could have all contents logged for all users, but very likely does not), and deleting them regularly from the local device in case they are caught, would have been more effective.

    I slightly disagree with the statement in here that a threat model is “a list of possible vulnerabilities, often with attached priorities”. A threat model has to include threat actors, their intent to compromise your data/privacy/asset, and their capability to use a vector of attack.

    But this summary and application of threat modelling to privacy is a good introduction otherwise, and as this section says, know what you are protecting (your assets/privacy/whatever), and what you are defending against (the actors).

    The example given here is a really good illustration of where we so often see security advice go wrong. Telling people to use a thing that is more secure than they need and mismatched against their threat model can result in greater risk to them, even though they are using some magic “super security”.

    There is no “software supply chain” — iliana.fyi

    https://iliana.fyi/blog/software-supply-chain/

    This is where the supply chain metaphor — and it is just that, a metaphor — breaks down. If a microchip vendor enters an agreement and fails to uphold it, the vendor’s customers have recourse. If an open source maintainer leaves a project unmaintained for whatever reason, that’s not the maintainer’s fault, and the companies that relied on their work are the ones who get to solve their problems in the future. Using the term “supply chain” here dehumanizes the labor involved in developing and maintaining software as a hobby.

    Everything that can be said about sponsorship and paying maintainers has already been said. Important work is still unfunded. Some of us, including me, don’t particularly mind that we’re not making money off of our weekend hacks. The problem is when the mere act of publishing software becomes a burden.

    You still cannot disable pull requests on a GitHub repository. A package repository might deem your software “critical”, adding requirements to publishing updates that you might not want to or be able to comply with. Google even wanted to disallow anonymous individuals from maintaining critical software and wanted to police the identities of others. Or, perhaps a maintainer tells someone that they won’t maintain a project anymore, and GitHub notifies thousands of dependent repositories, calling it a “critical severity” advisory. This was obviously a mistake, and GitHub withdrew and re-labeled it as low severity this morning, but it is far from the only time systems built to secure the “software supply chain” have failed to understand the nuances of open source software maintenance.

    I disagree with quite a lot of this, but it’s a really useful view from the perspective of the hobbyist programmer. People who publish their code, who open source it so that others can use it, may not be doing so with the intent that it is used in serious commercial applications.

    I do agree that the onus is on the software consumer to ensure that the software they are selecting is appropriate for their use case, rather than on the maintainer.

    But there’s a huge amount of software out there, and software dependency chains that result in individual hobbyist developers maintaining quite critical libraries.

    ImperialViolet - Passkeys

    https://www.imperialviolet.org/2022/09/22/passkeys.html

    “RP” stands for “relying party”. You (the website) are a “relying party” in authentication-speak. An RP ID is a domain name and every passkey has one that’s fixed at creation time. Every passkey operation asserts an RP ID and, if a passkey’s RP ID doesn’t match, then it doesn’t exist for that operation.

    This prevents one site from using another’s passkeys. A passkey with an RP ID of foo.com can’t be used on bar.com because bar.com can’t assert an RP ID of foo.com. A site may use any RP ID formed by discarding zero or more labels from the left of its domain name until it hits an eTLD. So say that you’re https://www.foo.co.uk: you can assert www.foo.co.uk (discarding zero labels), foo.co.uk (discarding one label), but not co.uk because that hits an eTLD. If you don’t set an RP ID in a request then the default is the site’s full domain.

    Our www.foo.co.uk example might happily be creating passkeys with the default RP ID but later decide that it wants to move all sign-in activity to an isolated origin, https://accounts.foo.co.uk. But none of the passkeys could be used from that origin! It would have needed to create them with an RP ID of foo.co.uk in the first place to allow that.

    But you might want to be careful about always setting the most general RP ID because then usercontent.foo.co.uk could access and overwrite them too. That brings us to the second control mechanism. As you’ll see later, when a passkey is used to sign in, the browser includes the origin that made the request in the signed data. So accounts.foo.co.uk would be able to see that a request was triggered by usercontent.foo.co.uk and reject it, even if the passkey’s RP ID allowed usercontent.foo.co.uk to use it. But that mechanism can’t do anything about usercontent.foo.co.uk being able to overwrite them.

    This is an absolutely fantastic guide to implementing and using passkeys for your website, and there’s some good advice in here as well, such as remembering that you can enable multiple passkeys per account.

    As a reminder, passkeys are the next iteration of WebAuthn, which allows the browser or operating system to attest that it holds and is managing the keys appropriately.

    Notice how the specification has carefully thought about failure modes from previous specifications. The web’s same-origin policy had a similar mechanism for letting a page claim a more general domain than its own, but here, requiring that each request also carry the specific origin that made it enables the backend server to make security decisions appropriately.
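    For the backend, those mechanisms boil down to two checks on each assertion: the origin recorded in clientDataJSON must be one you trust, and the first 32 bytes of authenticatorData must equal the SHA-256 of your RP ID. Here is a minimal Python sketch of just those two checks, using made-up domain names; real WebAuthn verification also covers the challenge, signature, flags and sign count.

```python
import hashlib
import json

# Minimal sketch of the origin and RP ID checks on a WebAuthn/passkey assertion.
# Domain names are hypothetical examples; this is not a full verifier.

RP_ID = "foo.co.uk"
ALLOWED_ORIGINS = {"https://accounts.foo.co.uk", "https://www.foo.co.uk"}


def check_origin_and_rp_id(client_data_json: bytes, authenticator_data: bytes) -> bool:
    client_data = json.loads(client_data_json)

    # The browser records which origin actually made the request, so the backend
    # can reject e.g. usercontent.foo.co.uk even if the RP ID would permit it.
    if client_data.get("origin") not in ALLOWED_ORIGINS:
        return False

    # The first 32 bytes of authenticatorData are SHA-256(RP ID), binding the
    # assertion to the RP ID the passkey was created under.
    expected_rp_id_hash = hashlib.sha256(RP_ID.encode()).digest()
    return authenticator_data[:32] == expected_rp_id_hash
```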

    You probably still shouldn’t be using passkeys as your only authentication method, but if you are rolling your own auth system anyway, offering passkey support can make logging in and authenticating much more secure in future.

    How we Abused Repository Webhooks to Access Internal CI Systems at Scale - Cider Security Site

    https://www.cidersecurity.io/blog/research/how-we-abused-repository-webhooks-to-access-internal-ci-systems-at-scale/

    As mentioned above, the first concern that comes to mind around abusing webhooks is a potential attempt by an attacker to trigger pipelines – a concern which also has multiple countermeasures provided by the SCM and CI vendors. But why should attackers limit themselves just to triggering pipelines (which most organizations aren’t vulnerable to anyway)?

    While the IP range of the SCM vendor webhook service was opened in the organization’s firewall to allow webhook requests to trigger pipelines – this does not mean that webhook requests cannot be directed towards other CI endpoints, besides the ones that regularly listen to webhook events. We can try and access these endpoints to view valuable data like users, pipelines, console output of pipeline jobs, or if we’re lucky enough to fall on an instance that grants admin privileges to unauthenticated users (yes, it happens), we can access the configurations and credentials sections.

    Lovely set of attacks that are hard to manage or deal with at scale.

    The problem essentially is that you expect HTTP requests to come in from the shared-tenant SaaS tool, but there’s no easy way to determine which customer triggered a given request, so attackers can set up their own tenant and then probe your systems.

    Most webhooks require some secret or token, but as described in here, even though webhooks are only supposed to call specific URLs or actions, chances are that the firewall doesn’t understand the protocol and will allow any web request from the vendor’s IP range through to the CI system.
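    The cheapest mitigation is to authenticate the webhook itself rather than trusting source IP ranges. As a sketch, assuming a GitHub-style X-Hub-Signature-256 header (an HMAC-SHA256 of the raw request body using a per-hook shared secret), verification looks something like this:

```python
import hashlib
import hmac

# Sketch of verifying a webhook signature before the request reaches anything
# interesting. Assumes a GitHub-style "X-Hub-Signature-256: sha256=<hex>" header
# computed over the raw body with a shared secret; other vendors differ.


def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking how much of the signature matched.
    return hmac.compare_digest(expected, signature_header)


if __name__ == "__main__":
    secret = b"hook-secret"           # per-hook shared secret (example value)
    body = b'{"action": "opened"}'    # raw request body as received
    header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    print(verify_webhook(secret, body, header))  # True
```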

    There are some recommendations in the blog, but architecturally, you might want to consider separating CI systems that can carry out privileged actions from CI systems that simply build code, which should limit the blast radius of attacks.

    AttachMe: critical OCI vulnerability allows unauthorized access to customer cloud storage volumes | Wiz Blog

    https://wiz.io/blog/attachme-oracle-cloud-vulnerability-allows-unauthorized-cross-tenant-volume-access

    In June, Wiz engineers discovered and reported #AttachMe , a major cloud isolation vulnerability in Oracle Cloud Infrastructure (OCI), prompting Oracle to patch the vulnerability within hours and without requiring customer action. 

    • Potential impact— Before it was patched, all OCI customers could have been targeted by an attacker with knowledge of #AttachMe . Any unattached storage volume, or attached storage volumes allowing multi-attachment, could have been read from or written to as long as an attacker had its Oracle Cloud Identifier (OCID), allowing sensitive data to be exfiltrated or more destructive attacks to be initiated by executable file manipulation.
    • Remediation— Within 24 hours of being informed by Wiz, Oracle patched #AttachMe for all OCI customers. No customer action was required.
    • Key conclusions— Cloud tenant isolation is a key element in cloud. Customers expect that their data isn’t accessible by other customers. Yet, cloud isolation vulnerabilities break the walls between tenants. This highlights the crucial importance of proactive cloud vulnerability research, responsible disclosure, and public tracking of cloud vulnerabilities to cloud security.

    This was a nasty bug, and a very scary attack, especially in mass-market cloud services.
    Oracle responded promptly, and there were a number of requirements, such as needing the storage volume to be unattached, that make this a bit harder to carry out.

    But any attack on a multi-tenant cloud provider that breaches the tenant separation is a big deal, and a big worry.

    mjg59 | Bring Your Own Disaster

    https://mjg59.dreamwidth.org/61089.html

    There's obvious mutual appeal to having developers use their own hardware rather than rely on employer-provided hardware. The user gets to use hardware they're familiar with, and which matches their ergonomic desires. The employer gets to save on the money required to buy new hardware for the employee. From this perspective, there's a clear win-win outcome.

    But once you start thinking about security, it gets more complicated. If I, as an employer, want to ensure that any systems that can access my resources meet a certain security baseline (eg, I don't want my developers using unpatched Windows ME), I need some of my own software installed on there. And that software doesn't magically go away when the user is doing their own thing. If a user lends their machine to their partner, is the partner fully informed about what level of access I have? Are they going to feel that their privacy has been violated if they find out afterwards?

    But it's not just about monitoring. If an employee's machine is compromised and the compromise is detected, what happens next? If the employer owns the system then it's easy - you pick up the device for forensic analysis and give the employee a new machine to use while that's going on. If the employee owns the system, they're probably not going to be super enthusiastic about handing over a machine that also contains a bunch of their personal data. In much of the world the law is probably on their side, and even if it isn't then telling the employee that they have a choice between handing over their laptop or getting fired probably isn't going to end well.

    I think this is a little over-egged on a strawman, but the points raised in here about why you might not want to allow users to bring their own devices are all valid and correct.

    Of course, there are gradients and shades of grey here. Enabling users to check their email on their personal phone can be part of a BYOD policy, but you might not allow them to work off a personal laptop. Or you might allow some staff, those at low risk of compromise, to work from personal devices when it’s appropriate, but not allow staff with access to financial records or customer data, or those in your software development team, to do so.

    But whenever you do allow someone to BYOD, you really do have to think about the implications of doing so. Are you monitoring the device, and if so, what does that mean for the privacy of the user? Would you forensically examine the device if it were compromised, and if you did, what would happen if something else were found at the same time?

    These are solvable questions (and there’s more in the blog post), but they must be considered when thinking about BYOD policies.

    A Practical Guide To Building Team Culture (Including Remote-Team Culture) | Meta Bulletin

    https://davidepstein.bulletin.com/a-practical-guide-to-building-team-culture-including-remote-team-culture/

    I would say it’s more often the first case. Because it’s clear that brashness creates an advantage (or, just as important, the perception of an advantage) in winner-take-all environments. Perhaps the supreme example of this is somebody we’ve both written about: Lance Armstrong. In that pharma-enhanced era of the early 2000s, Armstrong’s brashness functioned as a kind of force field that kept rivals afraid and allies in line. And I think it’s possible to draw a straight line from Armstrong to the brashness of Travis Kalanick, Adam Neumann, Elizabeth Holmes, and others who made their brands (and billions of dollars) by generating the same mythos: I am different. I am smarter and hungrier than the rest… I am destined to succeed . And they all ended up, like Armstrong, taking the long fall and proving that they were, in fact, destined to fail.

    All of which speak to a larger point: that the heroic-genius model of leadership is outdated. The world today is so complex and fast-changing. Believing that a single human can be massively smarter and hungrier and more creative than everyone else is sort of like believing in Bigfoot. Accordingly, most of the great groups I’ve studied aren’t led by heroic geniuses, but rather by teams of humble leaders who collaborate, listen, and, above all, learn from their mistakes.

    The problem is, as a society we absolutely LOVE the heroic-genius story. We love talking about it, writing about it, believing it — as the swirl around Elon Musk continues to prove. Maybe Musk really is smarter than everybody — I met someone last month who had worked with him and claimed that it was true. But I’m going to put myself in the “seriously doubtful” category for the time being.

    GitHub - vulhub/vulhub: Pre-Built Vulnerable Environments Based on Docker-Compose

    https://github.com/vulhub/vulhub

    Vulhub is an open-source collection of pre-built vulnerable docker environments. No pre-existing knowledge of docker is required, just execute two simple commands and you have a vulnerable environment.

    This looks like an excellent resource for those of you who want to try out your pentesting skills or simply write a proof of concept for exploiting a known vulnerability. I notice some famous vulnerabilities in here, such as log4j, struts2, bash (shellshock) and so on.