Cyberweekly #188 - Trust networks
Published on Sunday, March 13, 2022
How networks affect us all is both intuitive and supremely unintuitive at the same time.
GPG rather famously suggested that people hold key signing parties, where you would gather in person and check someone's passport or driving license to validate that they were who they said they were. This would build a "web of trust", where you might know a number of people and trust that they had validated the other people in their network.
The problem with this is that trust isn't actually transitive, and it's not a single binary thing.
Just because I've met you and validated that you are who you say you are (and we'll leave aside that I'd have no idea how to check whether your passport or driving license was forged), it doesn't mean that I have any trust in your ability to validate others. It doesn't mean that I trust you in all circumstances: to introduce me to others, to verify others effectively, or to make good judgements in all situations.
But networks are the way that humans build a perception of trust in those around them. We are told to be careful about who we link to on social media, because "they might not be trustworthy", but one of the signals people instinctively look for is whether or not the new person knows their friends or acquaintances. Is being linked to friends of yours really a sign of trust? Someone with lots of links across a wider network will appear, from within your smaller network, to be well connected to your network through friends of friends.
Increasingly these networks are illusory or phantom. As we move online, our social relationships shift towards online engagements with people. That makes it far harder to judge people's interest in us and their attention, and to read all of the non-verbal feedback mechanisms that we are so used to. That's not to say that the relationships are more superficial or even less real. It can be far easier to open up to someone by text, or to someone who can't see your face, because you can be far more vulnerable.
But it opens us up to potential manipulation by others: in how we assess the superficiality of a relationship, or even the validity and reality of our co-workers. Deepfake video and audio, and the development of AI conversation bots, make it increasingly hard to tell a computer from a human, especially over such a socially lossy medium.
So what can we do about that? As we move more and more to remote work for a large proportion of our work, we need to remember that it takes time and effort to invest in those relationships. Companies need to invest in social tools that aren't entirely focused on work, that let people share pictures of their cats or dogs and talk about random things, and that ensure we humanise and empathise with our coworkers. These things are far harder to fake, especially over time, and they help build connections between us, connections that build a network of trusted relationships.
- We discover a new way that attackers could launch reflected denial of service (DoS) amplification attacks over TCP by abusing middleboxes and censorship infrastructure. These attacks can produce orders of magnitude more amplification than existing UDP-based attacks.
- This is the first reflected amplification attack over TCP that goes beyond sending SYN packets and the first HTTP-based reflected amplification attack.
- We found multiple types of middlebox misconfiguration in the wild that can lead to technically infinite amplification for the attacker: by sending a single packet, the attacker can initiate an endless stream of packets to the victim.
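The amplification arithmetic behind the excerpt above is easy to sketch. This is a rough illustration only: the request and block-page byte counts below are hypothetical examples, not measurements from the paper.

```python
# Illustrative amplification-factor calculation for a TCP middlebox
# reflection attack. Byte counts are hypothetical, not from the paper.

# A single spoofed packet carrying an HTTP request for a censored
# domain, with the victim's IP forged as the source address.
forbidden_request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example-blocked-site.com\r\n\r\n"
)
request_size = 40 + 20 + len(forbidden_request)  # IP + TCP headers + payload

# The middlebox replies to the spoofed source (the victim) with its
# block page -- assumed here to be a 5 KB HTML warning.
block_page_size = 5 * 1024

amplification = block_page_size / request_size
print(f"request: {request_size} bytes, response: {block_page_size} bytes")
print(f"amplification factor: {amplification:.1f}x")
```

And that is the benign case: the misconfigured middleboxes the researchers found keep resending, so the effective amplification is unbounded.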
The Zoom call had about 40 people on it - or that's what the people who had logged on thought. The all-staff meeting at the glamorous design agency had been called to welcome the growing company's newest recruits. Its name was Madbird and its dynamic and inspirational boss, Ali Ayad, wanted everyone on the call to be ambitious hustlers - just like him.
But what those who had turned on their cameras didn't know was that some of the others in the meeting weren't real people. Yes, they were listed as participants. Some even had active email accounts and LinkedIn profiles. But their names were made up and their headshots belonged to other people.
The whole thing was fake - the real employees had been "jobfished". The BBC has spent a year investigating what happened.
An absolutely terrifying counterpoint to the story from a few weeks back of new hires turning up at companies as different people from the ones who interviewed for the job.
In this case, the vast majority of the company appears not to actually exist, and people were fooled into working, essentially for free, for a scam company.
I call this the ANEW effect, for Asymmetric Network Effects Warfare: when a subnetwork splits from the main network, it will suffer more than the main network. That’s because the links between the small subnetwork and the main one account for a big chunk of the links of the subnetwork, but only a small fraction of the links of the big network. Banning Western banks from dealing with Russian banks is an inconvenience for most of them. But it’s existential for the Russian banks, who are likely to default one after the other. It’s not just the financial system. For example, flights are networked. Isolating Russia might be inconvenient for the West. It’s mortal for Russian airlines, who won’t be able to easily fly outside of Russia anymore.
I'm not convinced by everything in this article, and it's a bit all over the place, but this bit stood out to me. This view of network asymmetry is something that pops up all over the place. In this case, the impact of sanctions affects the smaller network far more than the larger network. But we also see it in social networks, where small social groups can be overwhelmingly affected by the actions of people in the larger groups without the larger group noticing as much.
Networks themselves, and how we map, model and understand them, are a fascinating topic of study, and in this modern era, one that almost every leader should have at least a basic understanding of.
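The asymmetry is easiest to see with some toy numbers. In this sketch (all sizes are made-up illustrative figures, not data from the article), cutting the cross-links removes most of the small network's connections but barely dents the large one's:

```python
# Toy model of the "ANEW" asymmetry: severed cross-links are a large
# share of the small subnetwork's connections but a tiny share of the
# main network's. All numbers are illustrative.

sub_internal_links = 50        # links wholly inside the small subnetwork
main_internal_links = 100_000  # links wholly inside the main network
cross_links = 200              # links between the two networks

sub_loss = cross_links / (sub_internal_links + cross_links)
main_loss = cross_links / (main_internal_links + cross_links)

print(f"subnetwork loses {sub_loss:.0%} of its links")
print(f"main network loses {main_loss:.1%} of its links")
```

With these numbers the subnetwork loses 80% of its links while the main network loses about 0.2%, which is the "inconvenient for the West, mortal for Russian airlines" asymmetry in miniature.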
Relative to the Pandemic, the single biggest work question I’ve been asking myself for two years is: what did we lose? What is the measurable and objective loss for teams not working in close proximity? I’ve been looking for cracks. I’ve been looking for leading indicators of future doom. The Great Resignation seems like a proper crack, right? But are people quitting their jobs because they can’t work together or because their current job sucks and all this terror in the air has given them a new appreciation of what really matters?
What I see are endless bits of friction:
No, I can’t hear you. You’re muted.
No, I can’t see what you’re sharing.
No, I have no idea that you’re in a bad mood. You’re just the same old postage stamp two-dimensional muted headshot that you were in the last three meetings.
No, I have no idea that everyone hates the idea you just proposed because my ability to read the room has been mostly erased. I can’t tell the difference between “We hate this idea” silence and “We’re mostly just quiet because it’s a chore to speak during a video conference” silence.
This is a useful reminder that just because all of your individual team members are remote, it doesn't make you a remote company. You need to actively invest in a number of other things to ensure that your culture can work remotely. This likely includes paying travel fares for your staff to get together on a regular basis, so that they can still experience in-person meetings and get to know one another.
So a token that’s linked to a managed identity is not an issue in itself. You are supposed to be able to get a token for your own managed identity. But if you’ve been following, there were additional ports accessible locally. Each time I ran an automation job, I saw the port changing, but it remained around the same range.
I wrote up a quick Python script to make HTTP requests to 20 ports starting from 40,000. It was simple to do.
Random ports gave me JWT tokens. I executed the script a few more times and different ports gave me different tokens! It was obvious to me that I was actually accessing other people’s identity endpoints. I already proved that these tokens could be used to manage the Azure account, if given enough permissions, so accessing data of other tenants was not necessary.
We wanted to understand how far this simple flaw could go. We used the schedules feature of Azure Automation to try grabbing tokens from a few hundred ports and seeing which tenants came up. We did not store the token and only extracted the metadata about the tenant (tenant ID and automation account resource ID).
In this short period of time, before the issue was patched, we saw many unique tenants, including several very well-known companies! And this was just from a scheduled run every hour. If we ran continuously, it’s likely we would have captured much more (It’s expected the identity endpoint goes down as soon as the automation job finishes, so you’d have to grab it really fast in some cases).
This is a nasty flaw in Azure Automation (now fixed), and something that can be common in multi-tenancy systems.
You are expected to treat the machine as if it's all yours, but multiple tenants are on a machine. That means that if you expose anything over a network, whether a database, or in this case identity tokens, you need to validate that the request came from the original tenant.
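By way of illustration only (this is a generic sketch of the principle, not how Azure actually fixed the flaw), one way to tie a local endpoint to its tenant is to hand each tenant's job a per-endpoint secret at startup and reject any request that doesn't present it:

```python
# Generic sketch of per-tenant request validation on a shared host.
# Class name and mechanism are illustrative, not Azure's actual fix.
import hmac
import secrets

class TenantEndpoint:
    """A local endpoint that only answers the tenant it was created for."""

    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        # Secret handed only to this tenant's job when it starts.
        self.secret = secrets.token_hex(32)

    def handle(self, presented_secret: str) -> str:
        # Constant-time comparison so a co-tenant can't guess it byte by byte.
        if not hmac.compare_digest(self.secret, presented_secret):
            raise PermissionError("request did not come from the original tenant")
        return f"token-for-{self.tenant_id}"
```

The point is simply that "it's on localhost" is not an authentication check when localhost is shared.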
Microsoft fixed this in just a few days, and then took several months to validate that there was no impact. It's therefore also a good reminder that even the simplest vulnerabilities are just that, vulnerabilities, and may never actually have been exploited.
Several factors determine the worth of access, and asking prices vary significantly among sectors, countries and access brokers. Access with elevated privileges typically attracts a higher asking price, as does access to large corporations with higher annual revenues or advertisements by more-established access brokers. Some brokers auction the access, offering a “buy-it-now” price or attempting to encourage a bidding war.
The sectors attracting the highest average asking price for access were government, financial services, and industrial and engineering organizations. The most advertised sector does not necessarily attract the highest asking price; for example, access to the academic sector was, on average, priced at $3,827 USD. In comparison, the government sector — which was the second most advertised — attracted an average asking price of $6,151 USD.
This is an interesting report, but this finding in particular seems to stand out. Mostly, government organisations seem to say that they don't pay ransoms, and so far, most people seem to be associating Initial Access Brokers with Ransomware actors. If that's the case, who is paying this much for initial access?
Government is also a wide sector, especially with different governments around the world, and central and federal governments being quite different to local government or community councils.
Of course, the price that access brokers are asking doesn't mean that anyone is actually paying it, so it's hard to draw strong conclusions from this data.
The findings suggest that Apple's claims of the Find My protocol being "built with privacy in mind" fall short of the mark, with Positive Security spoofing the protocol by having an open-source device broadcast "2,000 preloaded public keys" as a way of fooling some anti-stalking protections.
The proof-of-concept device was kept with a volunteer user for five days, during which time it did not show on Apple's Tracker Detect app – while "location reports for the broadcasted public keys were uploaded and could be retrieved."
This is quite scary. Apple's anti-stalking protections are there to protect people against malicious use of AirTags. The fact that these prototype devices can track the location of a user without triggering the alerts means that they are perfect for almost all malicious uses, and yet have no real legitimate use.
This work was presented at USENIX Security 2021 and received a Distinguished Paper Award. Our conference talk is also available here.
Collectively, our results show that censorship infrastructure poses a greater threat to the broader Internet than previously understood. Even benign deployments of firewalls and intrusion prevention systems in non-censoring nation-states can be weaponized using the techniques we discovered.
This is a lovely bit of security research. In essence, proxies that attempt to block users from browsing "forbidden content" often return a warning to the user. But with the right spoofing technique, you can make them send that warning page to an internal user's computer, over and over and over again.
In 2021, vendors took an average of 52 days to fix security vulnerabilities reported from Project Zero. This is a significant acceleration from an average of about 80 days 3 years ago.
In addition to the average now being well below the 90-day deadline, we have also seen a dropoff in vendors missing the deadline (or the additional 14-day grace period). In 2021, only one bug exceeded its fix deadline, though 14% of bugs required the grace period.
Differences in the amount of time it takes a vendor/product to ship a fix to users reflects their product design, development practices, update cadence, and general processes towards security reports. We hope that this comparison can showcase best practices, and encourage vendors to experiment with new policies.
This data aggregation and analysis is relatively new for Project Zero, but we hope to do it more in the future. We encourage all vendors to consider publishing aggregate data on their time-to-fix and time-to-patch for externally reported vulnerabilities, as well as more data sharing and transparency in general.
This is pretty much entirely positive news. This data tells us that vendors are reacting faster, that fixes are being shipped to users faster, and that we are getting more secure as a result of Google's (and others') vulnerability research and disclosure programs.
If you're responsible for the designing and running government online services, you'll probably be familiar with GPG43 and GPG53, also known as:
- Good Practice Guide (GPG) 43: Requirements for secure delivery of online public services
- Good Practice Guide (GPG) 53: Transaction monitoring for HMG online service providers
When CESG became part of NCSC, these were hosted on the GOV.UK website because the Government Digital Service (GDS) still required both sets of guidance.
Fast forward to 2022. Although GPGs 43 and 53 have been widely used across government for many years (and both are still referenced in some HMG contracts) they refer to a number of concepts and programmes that are no longer relevant (such as impact levels). Of course, the requirement to properly secure online services (including transaction monitoring) is more important than ever, which is where the new guidance comes in.
So, we've replaced the GPGs with two new pieces of NCSC guidance:
Those of you who have worked with the UK Government might remember some of these bits of guidance.
Getting these updated is really helpful for people trying to modernise their digital services and deliver new systems, as it removes some of the last vestiges of advice that just didn't scale into an internet age.
How does SHA-256 actually work? This is a beautifully implemented version of it, that lets you put in some text, and then walk through the 113 or so steps needed to generate a hash from the text.
The clever use of colour and space here makes it clear just what is being added together, and shows just how clever the algorithm is.
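If you want to check the walkthrough's output against a real implementation, Python's standard library exposes SHA-256 directly, and the first step the visualisation shows, padding the message out to a multiple of 64 bytes, takes only a few lines to reproduce (this padding function is my own sketch, not taken from the site):

```python
# SHA-256 of a message via the standard library, plus a sketch of the
# Merkle-Damgard padding step that the visualisation walks through first.
import hashlib
import struct

def sha256_pad(message: bytes) -> bytes:
    """Pad to a multiple of 64 bytes: 0x80, zeros, then 64-bit bit length."""
    bit_len = len(message) * 8
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded)) % 64)
    return padded + struct.pack(">Q", bit_len)

msg = b"abc"
print(hashlib.sha256(msg).hexdigest())
print(len(sha256_pad(msg)))  # 64: "abc" fits in a single padded block
```

The remaining steps, the message schedule and the 64 compression rounds per block, are exactly what the visualisation's colour-coded walkthrough animates.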