Cyberweekly #212 - The danger of frameworks
Published on Sunday, October 09, 2022
Frameworks are brilliant: they let us build something quickly, provided it’s the same shape as the framework intends.
We use frameworks in a number of areas, from process frameworks like SCRUM or SAFe, to software frameworks like Ruby on Rails, or Django Framework, through to risk or cyber frameworks like ISO27001, PCI or NIST Cybersecurity Framework.
But frameworks can be harmful to organisations, because they create a paved path that we can follow, but that paved path may well not lead to where we want to go.
Most frameworks have clever people working on them who attempt to generalise the destination. They don’t care if your company makes audio equipment, packages and sends parcels, trades in realtime stock information, or any other domain you care to think of. Their framework says that to be cyber secure you must follow a five-step programme: you must Identify, Protect, Detect, Respond and Recover, and if you do that, you’ll be good.
But if you do what is needed, and you take that framework and adjust it for your context and your organisation, then how do you know which parts of the framework are important and which are not? Some of those dull-looking parts might actually be critical to ensuring the whole thing works, and you won’t know that until you start using it and don’t get results.
Secondly, frameworks like Agile, Zero Trust or Shift Left can rapidly lose their original meaning. As each vendor or adopter tries out the framework and people jump on the bandwagon, they each trumpet their successes and their failures, and they can otherwise unbalance a very simple idea into a bloated framework that no longer means anything consistent.
Frameworks are a useful thinking tool, they can be incredibly valuable for consultants and strategists to ensure that you haven’t forgotten anything important, and they can help assess your current abilities to see where critical gaps are. But they shouldn’t be mistaken for blueprints, and they need to be adopted carefully to be useful.
- Containers Overview
  - What is a container?
  - What are different types of container solutions?
  - How long have containers been around?
  - How are Virtual Machines and Containers different?
- Container Security Benefits
  - Application Isolation
  - Reduced Attack Surface
- Container Security Risks
  - Added Complexity
  - Not Complete System Isolation
- Container Security Defense-in-Depth
- Container Threats Mapped to STRIDE
- Wrapping Up
- We take a CSV which lists a bunch of Company Names.
- We then do a Google search, and go to the first result (‘I’m Feeling Lucky’).
- We assume the first result is the homepage of that company, and the domain they would use for their tenant.
- We pull out the host name, and then check it against the Open ID Configuration endpoint.
- If we get a valid response from the endpoint, then we say that they have a tenant!
- Otherwise, we say they do not have a tenant.
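The tenant-check step can be sketched in a few lines of Python. This is a minimal sketch, assuming Azure AD’s per-domain OpenID configuration endpoint behaves as described in the article; it skips the Google-search step and takes the candidate domain directly:

```python
import json
import urllib.error
import urllib.request

def openid_config_url(domain: str) -> str:
    # Azure AD serves an OpenID configuration document per tenant; a verified
    # custom domain can stand in for the tenant identifier in the URL.
    return f"https://login.microsoftonline.com/{domain}/.well-known/openid-configuration"

def has_aad_tenant(domain: str) -> bool:
    """Return True if the domain appears to back a live Azure AD tenant."""
    try:
        with urllib.request.urlopen(openid_config_url(domain), timeout=10) as resp:
            config = json.load(resp)
        # A real tenant returns its endpoints in the configuration document.
        return "token_endpoint" in config
    except urllib.error.HTTPError:
        # Unknown domains come back as an HTTP error (e.g. "invalid_tenant").
        return False
```

As the article notes below, a negative answer is weak evidence: the company may simply use a different domain for its tenant than for its website.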
The most harmful byproduct is that:
Companies perceive the challenge as one of adopting the framework instead of creating conditions similar to healthy companies (or other learning situations). They blame their teams for being slow to figure the framework out when things aren't working. Frameworks become their crutch.
Instead of introspection, a leader will likely buy into the whole story of their team being babies learning how to walk. The irony is that it is the leader who needs to improve.
We start to believe the frameworks represent the ideal instead of being wholly artificial "hacks" for less-than-optimal situations.
Be skeptical of frameworks. And be skeptical of explanations like “crawl, walk, run” that infantilize people and don’t describe healthy adult learning environments.
Frameworks for managing our staff, our processes, our software or even our risk are just that, frameworks.
If we see the goal as being “adopt the framework” then we are missing what the original goal was, which was to have high performance teams, efficient processes, quality software or reduced risks.
Google have phrased their implementation as “BeyondCorp” which takes the connotations away by talking about the edges. Could this be evolved to take on a more generic meaning?
“Perimeterless Security” or “Boundaryless Security” - in a similar vein to BeyondCorp, it’s conveying a sense of security that’s everywhere. Quite a mouthful to say.
“Continuous Verification” or “Continuous Security” - this is somewhat accurate, though it sounds a bit tedious; would a user think that they’ll need to keep logging in every few minutes?
“Just in Time Access” - not too bad, this conveys the why of certain things happening. This might get confused with Just in Time compilation .
“End to End Security” - it’s generic, and sounds similar to “End to End Encryption” which has a modern usage made popular by Whatsapp. Could work.

Zero Trust is a phrase with negative connotations. I hope that someone with a better head can come up with more suitable naming and messaging around the Zero Trust model to help inculcate its benefits and its necessity, and get buy-in from users.
This is a good comment on the whole Zero Trust movement.
One thing I’ve noted in the past is that there will be well-meaning senior technology decision makers who do not understand the details of things like zero trust. They will take the idea and run with their faulty understanding, causing havoc and mayhem as they go.
We probably can’t prevent this, although appointing more senior leaders who have at least a basic understanding of technology from the internet era would be a good start.
But it also means that we need to use terms that actually say what they mean. Otherwise we’ll “shift left” into a “zero trust” “DevSecOps” future that doesn’t actually adopt any of these things.
On Zero Trust, at the very least, we can reliably and regularly use it in the full form, “Zero Trust Networking” to indicate that it’s about having no implicit trust in the network.
Understand container security challenges and learn about critical container security best practices, such as securing images, registries, etc.
A lovely overview for those of you who haven’t had a chance to take a proper look at containers yet. There’s some good reading as an introduction to containers, as well as material that covers how to think about securing your containers.
When a company is planning to migrate their infrastructure and applications to the cloud, or wants to create a new service, the IT department, Cloud or DevOps team will have the task of creating the necessary automated infrastructure deployment while keeping security in mind. As security is more and more important, quality should be built in instead of tested in afterwards. This is a different way of working than before. There are a lot of moving pieces, and many different teams might have to work together. It is difficult to know all the parts of the environment and design all the security controls at every step of the deployment, or through the automated deployment.
The good news is that there is a lot of information and tooling available today for anyone who would like to automatically deploy infrastructure resources with built-in security in the cloud by developing secure infrastructure as code. This article attempts to collect the main starting points, creating a guide on how to integrate security into infrastructure as code, and shows how these security checks and gates, tools and procedures secure the infrastructure, mentioning free and/or open-source tools wherever possible.
This is a fantastic resource that gives a good overview of the whole “Securing your Infrastructure as Code” domain. Lots of resources are linked here, and it is well worth the time digesting it and following through as many links as possible.
So in summary what does this script do?
One thing to note about these results is that when we get a result that says the company has a tenant, we are nearly 100% correct in that fact. However, if we say that a company does not have a tenant, we are not necessarily correct. It is possible that the Google result did not point to their actual domain name, or they are using a different domain name for their AAD Tenant.
This is a cute mechanism for determining whether a company has an O365 tenant.
I wrote a prototype tool a while back that did something similar based off of someone’s email address, https://github.com/bruntonspall/universal-signin, which looks up the domain’s MX record to perform much the same diagnostic.
In both cases there is a failure rate, from companies using a mail-fronting service (say, for anti-spam reasons), or a company using a different domain for its O365 than its website, but they’re useful OSINT practices.
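The MX-record variant can be sketched along the same lines. The provider-matching rules here are illustrative only, and the live lookup assumes the third-party dnspython package is available:

```python
def classify_mx(mx_host: str) -> str:
    """Map an MX hostname to a likely mail provider (illustrative rules only)."""
    host = mx_host.lower().rstrip(".")
    if host.endswith("mail.protection.outlook.com"):
        return "Microsoft 365"
    if host.endswith("google.com") or host.endswith("googlemail.com"):
        return "Google Workspace"
    return "unknown, or a mail-fronting service"

def mail_provider(domain: str) -> str:
    import dns.resolver  # third-party dnspython, needed only for live lookups
    answers = dns.resolver.resolve(domain, "MX")
    # Use the highest-priority (lowest preference value) MX record.
    best = min(answers, key=lambda r: r.preference)
    return classify_mx(str(best.exchange))
```

A mail-fronting or anti-spam service in front of the domain shows up as an unrecognised MX host, which is exactly the failure mode described above.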
This document provides a practitioner's perspective and contains a set of practical techniques to help IT executives protect an enterprise Active Directory environment. Active Directory plays a critical role in the IT infrastructure, and ensures the harmony and security of different network resources in a global, interconnected environment. The methods discussed are based largely on the Microsoft Information Security and Risk Management (ISRM) organization's experience, which is accountable for protecting the assets of Microsoft IT and other Microsoft Business Divisions, in addition to advising a selected number of Microsoft Global 500 customers.
This collects together the Microsoft advice on securing your enterprise IT crown jewels, from secure administration workstations, to locking down the AD server itself.
For example, if an attacker wants access to your online banking account, it’s possible that they could gain direct physical access to your computer, but it’s overwhelmingly more likely for them to gain access through credential theft from thousands of miles away. Therefore, a zero-trust implementation requires not just a username and password, but proof of possession of your personal device, through a code in a text message or recited over a phone call. And since both phone calls and SMS messages have documented vulnerabilities, the more secure methods require either a smartphone or an additional accessory, like a security key.
But this does not work for everyone. For one thing, it assumes each user has a personal device. This alone would already exclude many of the people I met at the computer class—who often simply can’t afford one.
According to Pew Research Center, about three-quarters of U.S. adults own a laptop or desktop computer, and 97 percent of U.S. adults own a cell phone of some kind, with 85 percent owning a smartphone. Smartphone dependency, or reliance on smartphones for most or all online access, has declined from 20 percent in 2018 to 15 percent today, but remains at 27 percent among people with a household income less than $30,000 and 32 percent among those with less than a high school degree. One-quarter of Hispanic Americans are smartphone-dependent, as compared to 17 percent of Black Americans and 12 percent of White Americans.
This is important. Our security features in our products cannot exclude users, especially the ones who are most in need.
Our digital economy runs the risk of leaving the poorest and most vulnerable behind, and we need to ensure that our products can help those people, while still remaining secure. That’s a tricky tightrope to walk, but the first principle is going to have to be to remember that “no one size fits all”.
Some organisations are going to have to seriously consider how they can either allow some users not to have a second factor, or enable power users, such as librarians or other “digital assistants”, to authenticate users for them. Both are mechanisms that might be abused, but ensuring we have accessible channels for those who need to access services is important enough that we have to do something.
Cisco Talos Intelligence Group - De-anonymizing ransomware domains on the dark web
We have developed three techniques to identify ransomware operators' dark websites hosted on public IP addresses, allowing us to uncover previously unknown infrastructure for the DarkAngels, Snatch, Quantum and Nokoyawa ransomware groups.
The methods we used to identify the public internet IPs involved matching threat actors’ TLS certificate serial numbers and page elements with those indexed on the public internet, as well as taking advantage of ransomware operators’ security failures.
In de-anonymizing the dark web infrastructure used by ransomware actors, we can enable hosting providers to reduce illegal activity on their networks, enhance threat actor tracking, assist in possible law enforcement investigations, and/or slow ransomware operations as they make operational changes.
Some nice detective work here, using ransomware operators’ mistakes to help de-anonymise their networks and track their work.
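The certificate-matching technique Talos describes boils down to a join on serial numbers. A toy sketch with made-up data shapes (collecting the serials from onion services and from internet-wide scan data is the hard part, and is not shown):

```python
def match_serials(onion_certs: dict[str, int],
                  clearnet_certs: dict[str, int]) -> list[tuple[str, str]]:
    """Pair onion hosts with clearnet IPs presenting the same TLS cert serial."""
    # Index the internet-wide scan results by certificate serial number.
    by_serial: dict[int, list[str]] = {}
    for ip, serial in clearnet_certs.items():
        by_serial.setdefault(serial, []).append(ip)
    # Any onion service whose certificate serial also appears on the clear
    # web is a candidate for the hidden service's real hosting IP.
    return [(onion, ip)
            for onion, serial in onion_certs.items()
            for ip in by_serial.get(serial, [])]
```

The same join works for distinctive page elements (favicons, titles) in place of serial numbers; anything the operator reuses on both sides of the Tor boundary is a potential pivot.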
The SEC might plausibly come to a broker in 1948 and say “show me all of your internal communications for the last two years,” and he (he) might hand the SEC, like, 30 memos, and the SEC might say “ah right that’s a reasonable number of memos” and then go read them.
One thing that has happened in the intervening 74 years is that written communication has become much much much much much much much easier and more casual. I used to work on a desk at Goldman Sachs Group Inc., where the default method of communication with someone who sat two seats away from you was instant messaging. There were some mornings when I sent more than 100 inter-office memoranda, though like 20 of them would be “lol” or “fml.” And so then if the SEC thought that a broker was up to no good, it could say “give me all of the internal communications of one particular desk for the month of September,” and the broker would hand over like 500,000 emails and instant messages, and the SEC would run some search terms on this vast corpus, and the search terms would be like “fraud” and “spoof” and “manipulate” and “sucker” and “fml,” and they’d get a hit like “fml i just did a fraud oops” and go from there to build a case. In 1948, it would be very weird for a broker to send around an inter-office memo with the subject line “Re: We need to do more fraud.” He might say it, but only out loud, in person, in a form not required to be preserved for SEC examination. By 2011, it was just expected that if a broker was doing fraud there would be dozens of eternally preserved electronic messages about it.
This, from Matt Levine’s brilliant Money Stuff newsletter, reminds me about the conceptual gaps between life as perceived and life as actually lived.
The rules and regulations given by the SEC appear to struggle with the everyday change in how we communicate. This is really difficult for regulators to work around: although they have traditionally lacked the power to surveil and audit most communications, as those communications have moved into auditable mechanisms (such as instant messaging), it feels very difficult for them to give up the accidental rights that they’ve gained to check such things.
But as Matt points out here, for the last few decades you could easily have communicated out loud with people, and you would only have been caught if a whistleblower was listening to your conversation. These powers to inspect all messages are entirely new and were never intended by the drafters of the legislation.
It’ll be interesting to see how this changes over time, whether we see regulators deliberately give up power of retention and audit, or whether people find other ways around it.
Yurchenko spent about eight years in the US capital, and his time there was mostly distinguished by his shoddy professionalism. He slept around, including with the wife of a Soviet diplomat and with an American woman who may have been an FBI informant. In 1976, while Yurchenko was serving as head of security at the DC rezidentura, a former CIA officer named Edwin G. Moore threw a package over the wall of the Soviet embassy. Yurchenko and Dmitri Yakushin, the KGB rezident who Moore had already made several attempts to contact at his home, sent a cable to KGB Center in Moscow discussing the situation. Yuri Andropov, the KGB chairman, cabled back that the incident was likely an FBI provocation and that they shouldn't respond to the volunteer. Worried that the package might contain a bomb, Yurchenko handed it over to the DC police. It turned out that Moore was a genuine traitor hoping to cash in on the 22 years he had spent in the agency's research division. He was promptly arrested, and when the FBI searched his home, they found hundreds of pages of CIA-related documents and photographs, including a typewritten note that offered access to CIA secrets for $10 million. (The package that Moore tossed over the embassy wall had included a request for $200,000.) Moore, who had a heart attack while in custody, was found guilty, sentenced to 15 years in prison, and paroled three years later. Yurchenko — and Yakushin — had whiffed on collecting a potential windfall of US secrets. For Yurchenko's KGB colleagues, Victor Cherkashin wrote, "the incident became notorious as a glaring example of unprofessionalism." Kalugin called it "an indescribably stupid error." It didn't much affect Yurchenko's career, as his connections helped him rise to senior intelligence posts, where he gained access to highly secret information about Soviet agents in North America and other clandestine operations abroad.
A lovely set of stories about the height of cold war espionage and how some of the various spies overlapped. It’s also notable that amidst all the stories of expert espionage there’s a surprising number of stories of incompetence as well.