Cyberweekly #224 - Building trust between teams

Published on Sunday, September 24, 2023

Welcome back everyone!

It's been 6 months since my last issue, in what has become an increasingly and unexpectedly long sabbatical from CyberWeekly, I'm pleased to say that I'm back! I ended up taking a break for a combination of workload, focus and some personal reasons, which all added up to make it hard to find the time to sit and read news on a regular basis, let alone formulate any analytical thought about it.

However, things have gotten easier and I've spent the last few weeks reading various things that I had queued over the last 6 months, and am finally in a position to start curating interesting reading for you all again.

This week, I've been going back over some of the older articles and looking at the continual theme around building trust between teams, breaking down silos and mutual respect.

Having one foot in technology and the other in security, I am witness to a lot of disrespectful attitudes on a regular basis. From technologists complaining that security just says no, or security people talking about technologists being "on the planet Zog", to everyone complaining that executives or senior managers make befuddling decisions, it's sometimes a bit dispiriting to witness.

Any organisation has to be made up of people from different backgrounds, with different skills. That means they also have fundamentally outlooks on life, and can often look at the same situation and have wildly different reactions.

If there is a successful phishing attack on an organisation, security people can sometimes blame the user, "Why did they click that link? Wasn't it obvious that the URL was misspelled?", technologists can grumble at the lack of good cyber defences "Why didn't the user have MFA? Surely the main system should have detected the link as bad?", and executives are often more worried about the impact on the organisations bottom line, users trust in the org and the safety of the data.

It's only when we bring people together, in a space that is psychologically safe, where we build trust that someone asking such a question isn't seeking to assign blame, but is adding more context to the situation that we can start to find solutions to the problem that cross divides and ensure that the entire sociotechnical system works the way people expect.

As I've said before, one of the most interesting questions you can ask after an incident is not "what went wrong?", but "why didn't it go wrong any other time?". Our organisations are large complex systems that are often in a near continuous state of micro-failures, all of which are caught or managed by compensatory controls in some way. It's only when these failures align that things go wrong, and trying to find any specific single "root cause" is almost always a waste of time and effort.

We need systems within our organisations that enable people to reach out when there is a problem, that connect the humans together, and that encourage decision making empowered with all of the appropriate information. Checklists, policies, standards and documentation are all good proxies to enable people to make good decisions, but all of it is a pale imitation of an open conversation.

At the end of the day, most people in your org are good people, trying their best in trying times. We need to develop empathy for those around us, and remind ourselves that sometimes people won’t ask for your amazing insight and wisdom because they don’t know you have that experience, or they don’t want to bother you.

Time invested in getting to know your colleagues better is rarely wasted.

    Building Balanced Security Teams - Updated

    https://www.philvenables.com/post/building-balanced-security-teams-updated

    A overall balanced team is one where specialists, risk advisors and operational experts work together to deliver a finely tuned machine for risk identification / resolution, driving technical solutions and overall architecture design. They slip-stream work into business services and product design / operation. They run an industrial-scale operational core to make sure the work is constantly becoming more efficient and effective over time.

    At some level we should expect every individual to have a combination of these three skill-sets, but not in perfect balance. A specialist should be solidly a specialist but also appreciate the need to partner with risk advisors and operational people to make themselves more effective. Similarly, risk advisors and operational people should be sufficiently technical (and technically curious) to understand issues and the overall landscape of risks even if they’re not a full domain expert.

    At a specific sub-team level it’s not always important to be fully balanced as long as the overall organization overall is.

    I really like this model, and think that Phil’s assessment is broadly speaking right. Many of the problems in security I see come from an over-emphasis of just one type of person or skills within the team.

    The other thing that I’d call out here is that one dysfunction that I’ve seen in a few examples is where there is a balance of these skills across the organisation, but there either isn’t power or worse respect between these skill sets.

    When one voice is louder than the other, or gets to play a trump card in any disagreements means that your organisation isn’t making the right use of those skills. You need not just a balanced team, but one where they respect each others contributions, ask for support and work together to ensure that the business gets what it needs.

    Helicopter Management and Other Mistakes – charity.wtf

    https://charity.wtf/2023/06/19/helicopter-management/

    Systems thinking is a core skill for both managers and engineers. It’s not a skill we are born with; it takes a lot of practice and failure to develop good instincts for debugging complex systems. As an engineering manager, you may have spent 10+ years writing software and learning how computers work, but you have hardly begun to understand how business and organizational systems work. This explains a lot when it comes to the empathy gap between engineers and management, I believe.

    We spend a lot of time talking about empathy these days — empathy between teams, people, neurotypes; holding space for the fact that nobody is always at their best, etc. Yet engineers can still be incredibly dismissive and judgey towards management actions and organizational decisions.

    We see a decision that doesn’t make sense to us, or that we wouldn’t have made, and we write it off as being selfish, uninformed, incompetent, stupid, money-grubbing, bureaucratic, untrustworthy, craven, selling out. Or — maybe worst of all — we shrug and say something cynical about how this kind of thing always happens in business. Or they’re out to get us, or they never listen to us, or it shows how much they don’t give a shit about us.. Far be it to me to excuse corporate venality, or to try and blow smoke up your ass about your leaders’ motives. But in many, many of these situations, this actually represents a failure of systems thinking when it comes to imagining the complex business, corporate, and people systems your leaders are operating in.

    Great insight from Charity here, lots on how to consider the wider implications of being a manager, including reminders that managers aren’t purely there to be liked by their team. But this snippet about systems thinking is dead on the money.

    I spent years listening to technologists, and being on of those technologists, saying that the organisations I worked in didn’t value our opinion enough, and made dumb decisions that made no sense to me.

    But as Charity points out here, in many cases, those people were trying to do the best they could in the situation they had. It was far more likely that they had access to information that I either didn’t see, or more likely didn’t value. It’s possible that my input might have helped, but chances are that they didn’t know that I existed or that my skills could be applied in that situation.

    The more I look at modern organisations, the more I see the wider systems that make it difficult for the right information to flow to the right place at the right time. Solving that is a far harder problem than any technology solution can ever hope to solve

    Building security tools is the wrong approach

    https://crashoverride.com/blog/building-security-tools-is-the-wrong-approach/

    Things have looked up recently. It's not the shift left rhetoric which is based on myth , but the improvement in the speed and approach of modern SAST tools that allow them to be run inline of DevOps pipelines.

    Phase one was when there was so much friction you couldn't even give them to developers. Phase two was when you could give them to some developers but the friction was still unacceptable and phase three was when you didn't need to give them to developers because friction had reduced and they could, the key word being ‘could’, form part of the developer tool chain.

    Phase three has resulted in a meaningful effect on software security, but the reality is, especially when you consider the relative, previous levels of adoption, that we are still at a level when the effect on the big picture is relatively small, so the question is what can we do about it.

    Well we don’t need more bloody appsec tools for a start , but we can look at the adoption of SCA . For a long time, I have said that Github and probably AWS when they decide to run in the developer tools space, will become the dominant choice for security tools. When we look at SCA and particularly Dependabot, we see that the widespread adoption has been because it is mainly designed, deployed and positioned as a developer tool that automatically updates dependencie s. It solves dependency hell](https://en.wikipedia.org/wiki/Dependency_hell#:~:text=Dependency%20hell%20is%20a%20colloquial,versions%20of%20other%20software%20packages.)) and just so happens as a side effect to solve the vulnerable library security problem.

    What this means is that if we are to get true mass adoption of tools that can significantly improve security, they will have to be tools that first and foremost solve a ‘gunshot to the chest’ problem for software developers, and then solve a ‘gunshot to the chest’ problem for security teams as a side effect as well. Just reducing friction is not enough.

    This is of course far more complicated for security startups and far easier for developer tools companies who have many opportunities. We have tools like Playwright that are natural hosts for DAST, and tools like ESLint that are natural hosts for SAST. Check this out .

    I have had a personal revelation. If we want mass adoption of security technology and to have a truly meaningful impact on the state of software security, we have to stop building security tools and start building developer tools that have security features.

    I went into this expecting to strongly disagree with this. I’m a believer that security tools, especially ones built against the unix philosophy of “Do one thing and do it well” is what we need to ensure that security tooling is built into the pipeline.

    I still believe that there’s a need for that sort of thing and a product space that still has room for development and improvement.

    But I also storngly agree with the final line in here. We don’t need more security tools. What we need is tools for developers that solve problems they have, but do so securely, and embedding security into the development pipeline.

    Bureaucracy, fear, and damaging dogma

    https://visitmy.website/2023/08/16/bureaucracy-fear-and-damaging-dogma/

    I’ve been working on the GOV.UK Design System for just shy of 3 months now, and we receive frequent questions about needing ‘sign off’ or ‘approval’ to do things. For example, if you’re looking for a date picker to include in your service, the GOV.UK Design System doesn’t have one. But Digital Scotland does. So we were asked what sign-off was needed to use it.

    None. You don’t need any approval.

    Sure, you’ll get asked about using the design system in alpha and beta assessments, but you don’t have to. (There are many reasons why you should, but there are reasons why teams don’t.) And if you haven’t designed or implemented an accessible component, this will get picked up in an accessibility audit. But so will everything else.

    Another example: my service owner at NHS Digital was petrified of the alpha service assessment. They thought that if they failed the assessment, the service would be stopped. This caused them to focus more on boxes to tick in order to pass, rather than thinking about what good looked like, what our priorities should be, and how to empower teams to do their best work.

    This doesn’t feel good. Constraints should encourage creativity, not stifle it. When people feel they need to tick boxes, they’re not in the right headspace to innovate. To make government services radically better.

    We need to address the dogma, otherwise it’ll be damaging in the long-run. For an organisation that professes the benefits of open-working, agility, and co-design, it hasn’t done too well at creating psychological safety in the system.

    Another data point to remind us about how the intent of standard makers and process creators can easily get lost in the detail.

    This is about not just psychological safety, but about enabling and encouraging people within your organisation to feel some sense of ownership about their decisions. Standards and policies that say “You must do …” don’t support that, but principle based standards do enable that. But you need more than just the principles, especially if you work in an organisation where your security policies or HR policies are written in different styles. You need to give people a structure to enable good decision making that aligns with those principles.

    Marks out of ten – how are we doing after a decade of public digital transformation? – Matt Edgar writes here

    https://blog.mattedgar.com/2023/05/21/marks-out-of-ten-how-are-we-doing-after-a-decade-of-public-digital-transformation/

    Matt Jukes has posted a provocation on the tenth anniversary of his first role in a user-centred, agile digital government team. I realise that I too recently passed that milestone. There’s even a video in which you might glimpse a 10-years-younger me pointing at PostIt notes during the alpha of the service manager induction programme, which I was privileged to produce in my own first assignment for the Government Digital Service (GDS).

    Readers of my weeknotes will know that I’m now deeply wrapped up in the work of urgent and emergency care and the creation of a new national organisation for the NHS. (As always, this blog represents my own views, and not necessarily those of my employer.) I thought it might be time to take a broader look at what’s changed and what’s still to do in the wider domain of UK public sector transformation of which I’m still proud to play a part.

    This is a deeply thoughtful review of how digital transformation has landed in Government from someone who has been on the sharp end of a lot of it for the last decade or so.

    It’s easy to read this with a critical eye, looking at what we’ve missed, but as Matt sums up at the end, the question of how far we have come was really tested when Government had to react to Covid back in March 2020. The time, money and effort spent in this decade meant that work in major departments to change their systems to enable work from home, to support sudden policy shifts were all far more possible than they would have been with systems caught between multiple big suppliers and their rigid adherence to contracts.

    The Ups and Downs of 0-days: A Year in Review of 0-days Exploited In-the-Wild in 2022

    http://security.googleblog.com/2023/07/the-ups-and-downs-of-0-days-year-in.html

    The number of 0-days detected and disclosed in-the-wild can’t tell us much about the state of security. Instead we use it as one indicator of many. For 2022, we believe that a combination of security improvements and regressions influenced the approximately 40% drop in the number of detected and disclosed 0-days from 2021 to 2022 and the continued higher than average number of 0-days that we saw in 2022.

    Brainstorming the different factors that could lead to this number rising and declining allows us to understand what’s happening behind the numbers and draw conclusions from there. Two key factors contributed to the higher than average number of in-the-wild 0-days for 2022: vendor transparency & variants. The continued work on detection and transparency from vendors is a clear win, but the high percentage of variants that were able to be used in-the-wild as 0-days is not great. We discuss these variants in more depth in the “Déjà vu of Déjà vu-lnerability” section.

    In the same vein, we assess that a few key factors likely led to the drop in the number of in-the-wild 0-days from 2021 to 2022, positives such as fewer exploitable bugs such that many attackers are using the same bugs as each other, and negatives likeless sophisticated attack methods working just as well as 0-day exploits and slower to detect 0-days. The number of in-the-wild 0-days alone doesn’t tell us much about the state of in-the-wild exploitation, it’s instead the variety of factors that influenced this number where the real lessons lie.

    As always, really interesting research from Google’s security team.

    The heartening news is that browser vendors in particular have really started working on security improvements that attempt to eliminate entire classes of variants, which seems to have driven down the number of discovered 0-days for Browsers and making us all safer.

    The bad news is that in the mobile space, it’s clear that the application of patches included in the software supply chain is still taking far too long, and as such, exploits are found that include bugs that can have been publicly known for months rather than 0-days.

    Of course, all of this is still hard, and the use of 0-day capabilities is still tiny compared to the general exploitation of well known and well publicised vulnerabilities that enables attackers such as ransomware operators.

    The Great Google Experiment: Googlers Trapped in an Internet-Free Wonderland

    https://javvadmalik.com/2023/07/20/the-great-google-experiment-googlers-trapped-in-an-internet-free-wonderland/

    Now, here’s the plan: Google’s selected employees will find their internet access obliterated on their battle stations, save for internal web-based tools and Google-owned websites. They tried to make it mandatory for the chosen 2,500, but lo and behold, they received feedback. And by feedback, I mean a chorus of vehement objections. Who would have thought? So now, they’re kindly letting the employees opt out if they wish to tread the treacherous internet waters.

    And let’s not forget, dear comrades, that Google has a bone to pick with root access. “No root access!” they cry, like a rally cry against the unenlightened ones. It may make sense for some computer roles, but for developers, it’s like cutting off their caffeine supply. The withdrawal symptoms are real, my friends.

    But fear not, for the Googlers deemed worthy of this high-security program will still enjoy the spoils of Google-owned websites. It’s like being banished from a wild party but allowed to party in the garden shed. You won’t be able to Google search your way to enlightenment, but hey, writing documents, sending emails, taking notes, chatting—it’s like a virtual paradise. And let’s not forget, YouTube remains a glowing beacon in the darkness, just a few clicks away. Phew!

    This plan of removing internet access is a key part of the Privileged Access Management pattern for secure systems administration.

    It starts with the concept that a dedicated air-gapped laptop that manages your enterprise estate cannot be compromised if you don’t allow users to browse the web on it. After all, how can malware get to the terminal if the user cannot do email or internet browsing on the device.

    Sadly, something that has proven increasingly difficult in the current world of systems adminsitration, especially in modern technology companies, is the fact that most systems administration is supported through internet hosted tooling.

    Google is big enough to develop it’s own version of most of the tooling, but for normal companies, the idea of the system administrator being unable to access the internet, such as their Trello board, their Splunk log console, their Cloudflare control panel falls down very quickly.

    Instead the pattern either needs to work out what bits of the internet are considered the “management plane” and thus allowed, or like Google, develop internally hosted tools that can interact with those systems at a distance.

    The pattern is a good pattern from a security perspective, but we need to watch what the implementation actually means in our org, not just dump advice like “just don’t access the internet” onto people who will never be able to reach that bar to ensure it’s useful

    Precision Munitions for Denial of Service | Tales about Software Engineering

    https://beny23.github.io/posts/precision_munitions_for_denial_of_service/

    I’ve just demonstrated an easy mistake: I’m not describing a Denial of Service (DoS) attack, it’s a Distributed Denial of Service (DDoS) attack. The aim is to overwhelm the infrastructure, either the networking infrastructure or the application by sending more requests than can be handled.

    That’s akin to carpet bombing:

    • Drop lots of malicious requests on the target • Hope something hits • Can do it from great height • Doesn’t need much intelligence other than a rough location of the target

    But that is not the only way to take out a target. Rather than carpet bombing, these days air strikes are carried out differently. Guided missiles use a lot less explosives but strike the intended target much more precisely.

    Gerald presented this at the open evening of London’s 44Con, and I’ve heard nothing but good things from people who made it (I couldn’t make the first evening unfortunately).

    This talk really emphasises something that we’ve lost over time. Nearly 15 years ago when I was building systems of scale, it wasn’t anywhere nearly as easy to build a distributed denial of service capability, becuase compromise and hosting at scale wasn’t as cheap and plentiful. I learned about denial of service for exactly this kind of attack, things that were aimed at resource exhaustion through clever targeting of inputs that created significantly more work to process than it took to request.

    The fact that in the last years, it’s become trivial for script kiddies to buy or use botnets to carry out distributed denial of service means that defenders have lost the art of understanding the implications of a precision strike on their system because they don’t have to defend against it anymore.

    Anker finally comes clean about its Eufy security cameras - The Verge

    https://www.theverge.com/23573362/anker-eufy-security-camera-answers-encryption

    First, Anker told us it was impossible . Then, it covered its tracks . It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn’t answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams — among other questions — we would publish a story about the company’s lack of answers.

    It worked.

    In a series of emails to The Verge , Anker has finally admitted its Eufy security cameras __ are not natively end-to-end encrypted — they can and did produce unencrypted video streams for Eufy’s web portal, like the ones we accessed from across the United States using an ordinary media player .

    But Anker says that’s now largely fixed. Every video stream request originating from Eufy’s web portal will now be end-to-end encrypted — like they are with Eufy’s app — and the company says it’s updating every single Eufy camera to use WebRTC, which is encrypted by default.

    Security tools really need to be held to a higher standard than other tools. Far too often, the tools that security recommends you install, from anti-virus software to managed detection and remediation systems require administrator privileges but their companies don’t show the workings needed to build confidence.

    Anker advertised the Eufy cameras on the basis that they were more secure and more privacy aware than many other “cloud enabled” competitors. In reality, although they did exactly what was said on the tin, it appears that the actual security engineering simply wasn’t as good as you would expect.

    Gandalf | Lakera - Prompt Injection

    https://gandalf.lakera.ai/

    In April 2023, we ran a ChatGPT-inspired hackathon here at Lakera. Prompt injection is one of the major safety concerns of LLMs like ChatGPT. To learn more, we embarked on a challenge: can we trick ChatGPT to reveal sensitive information?

    🔵 The Lakera Blue Team gave ChatGPT a secret password. They spent the day building defenses of varying difficulty to prevent ChatGPT from revealing that secret password to anyone.

    🔴 In another room, Lakera's Red Team came up with many different attacks, trying to trick ChatGPT into revealing its secrets. They were successful at times, but struggled more and more as the day went on.

    If you’ve not played with Gandalf, you should give it a try. It’s a fun way to explore Prompt Injection and get some idea about what’s easy, what’s hard and what’s possible