Cyberweekly #144 - People are at the heart of security

Published on Sunday, April 11, 2021

The famous joke goes that the only secure computer system is one that is powered off, and preferably in a sealed box buried in a hole in the ground.

But that computer system doesn't work, because it has no usability and no ability to help the people around it do their jobs.

All security systems (for the moment, until the rise of our AI overlords that is) are inherently based on people. Our security processes have to be implemented by people, and our security systems have to be overseen, managed and run by people.

In security, some regard people as the weakest link. They say that the system is secure until the user clicks a link, downloads a file, or turns off the security system. But in reality, most of the time people act as the strongest link in our security systems. For every time a user breaks the security of a system, there are millions of times that users have not done so, whether by themselves, or through invisible internal "shadow security" processes that mean that things are double checked and validated.

We need to remember that people are part of our security systems, and that means understanding what drives people; it means working out what they are trying to do on a day-to-day basis (and no, it's not to break the system, nobody wants to do that... well, most people don't anyway). Security needs to understand and acknowledge that it's one part of a wider team that includes HR, Technology, Operations, Finance and all of those other non-delivery activities clustered around the delivery arm of the business, attempting to ensure that delivery gets done safely, cheaply, easily and effectively.

That might mean learning to be better listeners; it might mean spending our time training people to do our jobs and demystifying the "sacred dark arts of security"; and it might mean accepting that security is a tradeoff, and that sometimes we are willing to take a risk because that's how the business gets done.

The focus this week is on a selection of people-oriented articles and blog posts that have been building up in my backlog of reading for a number of months now. The common theme that runs through them all is that we need to think carefully and deliberately about how we set up the processes by which we all interact with each other.

    Redefining Threat Modeling: Security team goes on vacation | Segment Blog

    https://segment.com/blog/redefining-threat-modeling/

    In our security Utopia, the Engineering teams perform the threat modeling exercises, and the Security team are optional attendees. Security can then pick and choose which sessions they attend based on interests, priorities, and size of attack surfaces.

    The security “goal state” is important to model, because it solves many problems. In our current world, the Security team’s availability becomes a bottleneck, but in our goal state, no engineering team is blocked by a lack of Security personnel, because the engineering teams can run their own threat modeling sessions and the Security team are optional attendees. In our current world, threat modeling can take up additional engineering hours to review, where in the goal state, threat modeling activities are built into the design review, which is already a standard meeting in the development cycle. And while threat modeling is hard to scale when only a handful of people are responsible for it, it scales beautifully when everyone can do it. 

    Developers know their features better than anyone: they understand what works well, where the gaps are, and where they might run into trouble. In contrast, Security Engineers can only spend a few hours on a specific feature when they assess it, but are expected to provide in-depth security concerns about that feature. It makes a lot more sense for the feature experts to discuss potential security concerns, rather than the folks that are looking at the problem from the outside.

    This is a great example of how to uplift the security of a company, not through compliance regimes or auditing tools, but through the simple assumption that people want to do a good job but don't have the tools to do it. Investing in a training program that enables development teams to self-serve, such that the security team can act purely as consultants when needed, and potentially as spot checkers for quality control, frees the security team up to focus on developing and delivering even more enabling tools for teams.

    My only concern is the additional cognitive load that this places on a development team. This needs to be not just a job that security adds to an already busy backlog, but an accepted part of delivery and product development, one that is costed into the delivery programme and given space, time and priority within the delivery teams.

    Why every team needs a Delivery Manager (DM) – Tim Abell – UK Software Developer

    https://timwise.co.uk/2019/07/08/why-every-team-needs-a-delivery-manager/

    In any organisation beyond about 20 people the bureaucracy starts to sneak in, layers of rules are added as a business tries to avoid repeating previous mistakes, and as you hit enterprise scale it’s positively stifling, to the point that people issues and an inability to even open Microsoft Word (TM) (R) ((c) 1908) on a work computer without signing 3 forms in blood become nearly impossible.

    [...]

    As a developer on an Agile team working inside a government team (not for the first time), I am eternally grateful for all the things I don’t have to worry about because we have a DM. It’s not that I’m not capable, I’ll happily turn my hand to the biggest problem of the day for a team at any time, but the code won’t write itself and that’s what I was hired to be a pro at.

    This has some great descriptions of what a good Delivery Manager does and why it's slightly distinct from a Project Manager or a Scrum Master role (although it encompasses much of the work of both).

    I've always liked the view of the role as that of a servant leader, there to lead the team by serving the needs of those hired into the team with specialised skills and abilities, ensuring that they always have just what they need, just when they need it. To a certain degree, one can imagine this like a game of curling. The development team itself is like the curling stone: it's heavy, it has inertia, and the aim is to get the stone into the house. The delivery managers, senior management and leadership are flying around the stone, trying to sweep its path ahead, as well as performing the initial throw.

    What Do Fighter Pilots and Incident Management Have in Common? | Transposit

    https://www.transposit.com/blog/fighter-pilots-and-incident-management/

    The Key to a Debrief: Transparency

    Fly on this journey with us for a minute. You're a mid-level officer, and you've just returned from a training mission with a group of officers, along with a two-star general. As you debrief the mission, you notice in the video that the general was 100 miles off target coming home and still had the "Master Arm" switch in the arm position (meaning weapons are still live) when you're supposed to go "Master Arm" safe 50 miles off target. Do you tell the general—who is in charge of your pay, promotions, and demotions—their misstep?

    When Bourke asked this question, many of us cringed at the idea of telling an authority figure they'd made a mistake. But then he presented the potential consequences of keeping lips sealed. The F16s your squadron was flying has the capacity to drop 2,000 pound bombs and can shoot 6,000 rounds a minute. As you approach the base, one press of a button could be a deadly mistake that takes out your own troops. With this added information, the answer was obvious. Transparency is not optional.

    The practice of debriefing makes teams stronger and more adaptable when the next mission (or incident) comes. But the special sauce of the debrief requires a not-so-secret but often hard to come by ingredient: full transparency.

    During a debrief, hierarchy should be stripped away and egos set aside. "When the door to the debriefing room closes, something magical happens," Bourke said. "Name tags come off our chest, and we hold the debrief where there is no hierarchy, and the sole purpose is to learn and get better." Bourke urges teammates to be "their own worst enemy," exposing their own mistakes and committing to make a change going forward. Rather than placing blame, teammates gain confidence in their fellows.

    Ensuring that your staff have a free space in which they feel they can highlight errors, regardless of the power structures at play, is important to having an effective retrospective or debrief.

    It can be incredibly hard to actually carry out though. Even if you have a room or space where you declare that the hierarchy is stripped away, if you are the most senior person in the room, you will always have the "role power", that is to say, an invisible sticky note on your forehead that says "I can fire you".

    It's therefore important that retrospectives of this kind are structurally supported by even more powerful people or systems, ones that ensure that if those in more senior positions in the meeting take action or respond to something inappropriately, they will be challenged on it. Without that structural support, you cannot create a "flat hierarchy" in such a meeting.

    What blocks you from listening? | LeadDev

    https://leaddev.com/communication-relationships/what-blocks-you-listening

    The habit of problem-solving blocks listening

    Engineering leaders usually have a technical background, and a large part of that was spent solving problems. Over time, repeatedly identifying problems and finding solutions forms a strong habit that is hard to break. In a conversation, this habit guides engineering leaders in the wrong direction. Instead of listening to another person’s needs, the leader listens for a problem. When they feel they have found a problem, their inner dialogue starts searching for an optimal solution.

    Not all people start a conversation with the desire for solutions, so be cautious about automatically offering one. In many cases, people start conversations because they seek empathy. Seeking empathy is particularly common when people share a close relationship. For example, a person in a relationship might come home from a difficult day at work, and start a conversation with their partner describing a dramatic event. The person sharing is often not looking for something to ‘do’; they want to hear reassurance. Instead of responding with advice, a more appropriate response might be, ‘You’re right. That situation sounds awful,’ or, ‘That person acted inappropriately. I would be upset about what they did too.’

    This has certainly felt true in my experience. I spent decades being someone people only asked questions of when they wanted technical answers, and as such I'm trained to assume that a question is an indication that someone wants me to solve something specific.

    But when you become a manager of people, that's no longer true. You as a manager are not always expected to be the problem solver, the solution giver. Someone is talking to you because they want to feel heard, because they want to work through a problem, or because they are expressing the things that they are trying to solve. Attempting to hand out a solution prevents you from focusing on listening to your report, makes you over-directive, and makes you likely to be considered a "micromanager".

    For someone like me, this is probably one of the hardest lessons to learn, and I'm still terrible at it: I instinctively want to jump to solving problems, and it requires thought and care to try not to do that.

    Creative Good: Why I'm losing faith in UX

    https://creativegood.com/blog/21/losing-faith-in-ux.html

    But now Amazon has embraced a new kind of UX, as shown by the Amazon Prime cancellation process. What should be a single page with a "Cancel my subscription" link is now a six-page process filled with "dark patterns" - deceptive design tricks known to mislead users - and unnecessary distractions. This isn't an accident. Instead, and this is the point of Decade 3, there's a highly-trained, highly-paid UX organization at Amazon that is actively working to deceive, exploit, and harm their users. UX has completely flipped now, from advocating for the user to actively working against users' interests. To boost profits, UX has turned into user exploitation.

    And Amazon is hardly unique. I've written plenty about Facebook, Google, and other tech giants - for example, Calling the culprits by name from last month. But where Amazon leads in UX, the rest of the tech industry generally follows.

    This effective rant about the failure of UX represents a pattern I see repeated over and over, from DevOps to agile, UX to user research, security to DevSecOps.

    Your initial forays into something exciting will be really positive. Companies that buy in early tend to be believers in a thing, and so everyone is either really bought in or at least fundamentally understands the thing.

    Your second wave of users will have seen the successes of the first, but won't really understand the thing in the same way. They don't believe in your holy war to change everyone's thinking; they just see that organisations that quack get bigger profits or deliver results, and so they start copying the methods and practices. Sometimes this results in successes, and other times you'll get conference talks on "deploying X in the automotive industry" or similar, which talk about all the ways in which they changed thing X, defeating your much-loved and deeply held ideals.

    This is what Mark calls Decade 2, or the slide. That was people adopting UX, but the seniors not listening to the findings, not only ignoring the recommendations, but actively pushing for the findings to be "the correct kind of findings".

    Decade 3, or step 3 in this process, is when people realise that they can use this system "for evil", or within the money-making purpose of the organisation, but without the net positive effect envisioned by their leaders.

    Looking across all of the interesting paradigmatic shifts in software over the last 20 years, this feels like an inevitable curve that almost all adoptions undergo. I wonder what, if anything, we can learn from that.

    Systems design explains the world: volume 1 - apenwarr

    https://apenwarr.ca/log/20201227

    Tanya Reilly has an excellent talk (and transcribed slides) called Being Glue that perfectly captures this effect. In her words: "Glue work is expected when you're senior... and risky when you're not."

    What she calls glue work, I'm going to call systems design. They're two sides of the same issue. Humans are the most unruly systems of all, and yet, amazingly, they follow many of the same patterns as other systems.

    People who are naturally excellent at glue work often stall out early in the prescribed engineering pipeline, even when they'd be great in later stages (staff engineers, directors, and executives) that traditional engineers struggle at. In fact, it's well documented that an executive in a tech company requires almost a totally different skill set than a programmer, and rising through the ranks doesn't prepare you for that job at all. Many big tech companies hire executives from outside the company, and sometimes even from outside their own industry, for that reason.

    There's a lot in this essay to unpack, as it covers the danger of decentralised and flat hierarchies (control systems that are implicit are more biased than explicit ones), disruptive versus sustaining innovation and network effects.

    But what's key is that all of these are forms of network effects, and that understanding them, grokking them, and working out how to apply them takes a particular kind of attention and focus. In theory, that's the kind of experience that being a senior manager will give you, although since effectiveness in these areas is super tough to assess, I suspect that many senior managers are terrible at these things but take credit for systemic successes that happened in spite of them.

    But some of this thinking is anathema to your typical software engineer, because it gets in the way of just delivering code and good products, and worse, it makes every trade-off feel like you won't have enough information to make the decision.

    For new lead developers, architects and people at the strategic decision making points in their careers, I think this essay covers a lot of concepts that are well worth learning, investigating further and trying to understand.

    Lessons I Learned at Google | by Dave Rensin (@drensin) | Mar, 2021 | Noteworthy - The Journal Blog

    https://blog.usejournal.com/lessons-i-learned-at-google-a1d489f163b

    Google strives to hire the smartest people on Earth. In my experience most Googlers have excellent bullshit detectors. So it seems to me that the only winning long-term strategy (especially as a manager) is to be completely transparent with people. It’s really not as hard as you think.

    Every question can be answered with:

    • The answer
    • “I don’t know”
    • “I know the answer but won’t/can’t share it with you because…”

    The goal is that the people you work with feel safe enough that they can ask you really hard questions without self-editing. If all the hard questions in your next team Q&A are anonymous then something is broken.

    This is generally a good set of interesting lessons learned, but the focus on transparency is something that I think is really valuable for building trust within your teams.

    You are Going on a Quest – Rands in Repose

    https://randsinrepose.com/archives/you-are-going-on-a-quest/

    This. Forever.

    You’re right. I said four roles. Thanks for paying attention.

    The fourth role is by far the most important. It’s the role the vast majority of engineers will follow in their careers, and I’m going to call it “This. Forever.” The role you have right now is the thing you are going to be doing forever.

    Yup. You read that right.

    Facts. The vast majority of engineers will not become engineering managers. It sure hasn’t felt that way for me for the past two decades, where I’ve spent my time building the leadership detritus to mint new managers out of necessity.

    Unsurprisingly, engineers begin to believe the only path is that of management in these start-up scenarios. It’s the only way to maintain relevance in a rapidly evolving situation from everything they’re seeing. As a primary contributor to this erroneous perception, I apologize. We managers shine so much light on management’s necessity that we forget that leadership comes from everywhere.

    This really hits the nail on the head of something I've been thinking about for a while.

    Too often, we assume that success in a career is achieved through promotion into managerial and executive ranks. The Peter Principle articulates the problem with this quite well: in short, promoting people out of doing something they are good at is a terrible idea, or, more colloquially, people get promoted to their level of incompetence.

    Instead we need to remember that we should be rewarding effective workers not with promotions, but with additional pay, holidays, perks, exposure, publication or other motivators. This is the impetus behind creating a strong "Individual Contributor" track for career structures, which can end up with Fellow-level individual contributors who don't have to manage people or run programmes, but instead are paid to think, to work on areas that interest them, and to be productive and good at what they do.

    Why does the London Underground still not have Wi-Fi in tunnels?

    https://www.wired.co.uk/article/london-undeground-wi-fi

    Blame physics.

    An accessible write-up on why the London Underground (Tube) network - which is around 156 years old - doesn't (but soon may) have wireless data and WiFi in the tunnels between stations.

    A somewhat more pessimistic analysis (quite reasonably not covered in the article) would be of the associated risk assessment work that would be conducted by Transport for London, the City of London Police (and others) on what the security implications may be of wireless signals in less accessible tunnels.

    (Joel) My dad worked diligently at the National Policing Improvement Agency (NPIA) many years ago when emergency services communications were overhauled after communication difficulties during incident response. He didn't go into sensitive details but I remember his frustrations around the tunnels back then as well.