Cyberweekly #208 - Process is the backbone of organisations

Published on Sunday, September 04, 2022

Welcome back! Apologies for not sending a newsletter out last week, but I was on holiday, tanning myself on the beach. As a reward, here's a slightly longer-than-normal newsletter for you.

Processes are the bones of an organisation, and like skeletons, they are often brittle and prone to breaking when bent the wrong way.

Those of us in digital, technology and cybersecurity tend to influence the way that "the business" does its work through setting policies, defining standards, and writing guidance and advice. If we're any good, we also have a role to play in providing enabling services that make it easier for business units to meet those standards and policies.

But we're often really bad at setting out the why of those standards and policies.

When I worked at the UK's Government Digital Service, we did a lot of work on a digital service standard that tried to define the why in short, pithy ways. This work also influenced the UK Government's push for "principles over policies", which led to the creation of things like the NCSC's Cloud Security Principles and the Technology Code of Practice.

All of these documents share some common criteria: they set out some core "always true principles" (or as close as they can get), and they try to set out the why of each principle. The intent is that teams working in contexts that couldn't have been imagined by the authors, from issuing passports to arranging for burial at sea, can reason out the right way to behave based on those core principles.

That system works really well for high-performing teams who have organisational latitude, management support and operational independence.

But it doesn't work very well for supplier-dominated teams who want to be told what to do, and it doesn't work well for teams who are fighting to change the culture in their organisation. It also imposes an operational overhead that can become unmanageable if you are running hundreds of simultaneous services and need some alignment between them to ensure that they operate effectively.

The missing link for me, and one of my regrets that I learnt it too late to put into action while still setting standards at GDS, is that if you have these principles, you also need to set standards and write clear guidance for low-capability teams, and for people who don't have the capacity to re-approach every problem from scratch. Government produces a lot of that as well, but what it fails to do is link that guidance back to the principles, and make clear why the author thinks that the guidance is suited for "a large majority of contexts".

Without that bit of connective tissue, what we end up with is guidance that becomes brittle, unyielding, and brutal to teams' ability to deliver.

If you are writing policies and standards in an organisation at the moment, think long and hard about whether you can encode the why of each policy, to enable it to be effectively challenged. Ensure that policies have a lifetime associated with them, in which they must be updated or withdrawn. Think early about the exceptions policy: how it will work, and how you encode exceptions back into the policy effectively. Be agile about your policy making, rather than holding with religious zeal to the belief that you are right about everything. Because if there's one thing I learnt in those years at GDS, ranging across government, it's that there's far more complexity in context out there than anyone in the centre ever thinks.
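To make that concrete, here's a minimal sketch of what encoding the why, the lifetime and the exception route might look like as a machine-readable policy register. Everything here (the names, the fields, the example policy) is invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Policy:
    """One entry in a hypothetical machine-readable policy register."""
    id: str
    statement: str         # the rule itself
    why: str               # the motivation, so the rule can be challenged
    principles: list[str]  # the core principles this policy serves
    owner: str             # who handles challenges and exception requests
    review_by: date        # past this date, the policy must be renewed or withdrawn
    exceptions: list[str] = field(default_factory=list)  # feed these back into review

    def is_stale(self, today: date) -> bool:
        # A stale policy gets re-affirmed or withdrawn, never silently kept.
        return today > self.review_by

mfa = Policy(
    id="SEC-017",
    statement="Internet-facing services must enforce multi-factor authentication.",
    why="Stolen passwords are the most common initial access route.",
    principles=["make-compromise-difficult"],
    owner="security-standards-team",
    review_by=date(2023, 9, 1),
)
print(mfa.is_stale(date.today()))
```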

    ADD / XOR / ROL: Essays about management in large(r) organisations (1): Process and flexibility

    http://addxorrol.blogspot.com/2016/09/essays-about-management-in-larger.html

    What is a healthy way to deal with processes?

    1. Realize that they are a form of “organisational memory”: They are often formed as reaction to some unpleasant event - with the intent of preventing this event from repeating. It is also important to realize that unchecked and unchallenged processes can become organisational “scar tissue” - more hindrance than help.
    2. Keep track of the exact motivation for creating each process -- the “why”. This will involve writing half a page or more, and checking with others involved in the creation of the process that the description is accurate and understandable.
    3. The motivations behind the process should be accessible to everybody affected by it.
    4. Everybody should know that company processes are supposed to support, not hinder, getting work done. Everybody should feel empowered to suggest changes in a process - ideally while addressing why these changes will not lead to a repeat of the problem the process was designed to prevent.
    5. People should be empowered to deviate from the process or ignore it - but frequent or even infrequent-but-recurring exceptions are a red flag that the process needs to be improved. Don't accumulate "legacy process" and "organisational debt" through the mechanism of exception-granting.
    6. Everybody should be aware that keeping processes functional and lean is crucial to keeping the organisation healthy. Even if a process is unreasonable and obstructive, most people instinctively try to accept it - but the first instinct should ideally be to change it for the better. Constructively challenging a broken process is a service to the organisation, not an attack.
    7. It may be sensible to treat processes a bit like code - complete with ownership of the relevant process, and version control, and handover of process ownership when people change jobs. Amendments to processes can then be submitted as text, reviewed by the process owner, discussed, and eventually approved - much like a patch or removal of dead code.

    Keeping an organisation healthy is hard. The most crucial ingredient to keeping it healthy, though, is that the members of the organisation care to keep it healthy. Therefore it is absolutely critical to encourage fixing the organisation when something is broken - and to not discourage people into "blindly following the process".

    This is one of my bugbears around a lot of organisational process, and also around guidance and standards within organisations, especially complex multi-domain organisations (such as Governments).

    If you write a piece of guidance or a standard that says “You should use Privileged Access Workstations” for example, then I think it’s a requirement that you spell out what the motivation for creating that standard is. This can be as simple as hooking together principles and guidance so that it’s clear what principles the guidance is trying to achieve, or it can be fairly complex interlinking of existing policies and processes.

    But this creation of the “why of a directive” enables people to sensibly question it, and also to take and apply local contextual hints.
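    To make that concrete, here's a minimal sketch of guidance front-matter that hooks a directive back to the principles it serves. All of the names and fields are invented for illustration:

    ```python
    # Hypothetical front-matter for a piece of guidance, linking the
    # directive back to the principles it is trying to achieve.
    guidance = {
        "title": "Use Privileged Access Workstations",
        "applies_to": "teams administering production systems",
        "serves_principles": [
            "minimise-credential-exposure",
            "separate-admin-from-everyday-computing",
        ],
        "why": (
            "Admin credentials used on everyday devices are a common "
            "route to whole-estate compromise."
        ),
    }

    # A directive with no stated motivation can't sensibly be questioned,
    # so a register like this would reject it.
    assert guidance["why"] and guidance["serves_principles"]
    ```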

    Don't scar on the first cut - Signal vs. Noise (by 37signals)

    https://signalvnoise.com/archives2/dont_scar_on_the_first_cut.php

    The problem with policies is that they compound and eventually add up to the rigidity of bureaucracy that everyone says they despise. Policies are not free. They demean the intellect of the executor (“I know this is stupid, but…”) and absolve the ability to deal with a situation in context (“I sympathize, but…”).

    Here’s a curve ball: When something goes wrong, have a chat about it, embed the learning in the organizational memory as a story instead of a policy. Stories have context and engage the listeners, so next time a similar situation arises, you’ll be informed by the story and act wiser.

    Policies are codified overreactions to unlikely-to-happen-again situations. A collective punishment for the wrong-doings of a one-off. And unless you want to treat the people in your environment as five year-olds, “Because The Policy Said So” is not a valid answer.

    A little bit of wisdom from 2006, and the root of a lot of my thinking around policies and guidance ever since.

    Shifting from planning to learning | by Ashley Evans | Good Trouble | Aug, 2022 | Medium

    https://medium.com/good-trouble/shifting-from-planning-to-learning-74e217561f65

    Through conversations with service teams, agile product managers, compliance areas, and leadership, here’s what we’ve identified.

    The key problem is that protocols are not contextual; they are:

    • Disproportionate to risk level — e.g. a single go-live process for all products, whether they are built for the authenticated or non-authenticated space and regardless of service features
    • Disconnected from service design phase/maturity of research, design, or development — e.g. requiring details around data collection and use during the Discovery or Alpha phase
    • Repetitive — e.g. requiring teams to fill out the same information each time they interact with a new form
    • Duplicative — e.g. collecting the same or very similar information across different forms or compliance processes
    • Unclear processes — e.g. what it means to complete a protocol is not defined
    • Lacking user participation and public accountability — e.g. public participation in deciding what products/services to prioritize, visibility into feedback being implemented, proactively releasing compliance documentation, etc.

    A reminder that protocols which lack context lead to wasteful, duplicative, and repetitive processes, especially when the surrounding context changes. Furthermore, as discussed elsewhere, it’s difficult for anyone to feel they can change a process if they don’t understand what it’s trying to achieve or why it exists.

    For self control, ‘always’ rules are better than ‘sometimes’ rules | by Alex Komoroske | Medium

    https://medium.com/@komorama/for-self-control-always-rules-are-better-than-sometimes-rules-4c38be32f318

    But when I think about it more, I realize that even though my brain is a softie, I’m still sometimes able to exert self control. The trick that’s worked for me is to not involve that scheming brain in the loop. A good rule of thumb is that when it comes to self control, an ‘always’ rule is better than a ‘sometimes’ rule.

    I’ll explain what I mean with an example. It was the summer of 2007, and I had just begun an APM internship at Google. I was still in college student mode where when you see free food you gorge, because you never know when you’ll see free food again.

    This mindset did not work well with Google’s micro-kitchens, to put it mildly. That summer the micro-kitchens had large gravity bins full of peanut M&Ms. Every time I walked by one, I felt compelled to fill up a cup to snack on. Unfortunately, I walked by the micro-kitchens a few times a day. A few months later I had put on more than 20 pounds. Yeah, it was gross.

    But that changed a few months later. I enacted a single rule that was broad but unflinching: I would always go to the gym every day. Importantly, there were no quality requirements — I didn’t have to do much at the gym — but the every day requirement was not up for debate. Compliance with the rule was black and white.

    Suddenly any chance of excuses evaporated. I could never make the case that it wasn’t possible to fulfill those goals. Even if I was feeling under the weather, I could go sit on the bike for 20 minutes at a low heart rate. I knew that if I gave my brain any chance to make excuses, it would. So I didn’t give it the chance.

    […] see how you can systemize the exception to keep your rule a (slightly more complicated) ‘always’ rule. Back then I modified my rule:

    “I must always work out every day, unless it is the first or last day of a trip.” If I was on anything longer than an overnight trip I required myself to find a way to exercise. That modification again made compliance black and white, and made it easier to keep the streak going.

    When writing guidance, policies and standards, it’s worth doing the extra work to make the policies really simple and to make them ‘always’ rules.

    This is really hard to do, because if your policy is going to be used in lots of different business domains, coming up with an always rule is much, much harder. But doing that hard work centrally to define the always rules, and then delegating the more context-specific rules down to teams, who can then make their own “always rules”, means that the ruleset becomes much easier for people to follow and adhere to.
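    As a sketch of that delegation (the rules and team names are invented): the centre defines a small set of unconditional rules, and each team layers its own context-specific ‘always’ rules on top, so every check stays black and white:

    ```python
    # Central "always" rules: unconditional, no judgement calls at check time.
    CENTRAL_RULES = [
        "All services always use single sign-on.",
        "All data in transit is always encrypted.",
    ]

    # A team translates its local context into its own always rules,
    # rather than granting itself case-by-case exceptions.
    TEAM_RULES = {
        "payments-team": [
            "Every change to the payments service always gets a second review.",
        ],
        "content-team": [
            "Every published page always passes the accessibility checker.",
        ],
    }

    def ruleset_for(team: str) -> list[str]:
        # Compliance stays black and white: a team either meets every rule or it doesn't.
        return CENTRAL_RULES + TEAM_RULES.get(team, [])

    print(ruleset_for("payments-team"))
    ```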

    PGPP Beta Launch

    https://invisv.com/articles/pretty-good-phone-privacy.html

    PGPP helps to protect users against types of tracking that previously couldn’t be prevented: the tracking of your location by your globally-unique IMSI.

    PGPP changes this by enabling you to prove that you are supposed to get service from the mobile network (authentication) and then get connected to the mobile network using a random, time-limited IMSI. We presented this privacy-preserving system in peer-reviewed research we published last year. When using PGPP mobile data service, your IMSI changes periodically, while your device continues to get data service from the mobile network.

    I’d be curious as to how they are maintaining this. Presumably they are acting as a virtual mobile network operator, randomising the IMSIs that they hand out to accounts, and rotating them on a regular basis.
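    If that guess is right, the heart of it might look something like this sketch. To be clear, this is speculation on my part: INVISV haven't published their implementation, and every value and name here is invented:

    ```python
    import hashlib
    import time

    # Invented numbers: a pool of IMSIs the virtual operator controls.
    IMSI_POOL = [f"90170{n:010d}" for n in range(1000)]
    ROTATION_SECONDS = 3600  # hand out a fresh IMSI every hour

    def current_imsi(subscriber_token: bytes) -> str:
        """Derive this hour's IMSI for an already-authenticated subscriber.

        The subscriber proves entitlement to service separately, so the IMSI
        never needs to identify them beyond one rotation window. A real
        system would also have to manage collisions, where two subscribers
        land on the same IMSI in the same window.
        """
        epoch = int(time.time()) // ROTATION_SECONDS
        digest = hashlib.sha256(subscriber_token + epoch.to_bytes(8, "big")).digest()
        return IMSI_POOL[int.from_bytes(digest[:4], "big") % len(IMSI_POOL)]

    print(current_imsi(b"alice-subscription-token"))
    ```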

    The downside to that as a user is that if it’s not a big enough community, and if the community includes bad actors, then you might find yourself being bulk targeted because you are now using an IMSI that was used by a bad actor a few hours before.

    I’m less sold on the protection from global law enforcement (as I’ve said before, I’m less convinced of the complete distrust of authority that many privacy activists maintain), but I’m interested in the ability to reduce the amount of data that your actual mobile phone provider can gather on you, and can then sell or let insiders access. Reducing what the telecoms providers themselves can collect is a useful protection.

    The story behind Google’s in-house desktop Linux | Computerworld

    https://www.computerworld.com/article/3668548/the-story-behind-google-s-in-house-desktop-linux.html

    To make all this work without a lot of blood, sweat, and tears, Google created a new workflow system, Sieve. Whenever Sieve spots a new version of a Debian package, it starts a new build. These packages are built in package groups since separate packages often must be upgraded together. Once the whole group has been built, Google runs a virtualized test suite to ensure no core components and developer workflows are broken. Next, each group is tested separately with a full system installation, boot, and local test suite run. The package builds complete within minutes, but testing can take up to an hour. Once that's done, all the new packages are merged with the newest gLinux package pool. Then, when Google decides it's time to release it into production, the team snapshots that pool. Finally, it rolls out the fresh release to the fleet. Of course, it’s not going to just dump it on users. Instead, it uses Site reliability engineering (SRE) principles such as incremental canarying to make sure nothing goes awry.

    A reminder that as an organisation, you create organisational patterns. For Google, this means automating all the things, and using compute power to test and refine changes incrementally.
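    As a toy sketch of the staged flow the article describes (Sieve itself is internal to Google, so every function here is an invented stand-in):

    ```python
    # Packages that must upgrade together are built and tested as one group,
    # then merged, snapshotted and canaried out to the fleet.

    def build(pkg: str) -> str:
        print(f"building {pkg}")
        return f"{pkg}.deb"

    def test_group_virtualized(artifacts: list[str]) -> None:
        print(f"virtualized suite (core components, dev workflows): {artifacts}")

    def test_group_full_install(artifacts: list[str]) -> None:
        print(f"full install, boot and local test suite: {artifacts}")

    def release(group: list[str], pool: list[str]) -> None:
        artifacts = [build(pkg) for pkg in group]
        test_group_virtualized(artifacts)
        test_group_full_install(artifacts)
        pool.extend(artifacts)   # merge into the newest package pool
        snapshot = tuple(pool)   # freeze the pool as a release candidate
        print(f"canarying a snapshot of {len(snapshot)} packages to part of the fleet")

    release(["glibc", "gcc-12"], pool=[])
    ```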

    TikTok’s Poison Pill - Study Hacks - Cal Newport

    https://www.calnewport.com/blog/2022/08/01/tiktoks-poison-pill/

    The social media giants of the last decade have cemented a pseudo-monopolistic position in the internet marketplace because they serve content based on massive social graphs, constructed in a distributed fashion by their users, one friend or follow request at a time. It’s too late now for a new service to build up a network of sufficient influence and complexity to compete with these legacy topologies.

    TikTok, by contrast, doesn’t depend on this type of painstakingly accumulated social data. It instead deploys a simple but brutally effective machine learning loop onto the pool of all available videos on its platform. By observing the viewing behavior of individual users, this loop can quickly determine exactly which videos will most engage them; no friends, retweets, shares, or favorites required. The value of the TikTok experience is instead created by a unique dyadic mind meld between each user and the algorithm.

    If platforms like Facebook and Instagram abandon their social graphs to pursue this cybernetic TikTok model, they’ll lose their competitive advantage. Subject, all at once, to the fierce competitive pressures of the mobile attention economy, it’s unclear whether they can survive without this protection.

    This is a really interesting point. Although you can follow people on TikTok, a follow is treated as a weak signal by the algorithm, compared to, say, Instagram, where following someone means you want to see everything they produce. This difference of approach totally changes how you consume information, but it also changes the power of influencers and content creators, because they can’t rely on amassing followers and then changing tack; they need to continue to create engaging content.
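    A toy illustration of the difference (invented weights, obviously not either platform's real scoring): on a graph-based feed a follow dominates the ranking, while on an engagement-driven feed it's only a weak prior:

    ```python
    def graph_feed_score(followed: bool, engagement: float) -> float:
        # Follow-graph feed: a follow is the dominant ranking signal.
        return (10.0 if followed else 0.0) + engagement

    def engagement_feed_score(followed: bool, engagement: float) -> float:
        # TikTok-style feed: observed engagement dominates; a follow is a weak prior.
        return (0.5 if followed else 0.0) + 10.0 * engagement

    # A creator with loyal followers but a dull video wins on the first feed
    # and loses on the second.
    print(graph_feed_score(followed=True, engagement=0.1))       # 10.1
    print(engagement_feed_score(followed=True, engagement=0.1))  # 1.5
    ```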

    Lloyd’s of London Exclude Nation-Backed Cyber Attacks from Insurance - Red Goat

    https://red-goat.com/lloyds-of-london-exclude-nation-backed-cyberattacks-from-insurance/

    So, even if we can do attribution accurately, and this is in no way a priority for most in the heat of an incident, then how do we really show they are state backed? If we can’t show this accurately this places us in a slightly precarious position insurance wise. In the US the burden would fall on the insurers to prove the exception applies but that’s not the case in every country. So, it could fall on the victim to show the reverse.

    It has been claimed in the sea of analysis on this decision that the attack won’t necessarily need official attribution to be excluded from the policy coverage. The insurer can decide, according to Threat Post, if it is “objectively reasonable to attribute cyber-attacks to state activities”. So the insurer could claim that the attack is excluded because it is “reasonable” to attribute it to a nation state. Not the clarity we perhaps wanted!

    […]

    The cyber community have often commented on the tendency of organisations (public or private) to claim that it was a “nation state that got me” perhaps as a means of trying to remove some of the responsibility from themselves. Perhaps now this may do a total 180 flip in fear that the insurer will use this to exclude liability?

    The reality is that this is probably a reaction to the increase in demand and risks from the evolving threats. Cyber is unique in that way. The threat can evolve rapidly as can your exposure to it. The Russian invasion of Ukraine and the constantly fluctuating fear of the so called “cyber war” likely has insurers worried. Maybe rightfully so. It raises some interesting questions for the future though and as always leaves us with more questions than answers.

    An unsurprising move from Lloyd's, and from insurers in general. One point stood out to me though: we see a lot of organisations compromised by relatively simple attacks and then claiming that it was nation-state-sponsored attackers, so it “couldn’t have been our fault”.

    This push from insurers means that if an organisation claims it was a state-backed attack, then the insurance won’t pay out; but it may also mean that if they say it wasn’t, they need to prove that they are meeting a decent standard of cyber security.

    If we are lucky, this will mean an increase in the level of basic cyber hygiene in companies in order to meet insurance requirements.

    André Staltz - Time Till Open Source Alternative

    https://staltz.com/time-till-open-source-alternative.html

    Let’s be a bit skeptical about this data for a moment; we can learn a few truths from the details. This list of open source projects has a mix of complex projects, simple projects, popular projects, and just-5k-GitHub-stars projects.

    For instance, take BitKeeper (proprietary) versus Git (open source). Anyone who is a developer today knows what Git is, while BitKeeper is just a small anecdote in Git’s history. Contrast that with Apple Siri – known by everyone with an iPhone – versus SEPIA Framework, which has… 70 stars on GitHub.

    It is clear that these open source projects are at various stages of maturity and industry leadership, and it’s a long shot to say that SEPIA Framework will disrupt Siri. Just because there exists an open source alternative to something doesn’t mean that this alternative is yet of high quality. There is often a long journey for these projects before they are ready for the mainstream. That’s a whole other aspect to measure.

    That said, TTOSA is still a powerful measurement because it tells us it doesn’t take long until you have some kind of barely usable alternative to a piece of proprietary software. If we measured “Time Till High Quality Open Source Alternative”, we would figure out that… duh… it takes a lot more time. But maybe we would also find a downward trend in that dataset. And that’s a powerful trend. High quality open source should send a chill down the spine of business dudes, and they already exist: Linux, VLC, Firefox, Git, OBS.

    There are some bits I don’t like about this analysis, because it neither measures quality (as referenced), nor uptake. The fact that some open source software is vastly popular and some exists but is barely used is also relevant in this world.

    It’s not necessarily true that you only get a year or so until an open source alternative comes along, but it’s an interesting trend that we see increasing numbers of products where there is an open source alternative, and that the time between the product existing and the open source alternative existing is coming down.

    Useful analysis, and I’d love to see someone take it a bit further and determine whether there are specific market segments, audience segments or categories where quality and uptake are higher than others.

    Less is more agile | Tales about Software Engineering

    https://beny23.github.io/posts/my_take_on_engineering_room_9/

    Talking of gospel

    (Dave quoting Allen): Agile has become a priesthood where the priests don’t understand the rituals

    I think this is a very apt observation. I’ve had my share of agile coaches teaching developers about sprints and planning sessions, show and tells, backlog prioritisations and retrospectives and standup meetings. It is ironic that these are often referred to as ceremonies and the dictionary definition of ceremony is

    “an action performed only formally with no deep significance”

    Now, I’m not saying that we should do away with all planning or retros or standups, far from it, but these things have to be worked out by the team and decided what works for them, in their context. Note, I am also not saying that agile coaches are useless, I’ve worked with plenty of clever people that really know agile, but there have been a few that ended up beating any joy out of software development with the scrum guide.

    Allen mentioned another problem with the ceremonial approach: Don’t wait for the next scheduled retrospective, if something goes wrong, talk about it now. Not in two weeks time when nobody remembers the details anymore.

    A nice writeup from a great conversation between Dave Farley and Allen Holub. It’s also a nice reminder that agile is about “being agile”, not about following any specific process. If an agile process works for you, then great, but that doesn’t mean that you must force people to comply with an empty process for no good reason.

    Cyber Signals: Defend against the new ransomware landscape - Microsoft Security Blog

    https://www.microsoft.com/security/blog/2022/08/22/cyber-signals-defend-against-the-new-ransomware-landscape/

    RaaS is often an arrangement between an operator, who develops and maintains the malware and attack infrastructure necessary to power extortion operations, and “affiliates” who sign on to deploy the ransomware payload against targets. Affiliates purchase initial access from brokers or hit lists of vulnerable organizations, such as those with exposed credentials or already having malware footholds on their networks. Cybercriminals then use these footholds as a launchpad to deploy a ransomware payload against targets.

    The impact of RaaS dramatically lowers the barrier to entry for attackers, obfuscating those behind initial access brokering, infrastructure, and ransoming. Because RaaS actors sell their expertise to anyone willing to pay, budding cybercriminals without the technical prowess required to use backdoors or invent their own tools can simply access a victim by using ready-made penetration testing and system administrator applications to perform attacks.

    The endless list of stolen credentials available online means that without basic defenses like multifactor authentication (MFA), organizations are at a disadvantage in combating ransomware’s infiltration routes before the malware deployment stage. Once it’s widely known among cybercriminals that access to your network is for sale, RaaS threat actors can create a commoditized attack chain, allowing themselves and others to profit from your vulnerabilities.

    I’m not sure I’d really describe ransomware-as-a-service as new; it’s been going at scale for around two years now, and ransomware affiliates have been a thing for even longer, but it’s definitely getting worse.

    The critical thing to remember is that, as Microsoft says here, it enables people who have strong intent to compromise your network to make use of capabilities that others have developed. That means that “Bad Actor X wouldn’t bother targeting me” is far less of a defence, because one actor is trading intent for capability, and developing a marketplace that values that.

    Of course, right now all of the eyes are on monetarily focused intent, so someone will pay for access if they think they can make more money by holding you to ransom. But there are other actors with other intents out there, from commercial espionage to unhappy customers and ex-employees to single-issue action groups, who may start weighing up whether buying initial access is worth their time.

    The CyberSignals issue itself is worth a read for their recommended defences, but it’s the same list of things that we always talk about: authenticate identities, monitor for stuff, harden internet facing systems, patch. You can find Microsoft’s guide to the things that deter and defend against ransomware actors at https://www.microsoft.com/security/blog/2022/05/09/ransomware-as-a-service-understanding-the-cybercrime-gig-economy-and-how-to-protect-yourself/

    Socialite, Widow, Jeweller, Spy: How a GRU Agent Charmed Her Way Into NATO Circles in Italy - bellingcat

    https://www.bellingcat.com/news/2022/08/25/socialite-widow-jeweller-spy-how-a-gru-agent-charmed-her-way-into-nato-circles-in-italy/

    The next day, 15 September 2018, a woman with a long, Latin-sounding name bought a one-way ticket from Naples, Italy, to Moscow. For around a decade, this individual had travelled the world as a cosmopolitan, Peru-born socialite with her own jewellery line. Later that evening, she landed in Moscow and is not known to have left Russia since. She flew on a passport from one of the number ranges Bellingcat had outed the previous day – in fact, hers only differed by one digit from the passports on which Boshirov and Petrov’s GRU boss had flown to Britain just six months earlier.

    The name on her passport was Maria Adela Kuhfeldt Rivera, and as Bellingcat and its investigative partners have discovered, she was a GRU illegal whom friends from NATO offices in Naples had for years believed was a successful jewellery designer with a colourful backstory and chaotic personal life.

    […]

    Led by all of these clues, our team was able to obtain a fresh photograph of Olga Kolobova from a whistleblower with access to Russia’s database of drivers’ licences. That photo – which appeared to be from 2021 – provided a convincing match between the faces of “Maria Adela” and Olga Kolobova.

    A positive match between photographs of “Maria Adela” and Olga Kolobova using the Microsoft Azure facial recognition tool. However, facial recognition software, while useful, is not sufficient to prove conclusively that two individuals are the same person in an investigation such as this. Reporters then searched for Olga’s phone number on WhatsApp and found solid proof that Olga and “Maria Adela” were indeed the same person.

    The picture that “Maria Adela” had used as her Facebook profile had also been used by Olga as her profile image on WhatsApp. “Maria Adela” had also posted the picture to her Instagram page.

    An absolutely astonishing story that could have been something from The Americans. What’s most astonishing about all of this is that someone carrying out a decade-long false life would also commit so many persec failures, including using the same photo from their alias on their (presumed) real identity’s WhatsApp profile.

    Inventing Anna: The tale of a fake heiress, Mar-a-Lago and an FBI investigation

    https://newsinteractive.post-gazette.com/anna-de-rothschild-trump-mar-a-lago-security-fbi-investigation/

    It’s not clear how many trips Ms. Yashchyshyn made to the former president’s home, but Mr. Lawrence said she made enough of a splash that members of the Trump entourage recognized her photo immediately.

    “She had been there more than once,” he said.

    Ron T. Williams, a former Secret Service agent who is now a corporate security consultant, said there are many reasons that Ms. Yashchyshyn may have avoided detection, including the possibility that agents didn’t conduct a background check.

    “Should she have been run for a background check — yes,” he said, but that “doesn’t mean it happened.”

    A basic check would have shown that no such person exists with the Rothschild name and her 1988 birthdate.

    In fact, an online resource devoted to the Rothschild family lists descendants dating back hundreds of years, but the name Anna de Rothschild does not appear anywhere.

    Gary McDaniel, a longtime Florida security consultant, said because Mar-a-Lago is not just a private club but Mr. Trump’s home, the level of protection should be elevated beyond the security protocols typically afforded former presidents and also extend to the entire premises.

    “I want to know everybody who comes into that facility, their name, date, date of birth,” he said. “And I want them somewhere on a roster because we never know when he is going to walk into that crowd. She should have been on a list” at the “pre-screening level.”

    The idea that a person with a fake identity can get into the former president’s estate — even if they’re looking to find investors — “is not OK,” he said. “Who else can get in there? Who is behind that person? It’s just wrong on so many [levels].”

    Mr. Marino, the former Secret Service supervisor, said the revelations of her visits to the sprawling estate underscore the challenges that his former agency faces in protecting Mar-a-Lago.

    Another astonishing story about fake identities and potential spying, and yet another young woman who seems to have worked her way into senior circles. This is the second story in a few months about people in these circles who are simply not who they appear to be, which raises the question of just how rampant this is. Of course, if you aren’t hanging around in these circles, then it’s still very unlikely to ever affect you.