Cyberweekly #38 - Digital transformation is hard
Published on Saturday, February 09, 2019
What is the strategy for doing digital transformation in a large organisation?
Do you run little agile projects in the midst of all the other major projects going forward? How do you get a budget for that project in a way that makes sense, and how do you engage with the organisational governance models? "Just trust me" generally isn't acceptable as a governance model, and yet it tends to be the one that agile teams push for.
The hardest parts of digital transformation are around the organisational borders of the teams trying this stuff. It's when the team needs a budget, approval to spend, approval to operate. Lots of bits of the organisation are focused on providing solutions for the old way of working, but the organisation might rely on those solutions existing, and you need some way to jury-rig the outputs of your agile programme so that they look satisfactory to the bureaucracy, without letting that bureaucracy strangle your digital transformation programme and prevent it from working.
Security is just one of those strangleholds, but it tends to be a significant one, because security people really dislike risk, and any transformation project is about taking risks. Security teams that are there to enable the organisation to take risks, and see their role as highlighting the risks and offering useful risk mitigation advice, will do better here than security teams who see their role as mitigating the risks themselves.
Unfortunately, implementing these new technologies can be a thorny process. In fact, research by McKinsey found that fewer than one third of digital transformation efforts succeed.
For the most part, these failures have less to do with technology and more to do with managing the cultural and organizational challenges that a technological shift creates. It’s relatively easy to find a vendor that can implement a system for you, but much harder to prepare your organization to adapt to new technology.
Digital transformation is hard, really hard, and quite likely to fail. We should all know that already. But it's most likely to fail because it requires a change to people, to organisational behaviours and to bureaucratic systems, yet most digital transformation projects I've seen (including 25 of the GDS digital exemplars) focused on the technology, on the "move to digital", rather than fixing those things.
It's stuff like this that makes me realise how important the hybrid nature of GDS's interventions was, from spend controls to the service manual, from appointing CDOs to updating the Treasury guidance for budgeting projects. It would have been easy for people to focus on any one of those things, but all of them need to change in concert to make digital transformation work.
Packaged frameworks are appealing because you can easily find this overwhelming. You want this Business Agility magic you keep hearing about but you don’t want to, or are unable to, challenge the corporate funding and governance processes, and whatever you do has to fit within strict departmental cost controls. So you start by sending a handful of project managers for Scrum Master certification, and maybe some business analysts for Product Owner training, then you try Scrum for a bit. Later you might try Scrum-of-Scrums, SAFe, LeSS, or similar with the best of intentions. If you can just prove it works in a contained pilot environment, the rest of the organisation will surely wake up and beat a path to your door!
The problem is that this kind of transformation requires a fundamental shift in how people think and how they operate. This in turn requires acknowledging that the way they are running things is based on assumptions and constraints that are no longer relevant to the modern business environment; that they are still in business in spite of, rather than because of, these inherited management and governance systems. No one likes to acknowledge that, even to themselves. Their subconscious minds vehemently resist this paradigm shift, which makes them easy prey for a vendor selling a product that promises said transformation while conveniently fitting their existing paradigm.
The vendors and consultants can do this regardless of whether it will deliver the results, safe in the knowledge they and their methods won’t be held accountable if it doesn’t, because the one thing everyone can agree on is that transformation is hard and at least we tried. Many proponents of these methods have a religiosity about them. Their method works; if you don’t believe this you are misguided, misinformed, or just antagonistic (I’ve been accused of all three); and if it doesn’t work then you are applying it incorrectly.
Scaling without a religious methodology by Dan is one of the best descriptions around of how to apply agile at scale, to your organisation rather than to a project. It reminds us how important all of these things are, and, of course, of the perceived value of bringing in consultants with a neatly packaged and branded "agile" solution.
However, organisations adopt SAFe because they want agile practices. So there is no need to have a Trojan Horse, because the organisations already want agile practices. In fact, the presence of SAFe is likely to put off exactly the experienced agile practitioners that the organisation is hoping to attract. As a result, they will have to fall back to their normal consultancies and system integrators that provide them with plug-compatible programming units.
So SAFe is a Trojan Horse. It's a way for traditional consultancies to pass themselves off as agile with no agile experience. A short SAFe training course introduces their existing consultants to a new over-constrained command and control mechanism.
This is an interesting point. Almost all of the agile experts I know tend to decry SAFe, primarily because they think that it offers something it can't deliver: a totally valid and usable scaled agile framework. If you employ it, even as a crutch to demonstrate the value of agile, you are sending a signal to the agile-aware community, and not a good one.
The security researchers who first discovered this vulnerability, Dylan and Me9187, told me that the vulnerability was just the tip of the iceberg when it came to sloppy security practices at Atrient. They saw casino WiFi network passwords stored in plaintext, user personal data stored in plaintext and no attempt to secure anything.
They even found Atrient's third-party contractors (based in India) posting Atrient's source code on GitHub and asking Stack Overflow questions about it, which made it obvious to the researchers that security was not being taken seriously.
This story will continue to develop over the coming week (the Twitter storm about it was still going yesterday when I last checked). However, we should be clear that some of these activities need context to be defined as secure or insecure. Posting your code on GitHub isn't necessarily insecure; in fact it can indicate a very mature and secure coding workflow. Asking Stack Overflow questions isn't necessarily insecure either.
However, depending on the context, these things can also indicate insecure practices, or can verify that insecure practices are going ahead.
(And of course, this is a textbook example of how not to respond to vulnerability disclosure. If only there were a freely available ISO standard for vulnerability disclosure.)
It’s worth watching this video just to see what good deepfakes look like. It’s deeply unsettling. Also note that this was done by an amateur using free and open-source tooling and a bit of compute power. What could an organisation with significant investment do? Today, probably nothing (deepfake technology is definitely cutting edge, and large organisations are bad at cutting edge), but this gives you an idea of what is coming in the next 3-5 years.
The opening ceremony on Friday was a long time coming: the building was originally scheduled to be finished in 2011 but this was delayed by construction issues, and for other reasons.
In 2015 thieves stole taps from toilets across the building, resulting in flooding and damage, in an incident nicknamed “Watergate” in the German press.
Can we talk about the fact that 4 years after an intelligence agency building was supposed to open, some people managed to get on site with enough equipment to steal plumbing supplies (taps) and take them off the site?
Given this, would you be confident that none of the foreign states with a vested interest in placing bugs into the fabric of the building had done so?
This open challenge invites researchers all over the globe to submit countermeasures against fake (or "spoofed") speech, with the goal of making automatic speaker verification (ASV) systems more secure. By training models on both real and computer-generated speech, ASVspoof participants can develop systems that learn to distinguish between the two.
This will hopefully help to counter the research into creating deepfakes. A good detection capability desperately needs to be built, and it's good to see that Google made the dataset publicly available, so the research doesn't just feed back into Google.
Conflating the war exclusion with a non-physical cyber event like NotPetya grows out of two factors: (1) NotPetya inflicted substantial economic damage on several companies, and (2) the US and UK governments attributed the NotPetya attack to the Russian military. These two factors alone, however, are not enough to escalate this non-physical cyber-attack to the category of war or “hostile and warlike” activity. These terms of art that have been considered by courts, and the resulting decisions, which are now part of the Law of Armed Conflict, make it clear that much more is required to reach the conclusion of “warlike” action.
This will be interesting to watch to discover whether cyber insurance is valuable or not. Reasonably, insurers generally do not insure against "acts of war", but in the cyber realm, where attribution is so hard, it's going to be difficult to see the value of insurance if it won't cover actions that can be claimed to be state-sponsored.