Cyberweekly #200 - Issue 200

Published on Sunday, June 26, 2022

I can't really believe that I've managed to write 200 of these, and that people continue to subscribe week after week.

This has been a somewhat idiosyncratic project that started about 4 years ago, a few weeks before I flew out to California for a conference.

I had found myself over a few months saying the same phrase over and over again to people, "This article I read talked about...". And pretty consistently people would say "I haven't read that, do you have a link?".

I realised that very few people around me read the same stuff that I did, and that I often had no easy way to point back to what I had read, or where I had read it. I read voraciously, but I never journaled or kept track of what I read, which made sharing it, or even going back and reviewing it, difficult.

I discussed doing this with a few people and, inspired by a couple of other newsletters such as DevOpsWeekly, figured: why not create a weekly newsletter that sums up what I read? I started gathering links, just in pinboard.in, and then at the end of the week I wrote my first newsletter. I'd signed up to TinyLetter [ed: wrong word in emailed edition] because Dan Hon used it for his newsletter.

That first newsletter shows my varying interests, which is probably the thing that makes this newsletter unique. There were links to security guidance from the NCSC, because of my work at the time with the UK government's digital services; links to eFail, the vulnerability in PGP encryption; several stories on data breaches; something on risk management; and a link to watch the UK's first ever TV advert for MI6.

What wasn't clear then was just what I wanted to achieve, but the mix of a link, a bit of commentary from me, and an introduction felt right, and it clearly seemed to work. The second newsletter added the pull quote from the article, and while that made my life much harder, since I had to work out where to store it, I think it really added value to my commentary, because you can see, without reading the article, which bit caught my attention.

My first ever subscriber was bob (lowercase b, in his personal corporate style, hi bob!), along with a number of people I knew from a security-minded Slack instance that I was on. But I never really sought to advertise it at all. I tweeted about it, and then more or less left it to manage itself.

Four years later, I've moved email platform. I've migrated how I store links from Pinboard, to Instapaper, to a bit of Python I wrote myself running on Google App Engine (later moved to Google Cloud Run), to Notion, my current store for links. I've continued to gain subscribers pretty steadily over the last 4 years, but without any advertising or publicity; it's been entirely word of mouth.

And that's because, to me, the real value of the newsletter is not really in your ability to read it. It's in the fact that it forces me to read articles more critically, to determine whether I can summon the will or interest to comment on them, and it has caused me, every week for the last four years, to sit down and systematise the way that I consume the vast amount of security news out there.

But, disappointingly to my ego, it's not just about me.

I hope and strongly suspect that you find it valuable, and that concept isn't new for me. Nearly 12 years ago, at a hack day at the Guardian, I worked with two other developers on a product we called "The Social Guardian", a hackday project designed to let you assemble your own collection of stories that you'd read and recommend them to friends via your social feed. It was inspired by watching people in a pub reading a physical newspaper and showing each other articles, and wanting to replicate that online. But further conversations with media and technology luminaries and columnists at the Guardian convinced me that the future of online news lay not just in widening the pool of people who could write for the Guardian (a project at the time called Mutualisation would test that theory to destruction), but in the real value of curation.

I worked with some of the local journalists, who wanted features to enable them to select articles from other journalists' RSS feeds to showcase on their homepages (now sadly gone). These journalists had tiny niche audiences (in Guardian newspaper terms), but their audiences trusted them as people who would read, find and point out the news that was relevant to them. The long tail of potential niche audiences and curators felt like a real missing piece of the puzzle.

That idea never really went anywhere with the editorial bigwigs at the time; we had too many other ideas, and it was relegated to the traditional bin that hackday ideas go into: the projects that sound interesting but will never get funding.

But the concept and idea of curation has really stuck with me. Because the growing flood of newsletters means that you could be off reading CloudSecList, or This week in security, or The CyberWire, or... well, the list goes on. And some of those newsletters will be brilliant, and some will simply be RSS feeds of adverts and reissued press releases.

The value in CyberWeekly is not that it's just a roundup of articles; it's that it's my roundup of the articles that I found interesting and thought were worth sharing. The power and the trust come not from me being a good writer, or from you wanting to read the bits that I write; it's that you probably follow me because our interests align somehow, so if I thought something was interesting, you probably will too.

The fact that I get value from reading all those articles (and if you look at Notion, you'll see a lot of the ones that I read and didn't feature) means that I find it generally self-rewarding and useful to write this newsletter, and hopefully you find value from it too.

That said, next week, for the 201st issue, I'll be including a survey to try to get a better understanding of what you find interesting, valuable and useful about the newsletter. I've changed it based on feedback over the years (longer quotes, more comment, moving to Substack and then to its own non-tracking webpage), but those comments have come from individuals, so I want to know where I should be spending my time. Secondly, I'm going to experiment with some promotion, probably some paid tweets or similar. Finally, I'm going to look into what a sponsored section or advert would look like, so I'll be asking what you would find the least offensive. The main reason for that is that hosting the webpage costs me a small amount of money each month, and I'd like to recoup that small outlay if at all possible.

I'm still passionate about reading, and writing, and while this has been a very meta newsletter, there's still a good collection of links further down.

Hope you enjoy Cyberweekly, and it'll be much more back to normal next week.

    The Ultimate List of Developer Newsletters | Draft.dev

    https://draft.dev/learn/the-ultimate-list-of-developer-newsletters

    Newsletters are one of the most effective ways to reach an audience, even with the proliferation of various forms of media.

    In developer marketing, newsletters play a unique part in your arsenal of marketing tools. Software development is constantly evolving. Breaking changes, new tools, and updated best practices are added every day, so keeping up can be incredibly hard.

    Curated newsletters help developers keep up because a set of experts have already sorted through the content and picked the best stuff. Developers don’t have to worry about being overrun with information while keeping tabs on everything that’s changing.

    In addition, newsletters are typically trusted by readers who opt in to receive them. The fact that they provided their email signals interest, engagement, and trust. Within the developer community - a segment that’s known for its skepticism of hardcore advertisements - having that trust factor can make a big difference in reaching leads.

    So in this piece, I wanted to offer a collection of a wide variety of developer-focused newsletters. I’ve included each newsletter’s category, sponsorship rates, and contact information in case you want to pitch them stories or sponsor them.

    I’m only slightly heartbroken to not be featured in this list. There’s a good point here that many of these newsletters are simply marketing, which can reduce the value. But quite a few of these newsletters are ones that I read as well, and they give interesting insights into developer culture at the moment.

    The Collapsing Quality of Dev.to | Lane's Blog

    https://wagslane.dev/posts/collapsing-quality-of-devto/

    Building a great platform is hard, but building an engaging and worthwhile community on that platform is much harder. There is a constant yin/yang relationship between content moderation and censorship needs, whether most of the moderation decisions are made by the platform itself or are crowdsourced by the users.

    Dev.to did an amazing thing by focusing its branding and growth strategy around newer developers.

    The content on Dev.to was almost exclusively entry-level JavaScript content, by beginners, for beginners. It all went south when the community reached a point where writers realized that it’s easy to write low-effort content that will be liked by brand-new programmers, even if you’re brand new yourself. I’m not saying new devs shouldn’t be writing, I absolutely think they should. It’s that the return on investment of writing clickbaity listicles on Dev.to is much better than writing something new, interesting, or insightful.

    An interesting view on the rise of low quality articles on free websites. I’ve noticed this as well, but I’m a lot more forgiving in my interpretation. There are a lot more junior or young developers who are finding a problem or area for the first time and writing about it. That means that we’re always far more likely to see entry level content, by beginners, for beginners in almost all online publishing.

    The missing thing for me, as I’ll talk about this week, is the value not of moderation, but of curation. Some of those beginner articles will be really good and really useful. Their existence isn’t a problem, and they don’t need to be moderated; what’s needed is more expert people to curate them into collections that are more interesting, or more relevant.

    DriftingCloud: Zero-Day Sophos Firewall Exploitation and an Insidious Breach | Volexity

    https://www.volexity.com/blog/2022/06/15/driftingcloud-zero-day-sophos-firewall-exploitation-and-an-insidious-breach/

    Earlier this year, Volexity detected a sophisticated attack against a customer that is heavily targeted by multiple Chinese advanced persistent threat (APT) groups. This particular attack leveraged a zero-day exploit to compromise the customer's firewall. Volexity observed the attacker implement an interesting webshell backdoor, create a secondary form of persistence, and ultimately launch attacks against the customer's staff. These attacks aimed to further breach cloud-hosted web servers hosting the organization's public-facing websites. This type of attack is rare and difficult to detect. This blog post serves to share what highly targeted organizations are up against and ways to defend against attacks of this nature.

    […]

    While gaining access to the target's Sophos Firewall was likely a primary objective, it appears this was not the attacker's only objective. Volexity discovered that the attacker used their access to the firewall to modify DNS responses for specially targeted websites in order to perform MITM attacks. The modified DNS responses were for hostnames that belonged to the victim organization and for which they administered and managed the content. This allowed the attacker to intercept user credentials and session cookies from administrative access to the websites' content management system (CMS). Volexity determined that in multiple cases, the attacker was able to access the CMS admin pages of the victim organization's websites with valid session cookies they had hijacked.

    Interesting compromise of a firewall component in the network. It’s slightly unclear exactly how the original compromise was carried out, but once in place, the attacker can use the firewall, designed to protect the network from attackers, as a pivot point into the network. Furthermore, the position of the firewall, inspecting traffic that leaves the network, means the attacker could use it to capture bearer tokens of administrative staff accessing external administrative systems.
    Of course, I’m sure that the network administrators and security team insisted that the firewall be able to intercept TLS with a corporate trusted certificate in order to inspect outgoing traffic for bad actor traffic, which likely meant that the compromised firewall was far more capable of accessing staff traffic than it would otherwise have been.

    Incident Management Guide

    https://incident.io/guide/

    Every company needs a plan for when things go wrong. I've written these plans many times now, and every time I've wished for a reference that reflects the way companies actually work today. So here it is — our many years of collective knowledge and experience distilled into a practical guide for your whole organisation. Enjoy!

    This is a nice guide into building a good incident response plan for your teams. I note that there are some slight differences between doing security incident response versus operational incident response, primarily related to the priority of actions such as taking the system down versus restoring system functionality.

    In an operational incident, the priority is to restore service, and then remediate the cause. In a security incident, the priority is to find, contain and evict the attacker, while trying not to destroy evidence of their attack chain.

    But, those details aside, this guide covers all of the useful coordination functions that are similar between both incident response types, including important information on how to arrange on-call, how to call an incident and some of the things you must do while the incident is running.

    Cloudflare outage on June 21, 2022

    https://blog.cloudflare.com/cloudflare-outage-on-june-21-2022/

    In order to be reachable on the Internet, networks like Cloudflare make use of a protocol called BGP. As part of this protocol, operators define policies which decide which prefixes (a collection of adjacent IP addresses) are advertised to peers (the other networks they connect to), or accepted from peers. These policies have individual components, which are evaluated sequentially. The end result is that any given prefixes will either be advertised or not advertised. A change in policy can mean a previously advertised prefix is no longer advertised, known as being "withdrawn", and those IP addresses will no longer be reachable on the Internet. While deploying a change to our prefix advertisement policies, a re-ordering of terms caused us to withdraw a critical subset of prefixes. Due to this withdrawal, Cloudflare engineers experienced added difficulty in reaching the affected locations to revert the problematic change. We have backup procedures for handling such an event and used them to take control of the affected locations.

    This is very similar to what happened to Facebook last year. BGP is a super powerful capability, and one that incidentally often doesn’t have good security safeguards. Critically for many companies, because it affects the underlying routing of packets on the network, getting it wrong can isolate your engineers from the very network they need in order to detect and fix the problem. Cloudflare had backup procedures and kicked them into action, so the outage was measured in minutes and hours rather than days, but for an organisation that fronts a significant chunk of the web, any outage that isn’t measured in seconds is severe!
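
    The quoted explanation is worth making concrete. Here’s a minimal sketch, in Python, of how sequential evaluation of policy terms means that a re-ordering can silently withdraw a prefix; the prefixes and terms are entirely made up for illustration, not Cloudflare’s actual configuration:

        # Minimal sketch of sequential BGP policy-term evaluation.
        # The prefixes and terms are hypothetical, not Cloudflare's configuration.

        def evaluate(prefix, terms):
            # Walk the policy terms in order; the FIRST matching term decides
            # whether the prefix is advertised, which is why re-ordering the
            # terms can change the outcome for a given prefix.
            for matches, advertise in terms:
                if matches(prefix):
                    return advertise
            return False  # no term matched: not advertised

        CRITICAL = "192.0.2.0/24"  # hypothetical "critical" prefix

        # Intended order: the specific allow for the critical prefix comes first.
        intended = [
            (lambda p: p == CRITICAL, True),            # always advertise the critical prefix
            (lambda p: p.startswith("192.0."), False),  # withdraw the rest of the block
        ]

        # Re-ordered deployment: the broad withdraw term now matches first.
        reordered = list(reversed(intended))

        print(evaluate(CRITICAL, intended))   # True  -> advertised
        print(evaluate(CRITICAL, reordered))  # False -> silently withdrawn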

    Spyware vendor targets users in Italy and Kazakhstan

    https://blog.google/threat-analysis-group/italian-spyware-vendor-targets-users-in-italy-and-kazakhstan/

    This campaign is a good reminder that attackers do not always use exploits to achieve the permissions they need. Basic infection vectors and drive by downloads still work and can be very efficient with the help from local ISPs. To protect our users, we have warned all Android victims, implemented changes in Google Play Protect and disabled Firebase projects used as C2 in this campaign.

    How Google is Addressing the Commercial Spyware Industry

    We assess, based on the extensive body of research and analysis by TAG and Project Zero, that the commercial spyware industry is thriving and growing at a significant rate. This trend should be concerning to all Internet users. These vendors are enabling the proliferation of dangerous hacking tools and arming governments that would not be able to develop these capabilities in-house. While use of surveillance technologies may be legal under national or international laws, they are often found to be used by governments for purposes antithetical to democratic values: targeting dissidents, journalists, human rights workers and opposition party politicians. Aside from these concerns, there are other reasons why this industry presents a risk to the Internet. While vulnerability research is an important contributor to online safety when that research is used to improve the security of products, vendors stockpiling zero-day vulnerabilities in secret poses a severe risk to the Internet especially if the vendor gets compromised. This has happened to multiple spyware vendors over the past ten years, raising the specter that their stockpiles can be released publicly without warning. This is why when Google discovers these activities, we not only take steps to protect users, but also disclose that information publicly to raise awareness and help the entire ecosystem, in line with our historical commitment to openness and democratic values.

    Interesting deep dive into a commercial-spyware-enabled phishing campaign and how it plays out. As Google says, the takeaway starts with “attackers will use basic infection tools if they can”; commercial zero-day capabilities were then used for further escalation on the target device.

    Learning to Play Minecraft with Video PreTraining (VPT)

    https://openai.com/blog/vpt/

    The internet contains an enormous amount of publicly available videos that we can learn from. You can watch a person make a gorgeous presentation, a digital artist draw a beautiful sunset, and a Minecraft player build an intricate house. However, these videos only provide a record of what happened but not precisely how it was achieved, i.e. you will not know the exact sequence of mouse movements and keys pressed. If we would like to build large-scale foundation models in these domains as we’ve done in language with GPT , this lack of action labels poses a new challenge not present in the language domain, where “action labels” are simply the next words in a sentence. In order to utilize the wealth of unlabeled video data available on the internet, we introduce a novel, yet simple, semi-supervised imitation learning method: Video PreTraining (VPT). We start by gathering a small dataset from contractors where we record not only their video, but also the actions they took, which in our case are keypresses and mouse movements. With this data we train an inverse dynamics model (IDM), which predicts the action being taken at each step in the video. Importantly, the IDM can use past and future information to guess the action at each step. This task is much easier and thus requires far less data than the behavioral cloning task of predicting actions given past video frames only , which requires inferring what the person wants to do and how to accomplish it. We can then use the trained IDM to label a much larger dataset of online videos and learn to act via behavioral cloning.

    Absolutely astonishing development. The problem of there being so much free but unlabelled content around has always been difficult to manage. The nice thing about a constrained system like Minecraft is that you only need a small amount of tagged content to work out the model of how inputs turn into outputs. That wouldn’t work for typical computer vision problems, because the way the original pictures are taken is almost unconstrained, but there are lots of domains where I think this could be reapplied successfully to develop new AI models.
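
    To make the pipeline in the quote a little more tangible, here’s a toy sketch of its three stages, with trivial frequency-counting stand-ins where OpenAI use large neural networks; the data and function names are invented for illustration, and this is not OpenAI’s code:

        from collections import Counter, defaultdict

        def train_idm(labelled_clips):
            # "Train" an inverse dynamics model: short clip of frames -> action.
            # Stand-in: remember the most common action seen for each middle frame;
            # per the article, the real IDM is a neural net that can use past AND
            # future frames, which is why it needs far less labelled data.
            seen = defaultdict(Counter)
            for frames, action in labelled_clips:
                seen[frames[len(frames) // 2]][action] += 1
            def idm(frames):
                key = frames[len(frames) // 2]
                return seen[key].most_common(1)[0][0] if key in seen else "noop"
            return idm

        def pseudo_label(idm, unlabelled_videos, window=3):
            # Use the IDM to attach an action label to every step of every video.
            steps = []
            for video in unlabelled_videos:
                for t in range(len(video) - window + 1):
                    frames = video[t:t + window]
                    steps.append((frames, idm(frames)))
            return steps

        def behavioural_clone(labelled_steps):
            # "Train" a policy: most recent frame -> action (no future frames allowed).
            seen = defaultdict(Counter)
            for frames, action in labelled_steps:
                seen[frames[-1]][action] += 1
            return lambda frame: seen[frame].most_common(1)[0][0] if frame in seen else "noop"

        # Tiny contractor-labelled dataset: (frames, recorded keypress).
        small_labelled = [(("a", "b", "c"), "jump"), (("c", "d", "a"), "mine")]
        # Much larger unlabelled corpus scraped from the internet (frames are just symbols here).
        big_unlabelled = [tuple("abcda" * 4), tuple("cdabc" * 4)]

        idm = train_idm(small_labelled)
        policy = behavioural_clone(pseudo_label(idm, big_unlabelled))
        print(policy("b"))  # the cloned policy now predicts an action from frames alone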

    How Threat Actors Hijack Attention: The 2022 Social Engineering Report  | Proofpoint US

    https://www.proofpoint.com/us/blog/threat-insight/how-threat-actors-hijack-attention-2022-social-engineering-report

    Proofpoint researchers analyze key trends and behaviors in social engineering throughout 2021 that highlight some common misconceptions people may have about how criminal or state actors engage with them, including:

    • Threat actors may build trust with intended victims by holding extended conversations
    • Threat actors expand abuse of effective tactics such as using trusted companies’ services
    • Threat actors leverage orthogonal technologies, such as the telephone, in their attack chain
    • Threat actors know of and make use of existing conversation threads between colleagues
    • Threat actors regularly leverage topical, timely, and socially relevant themes

    The 2022 Social Engineering report looks at what services are frequently abused, such as Google Drive or Discord; how Proofpoint sees millions of messages directing people to make phone calls as part of the attack chain; and why techniques like thread hijacking can be so effective.

    A good reminder that advice that runs counter to actual attacker behaviour is bad advice. “Don’t trust links in emails from unknown senders” assumes that attackers have not built up a pre-existing relationship or other pretext for contacting the user.

    Attackers can and will use psychological techniques to try to get you to click that link, open that file, or download that software. Our systems need to ensure that they don’t fail the moment a user clicks a link, not train our users not to click bad links!

    Introducing Tailscale SSH · Tailscale

    https://tailscale.com/blog/tailscale-ssh/

    Normally, Tailscale connections are based on your node key’s expiration — so that you re-authenticate to the tailnet regularly, but not as part of every interaction. (You can also disable node key expiry for servers.) For some more sensitive operations, you really do want to verify that a human is on the other end of the connection. (On the internet, nobody knows you’re a dog, and it really is harder to type your password with paws.)

    Tailscale SSH check mode requires the user to have recently re-authenticated to Tailscale before establishing the connection. By default, this is a 12-hour check period — so if you’re connecting to various log servers to debug an outage, you can keep working throughout your day, uninterrupted. If you’re dealing with a particularly sensitive application or set of permissions, then you can set a much shorter check period — you might only need 15 minutes to access your database and identify which customers are affected by a bug. You can require a check on any Tailscale SSH connection and set the desired check period as part of your SSH access rules. For example, what if you only wanted Alice to be able to connect as root on the production server, as long as she authenticated in the last hour?

    […]

    So: Say hello to Tailscale SSH — and say goodbye to managing SSH keys, setting up bastion jump boxes, and unnecessarily exposing your private production devices to the open internet.

    I keep coming back to Tailscale and the amazing stuff that they are doing in rethinking internet-scale authentication. The root of all of Tailscale is the trust that users’ devices can manage individual identity. The check mode feature of the new SSH capability means that you can not only validate the device identity, but also require the user to have recently reauthenticated to the Tailscale login system, which means it’s not only a trusted device, but probably a trusted human as well. The vast benefit of that is that you start moving trust decisions away from applications that would otherwise have to make them, and into the Tailscale access control system, which is far better placed to make them.
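
    For the blog’s example of “only Alice can connect as root on the production server, as long as she authenticated in the last hour”, the rule lives in the tailnet policy file. Here’s a rough sketch of the shape of such a rule, written as a Python dict purely for illustration; the names are made up, and the authoritative syntax is Tailscale’s ACL documentation:

        # Rough sketch of a Tailscale SSH access rule in "check" mode, expressed
        # as a Python dict purely for illustration. Names are hypothetical;
        # consult Tailscale's ACL docs for the authoritative policy-file syntax.
        import json

        ssh_rule = {
            "action": "check",             # demand a recent re-authentication, not just a valid node key
            "checkPeriod": "1h",           # how recently the human must have re-authenticated
            "src": ["alice@example.com"],  # who may connect
            "dst": ["tag:prod"],           # which machines they may connect to
            "users": ["root"],             # which local accounts they may log in as
        }

        # Print the shape of the policy-file section this rule would sit in.
        print(json.dumps({"ssh": [ssh_rule]}, indent=2))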

    GitHub - LGUG2Z/unsubscan: A tool to help you find unsubscribe links in your emails

    https://github.com/LGUG2Z/unsubscan

    unsubscan — a tool to help you find unsubscribe links in your emails

    I created unsubscan because I think that anyone should be able to quickly and easily look at their emails and:

    • Unsubscribe from whatever they want
    • Unsubscribe whenever they want
    • Unsubscribe for free
    • Unsubscribe without yet another subscription service
    • Unsubscribe without having to give another company access to their emails
    • Unsubscribe without having to forward emails to other companies

    This looks like a nice tool for unsubscribing from mailing lists you no longer care about. At the moment, the tool requires you to export emails in .eml format, which is possible from a few webhosts, but notably not Microsoft 365 or Google Mail. Thunderbird and others can do that export for you, but the scanning is pretty simple, so changing this to support other mail hosts wouldn’t be hard.
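
    As a rough illustration of how simple that scan is, here’s roughly what it looks like over a folder of exported .eml files, using only Python’s standard library; this is a sketch of the idea, not unsubscan’s actual code:

        # Sketch of the kind of scan unsubscan performs, using only the Python
        # standard library: walk a folder of exported .eml files and pull out
        # unsubscribe links from the List-Unsubscribe header and message bodies.
        # Illustrative only; not unsubscan's actual implementation.
        import re
        import sys
        from email import policy
        from email.parser import BytesParser
        from pathlib import Path

        LINK = re.compile(r"https?://\S*unsubscribe\S*", re.IGNORECASE)

        def unsubscribe_links(eml_path):
            with open(eml_path, "rb") as f:
                msg = BytesParser(policy=policy.default).parse(f)
            links = set()
            # RFC 2369 header: comma-separated <mailto:...> / <https://...> entries.
            links.update(re.findall(r"<([^>]+)>", msg.get("List-Unsubscribe", "")))
            # Fall back to scanning the text/HTML bodies for likely unsubscribe URLs.
            for part in msg.walk():
                if part.get_content_type() in ("text/plain", "text/html"):
                    links.update(LINK.findall(part.get_content()))
            return links

        if __name__ == "__main__":
            # Usage: python scan.py /path/to/exported/eml/folder
            for eml in Path(sys.argv[1]).glob("*.eml"):
                for link in unsubscribe_links(eml):
                    print(f"{eml.name}: {link}")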