Cyberweekly #68 - Do we trust machines?

Published on Saturday, September 07, 2019

How much do we trust machines? It turns out, according to research I read this week, that the majority of people expect an automated aid to perform better at a task than a human. That can include examples such as navigation, driving aids, medical checklists and automated highlighting systems.

AI is a hugely wide and advanced field, but journalists and observers are often not equipped to understand many of the topics that come up. The more I read, the more I realise just how poor the average level of technical awareness actually is among policy and journalistic professionals. If you haven't read James Ball and Andrew Greenway's Bluffocracy, it's worth a read; in essence, the very people we expect to hold politicians and civil servants to account are often ill equipped to carry out that function. The typical technology journalist is expected to cover everything from cyber security to the launch of new Apple devices, from mobile phones to new networking equipment. And then we have journalists covering policy areas like health, defence and cyber, who very rarely have a technology background that lets them understand the impact of technology.

Behind all of this is the unending technological revolution that just keeps changing and improving computing, not to mention the sociological impact on the humans involved in these systems. I follow technology closely and have a background in writing code and building systems, and I feel like I can barely scratch the surface of AI advances in deepfakes, or the privacy impacts of our technology.

As Arthur C. Clarke famously said, "Any sufficiently advanced technology is indistinguishable from magic", and we have to trust in magic, because we can't do anything else. For most users of the systems that we build, they have to trust that the machine does magic and tells them what to do and when to do it. Trust is implicit in the use of these advanced systems, and that means we feel we have to trust the internet giants, or else opt out of modern society.

Of course, fraudsters and criminals like to take advantage of the blind trust that is granted to technology systems. How we build appropriate trust in our technology systems is an area of study that desperately needs work. In the meantime, we need to ensure that our systems are trustworthy and able to demonstrate their workings, so that trust can be built and maintained.

    The Enigma Machine

    This notebook simulates an Enigma Machine and visualizes how it works. The Enigma Machine is an especially neat thing to visualize because it was electromechanical. As you used it, it moved. Instead of circuit traces, it had beautiful real wires connecting its pieces.

    This is lovely to play with. If you haven't seen an Enigma in action, and seen how they are made, then it's a great visualisation that helps explain why the same input letter enciphers differently each time you press it. My only disappointment is that it only accepts 20 characters, so you can't see the moment where the first wheel comes back to its initial point and moves on the second wheel.

    (I also notice that the author has used the Enigma JavaScript implementations and constants from GCHQ's CyberChef tool as the underlying implementation)
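    To make the stepping behaviour concrete, here's a toy three-rotor Enigma in Python. It's a sketch, not the historical machine: real Enigmas had ring settings, a plugboard and a quirky double-stepping mechanism, whereas this version steps like a simple odometer. The rotor and reflector strings are the published Enigma I wirings.

```python
import string

ALPHABET = string.ascii_uppercase

# Published wirings for Enigma I rotors I-III and reflector B.
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",  # Rotor I (fast)
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",  # Rotor II (middle)
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",  # Rotor III (slow)
]
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def encipher(text, positions=(0, 0, 0)):
    """Toy Enigma: no ring settings, no plugboard, and plain
    odometer stepping instead of the historical double-step."""
    pos = list(positions)
    out = []
    for ch in text.upper():
        if ch not in ALPHABET:
            continue
        # Step the fast rotor before each letter; carry to the next
        # rotor every 26 steps -- the moment "the first wheel comes
        # back to the initial point and moves on the second wheel".
        pos[0] = (pos[0] + 1) % 26
        if pos[0] == 0:
            pos[1] = (pos[1] + 1) % 26
            if pos[1] == 0:
                pos[2] = (pos[2] + 1) % 26
        c = ALPHABET.index(ch)
        # Forward through the rotors (fast rotor first)...
        for r in range(3):
            c = (ALPHABET.index(ROTORS[r][(c + pos[r]) % 26]) - pos[r]) % 26
        # ...bounce off the reflector...
        c = ALPHABET.index(REFLECTOR_B[c])
        # ...and back through the rotors in reverse.
        for r in reversed(range(3)):
            c = (ROTORS[r].index(ALPHABET[(c + pos[r]) % 26]) - pos[r]) % 26
        out.append(ALPHABET[c])
    return "".join(out)
```

    Because the signal goes through the rotors, off the reflector and back again, each keypress's mapping is its own inverse: feeding the ciphertext back in with the same starting positions recovers the plaintext, and no letter can ever encipher to itself.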

    Apple Disputes Google’s Claims of a Devastating iPhone Hack - VICE

    A former Apple security employee criticized the company's reaction and its statement, saying it was misleading. For example, the former employee said, the fact that the attack was narrowly focused "doesn’t say anything about the security of iOS, merely about the restraint of Chinese attackers."

    "There was nothing keeping the Chinese from putting their exploit(s) in an advertising iframe and paying Huffington Post to serve it. They could easily have compromised tens of millions of iPhones, but chose not to. As a result, we didn’t find out about these attackers for years," the employee, who spoke on condition of anonymity, said.

    This is a poor response from Apple. These bugs (covered last week) are significant flaws. The fact that they were used in a highly targeted way doesn't make them any less serious, and doesn't change the impact for the communities that were affected. It's unlikely that the attackers would have kept the access they wanted if they had used the exploit on a much wider group of people: more people would have been aware, condemnation would have been higher, and nations would have been appalled at the attacks on their citizens. Instead they focused on people within their own borders, which can rightly be condemned, but no other states are going to step in to intervene.

    Spy chief says foreign espionage and interference an 'existential threat' to Australia | Australia news | The Guardian

    “It’s my view that currently, the issue of espionage and foreign interference is by far and away the most serious issue going forward,” Lewis said. “Terrorism has never been an existential threat to established states – for weaker states, yes, but for a place like Australia terrorism is not an existential threat to the state. It is a terrible risk that our populations run and it is a very serious matter which must be addressed every day: the counter-espionage and foreign interference issue, however, is something which is ultimately an existential threat to the state.”

    While terrorist attacks attracted intense public attention, Lewis said, the threat of espionage was often harder to immediately recognise.

    “The harm from acts of espionage may not present for years or even decades ... these sorts of activities are typically quiet, insidious and have a long tail.”

    This is a lovely description of the way that impact is part of the risk equation. Physical terrorism is bad for the individuals affected, and has an impact on morale and national spirit, but when it's individual incidents rather than an ongoing campaign, it doesn't have much impact on the existence of society or government in general. Espionage and foreign interference, by contrast, are evolving to have an ongoing impact on the structures of society itself.

    NSA looks to ‘up its game’ in cyber defense

    Neuberger, speaking Sept. 4 at the Billington Cybersecurity Conference, said NSA officials have heard that some of the information the agency provided – such as IP addresses and domain names – was a case of too little too late or didn’t include enough context for organizations to defend themselves properly.

    She explained that the NSA and the cybersecurity directorate want to arrive at a place where essentially offense is informing defense.

    “Bottom line, it’s recognizing the power we have to prevent an attack through rapid sharing - and ideally at the unclassified level - so it can be easily used to defend a network,” she said.

    The Billington cybersecurity conference seemed to have a lot of good sessions, judging by the writeups online. But this interests me. There are now two big agencies in cyber defence in the US: the NSA's new cybersecurity directorate, as well as the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency. It will be interesting to see how well they work together and how effectively they can share information.

    Those of you who work with the National Cyber Security Centre (NCSC) in the UK will know how hard it is for these organisations to get over the internal processes and ways of working that they inherit from their classified operations in their parent agencies. A lot of the staff and systems are shared, and habits of classifying information and not sharing are really hard to break, especially when those processes and routines are designed for the protection of the agency and drilled into staff constantly.

    You can see that in Neuberger's line that sharing should be rapid and ideally unclassified. That's not an ideal, it's the fundamental baseline for it to be effective.

    Twitter CEO and co-founder Jack Dorsey has account hacked - BBC News

    A group referring to itself as the Chuckling Squad said it was behind the breach of Jack Dorsey's (Co-Founder and Chief Exec of Twitter) account. [...] It is as yet unclear how the attackers gained access, though it appears a vulnerability in a third-party app could potentially have been to blame. The tweets appeared to be posted via Cloudhopper, a platform Twitter acquired in 2010 to help with SMS text integration.

    (Joel) Beyond stealing or guessing usernames and passwords, or attempting to circumvent multi-factor authentication (MFA, aka 2FA), authorised integrations can be used in nefarious ways, from tweeting to crawling personal data.

    Main takeaways for personal cyber security hygiene?

    1. Use a password manager for unique complex passphrases for each site/system.
    2. Enable multi-factor authentication wherever it is available (preferring app/push-based over SMS, but SMS is still good).
    3. Be mindful of allowing integrations, and purge unused ones when no longer required.

    $5.3M Ransomware Demand: Massachusetts City Says No Thanks | Threatpost

    New Bedford Mayor Jon Mitchell said that the attack had specifically dropped the infamous Ryuk ransomware, and that attackers had demanded a ransom of $5.3 million in Bitcoin.

    “On Friday, July 5, 2019, the City of New Bedford’s Management Information Systems (MIS) staff identified and disrupted a computer virus attack, known as ransomware, in the early morning hours before city employees began the work day,” according to a New Bedford press release. “The city’s MIS department has now completely rebuilt the city’s server network, restored most software applications, and replaced all of the computer workstations that were found to be affected. The attack did not disrupt the city’s delivery of services to residents. The city’s MIS staff is now addressing the internal impact on city government.”

    More Ryuk infections, but in this case, despite the insurer wanting to pay, the city's IT team was able to restore from backups. Described as a mixture of luck and skill, they suffered minimal impact to the city's IT systems. We'll see more and more of these, especially as the technical writeups on Ryuk seem to suggest that there are multiple actors with access to the malware source code, or variations of it.

    Monster says a third party exposed user data but didn’t tell anyone | TechCrunch

    A company statement attributed to Monster’s chief privacy officer Michael Jones said the server was owned by an unnamed recruitment customer, with which it no longer works. When pressed, the company declined to name the recruitment customer.

    “The Monster Security Team was made aware of a possible exposure and notified the recruitment company of the issue,” the company said, adding the exposed server was secured shortly after it was reported in August.

    Although the data is no longer accessible directly from the exposed web server, hundreds of résumés and other documents can be found in results cached by search engines.

    But Monster did not warn users of the exposure, and only admitted user data was exposed after the security researcher alerted TechCrunch to the matter.

    “Customers that purchase access to Monster’s data — candidate résumés and CVs — become the owners of the data and are responsible for maintaining its security,” the company said. “Because customers are the owners of this data, they are solely responsible for notifications to affected parties in the event of a breach of a customer’s database.”

    This is an interesting conundrum. Monster transferred the data to a third party, who then got breached. Monster claims it doesn't have to inform its users, even though most of them are probably entirely unaware of the transfer, because Monster itself didn't get breached.

    Legally, they are probably (if dubiously) right, but ethically...

    This will be interesting to watch, to see whether there were any European citizens in the data dump, and whether a European data protection regulator will take this on.

    Brave uncovers Google’s GDPR workaround

    Google Push Pages are served from a Google domain and all have the same name, “cookie_push.html”. Each Push Page is made distinctive by a code of almost two thousand characters, which Google adds at the end to uniquely identify the person that Google is sharing information about. This, combined with other cookies supplied by Google, allows companies to pseudonymously identify the person in circumstances where this would not otherwise be possible.

    All companies that Google invites to access a Push Page receive the same identifier for the person being profiled. This “google_push” identifier allows them to cross-reference their profiles of the person, and they can then trade profile data with each other.

    It's interesting whether this is deliberate or a mistake by Google. Google generally creates one identifier per person, per bidder, so that bidders cannot compare identifiers in order to match their databases. But the push id appears to be unique only per person, so bidders could theoretically use it to combine their datasets and identify a user. Whether they have done so, will do so, or are contractually allowed to do so is another matter, of course.
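    The difference between the two identifier schemes is easy to sketch. This is an illustration of the general technique, not Google's actual implementation: deriving each bidder's identifier with a bidder-specific key means two bidders see unrelated strings for the same person and can't line their records up, while a single per-person identifier is a ready-made join key.

```python
import hashlib
import hmac

def per_bidder_id(person_id: str, bidder_secret: bytes) -> str:
    # One identifier per person *per bidder*: keyed with a
    # bidder-specific secret, so identifiers for the same person
    # are uncorrelated across bidders.
    return hmac.new(bidder_secret, person_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

person = "user-12345"  # hypothetical person identifier

# Bidders A and B each get a different string for the same person,
# so comparing their databases reveals nothing.
id_for_a = per_bidder_id(person, b"secret-for-bidder-a")
id_for_b = per_bidder_id(person, b"secret-for-bidder-b")
assert id_for_a != id_for_b

# A single per-person identifier, by contrast, is identical for
# every bidder who receives it -- a join key for their datasets.
shared_push_id = hashlib.sha256(person.encode()).hexdigest()[:16]
```

    The privacy property lives entirely in that per-bidder key: drop it, and every recipient can cross-reference their profiles, which is exactly what Brave alleges the "google_push" identifier enables.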

    Strangelove redux: US experts propose having AI control nuclear weapons - Bulletin of the Atomic Scientists

    One of the risks of incorporating more artificial intelligence into the nuclear command, control, and communications system involves the phenomenon known as automation bias. Studies have shown that people will trust what an automated system is telling them. In one study, pilots who told researchers that they wouldn’t trust an automated system that reported an engine fire unless there was corroborating evidence nonetheless did just that in simulations. (Furthermore, they told experimenters that there had in fact been corroborating information, when there hadn’t.)

    I feel like I watched a movie about this back in the '80s!

    That aside, this is a really interesting study, if a little heavy in places. In essence it says that people overestimate how accurate computer aids are at making decisions compared to another human doing the same job, and that they suffer from a form of attention blindness: if the computer is suggesting something, they stop paying attention to the source inputs, even when those inputs are there and visible.

    Of course, social engineers have known this for years: if you present an ID card that matches what a guard is expecting, you'll tend to get waved in, even if other details don't quite add up. When we are building decision-making tools, or decision aid tools, we need to take these findings into account and consider how we can ensure that users have an appropriate level of trust, or distrust, in the algorithm.

    Inside the world of AI that forges beautiful art and terrifying deepfakes - MIT Technology Review

    GANs are having a bit of a cultural moment. They are responsible for the first piece of AI-generated artwork sold at Christie’s, as well as the category of fake digital images known as “deepfakes.”

    Their secret lies in the way two neural networks work together—or rather, against each other. You start by feeding both neural networks a whole lot of training data and give each one a separate task. The first network, known as the generator, must produce artificial outputs, like handwriting, videos, or voices, by looking at the training examples and trying to mimic them. The second, known as the discriminator, then determines whether the outputs are real by comparing each one with the same training examples.

    Each time the discriminator successfully rejects the generator’s output, the generator goes back to try again. To borrow a metaphor from my colleague Martin Giles, the process “mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another.” Eventually, the discriminator can’t tell the difference between the output and training examples. In other words, the mimicry is indistinguishable from reality.

    A great explainer for GANs and how they work.

    Watch the Everybody Dance Now video to see a neat demonstration in action. You can still see the visual distortions and changes at the moment, but this is only the start of such technology.
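    The forger-and-detective loop can be sketched in a few lines. This is an illustrative toy, not a real GAN: the "networks" here are a two-parameter generator g(z) = a·z + b and a logistic discriminator learning a one-dimensional Gaussian rather than images, but the alternating updates are the adversarial game the article describes.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a normal distribution centred on 4.
real = lambda: random.gauss(4.0, 1.0)
noise = lambda: random.gauss(0.0, 1.0)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, the "forger"
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c), the "detective"
lr = 0.05

for step in range(3000):
    x_real, z = real(), noise()
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: nudge g so the detective calls its fake "real".
    z = noise()
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w   # ascent direction for -log D(g(z))
    a += lr * grad * z
    b += lr * grad

# Since E[z] = 0, the generator's mean is b; it drifts toward the
# real data's mean of 4 as the two models try to outwit each other.
print(round(b, 2))
```

    The same back-and-forth, scaled up to deep convolutional networks and image data, is what produces deepfakes: training stops when the discriminator can no longer tell the forgeries from the training examples.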

    Artificial-intelligence voice used in a theft - The Washington Post

    The victim director was first called late one Friday afternoon in March, and the voice demanded he urgently wire money to a supplier in Hungary to help save the company in late-payment fines. The fake executive referred to the director by name and sent the financial details over email.

    The director and his boss had spoken directly a number of times, said Euler Hermes spokeswoman Antje Wolters, who noted that the call was not recorded. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” she said.

    After the thieves made a second request, the director grew suspicious and called his boss directly. Then the thieves called back, unraveling the ruse: The fake “‘Johannes’ was demanding to speak to me whilst I was still on the phone to the real Johannes!” the director wrote in an email the insurer shared with The Post.

    The money, totaling 220,000 euros, was funneled through accounts in Hungary and Mexico before being scattered elsewhere, Euler Hermes representatives said. No suspects have been named, the insurer said, and the money has disappeared.

    I keep predicting that deepfakes will be used to commit this kind of fraud on video calls, but I didn't think that audio fraud would be the first example. A great example of this technology in use, and of how fraudsters will use it.

    Still sticking to my prediction that the next stage is a deepfake video attached to a Skype call. The level of trust we place in our own eyes is really high, and we'll break all kinds of rules if we can see our boss telling us what to do.