Cyberweekly #222 - The AI Issue

Published on Sunday, February 19, 2023

A couple of months ago I saw some of the initial noise about ChatGPT, had a play, and asked it to generate a CyberWeekly for me. I shared the result with some friends, but in the run-up to Christmas, with everything else going on, I didn't author a CyberWeekly at the time and never got around to writing about it.

But it's really taken on a life of its own in the last few months, reaching 100 million users faster than any software before it and vastly outpacing the adoption curves of WhatsApp, Facebook, Google search and the iPhone. It's been the subject of a huge investment round and several major news stories.

Most of my friends who have done any work on AI aren't terribly surprised by the capabilities of large language models such as ChatGPT or Bing's Sydney. I've written before on AI, from bringing the dead back to life, to military uses of AI in drone warfare, to using GPT-3 to explain code, so the capabilities of GPT-3 are something I'm mostly familiar with. The thing that has surprised everyone is just how accessible the chat interface has made these AI tools, and how it has brought them into mass-market prominence.

But there's a downside to that, because as a society we're not terribly AI aware. People vaguely understand that computer vision means the computer can see out of a camera, and they might understand that a large language model enables it to string words together. But behind it all, most people still tend to think that AI means "Generalised Intelligence", that is, a computer that can think, reason and interpret, and that has its own thoughts, feelings and desires. AI develops in strange ways, though, and most of those ways resolve to mere simulacra of actual intelligence. The problem of generalising AI is still massively complex, and while large language models give us more tools to play with, at their heart they are simply statistical models.

But when we engage with a chat interface, it's human nature to personify the thing on the other end, to make it feel like it's somehow imbued with personality, with thoughts, feelings and desires.

Simon Willison used a term a while back, describing prompt engineering as a bit like "casting spells". His point was that we don't necessarily understand why these things work, but we can learn by rote that certain incantations produce specific outcomes. He later changed his mind about that language because he felt (and was challenged on the point) that it encouraged too much magical thinking in people who don't understand AI: it suggests that you are conjuring something from a self-thinking, self-aware entity of some form.

I liked the metaphor of magical rituals, the idea that we have to build a grimoire of words of power that enable us to engage with these tools in interesting ways. But that's because, at its heart, I know that the AI is simply a really big statistical prediction model.

The question for me is how to get excited about these tools and work out how to embed them into our workflows, without getting carried away or misunderstanding how people will deal with them. Because they seem almost magical, users will trust them far more than they might trust other systems, and that creates a cognitive dissonance: when the model is shown to have hallucinated a fact or given you wrong details, people will seek to rationalise the answers they got. "It's just having a bad day" or "I had annoyed it with my previous questions" will become ways of justifying the error, rather than anticipating that it might get things wrong and checking the results.

I think there's some really interesting future applications for large language models, but working out how we get there carefully, ethically, and with cultural awareness is going to be a huge challenge.

    What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings

    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    The basic concept of ChatGPT is at some level rather simple. Start from a huge sample of human-created text from the web, books, etc. Then train a neural net to generate text that’s “like this”. And in particular, make it able to start from a “prompt” and then continue with text that’s “like what it’s been trained with”.

    As we’ve seen, the actual neural net in ChatGPT is made up of very simple elements—though billions of them. And the basic operation of the neural net is also very simple, consisting essentially of passing input derived from the text it’s generated so far “once through its elements” (without any loops, etc.) for every new word (or part of a word) that it generates.

    But the remarkable—and unexpected—thing is that this process can produce text that’s successfully “like” what’s out there on the web, in books, etc. And not only is it coherent human language, it also “says things” that “follow its prompt” making use of content it’s “read”. It doesn’t always say things that “globally make sense” (or correspond to correct computations)—because (without, for example, accessing the “computational superpowers” of Wolfram|Alpha) it’s just saying things that “sound right” based on what things “sounded like” in its training material.

    The specific engineering of ChatGPT has made it quite compelling. But ultimately (at least until it can use outside tools) ChatGPT is “merely” pulling out some “coherent thread of text” from the “statistics of conventional wisdom” that it’s accumulated. But it’s amazing how human-like the results are. And as I’ve discussed, this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.

    This is one of the best explainers of how ChatGPT actually works. It gets right down into the details, and if you've never studied neural networks or AI before you might get lost, but I'd encourage you to press on, as Stephen comes back up from the detail and summarises every now and then.

    The most salient fact is that ChatGPT is, at its heart, just looking at the statistics of which word to select next. It has built a phenomenal model of how language works, and of various implied connections between words, which makes it feel like it understands concepts, but in reality it holds only a poor shadow of those concepts. This suggests, as Stephen says, that our language is actually far less complex than we think, and that our knowledge of such concepts is encoded in that language. But at the end of the day, ChatGPT and other large language models cannot reason about the world or the concepts underneath... at least not yet.
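
    To make the "which word to select next" point concrete, here is a toy sketch in Java. It is not how GPT models are actually built (the real thing scores tens of thousands of tokens with a huge neural network); the vocabulary and scores here are invented purely to show the pick-the-next-token step.

        import java.util.Map;
        import java.util.Random;

        public class NextTokenSketch {
            public static void main(String[] args) {
                // Toy "model": hard-coded scores (logits) for what might follow
                // "The cat sat on the". A real LLM computes scores like these,
                // over a vocabulary of tens of thousands of tokens, with a neural net.
                Map<String, Double> logits = Map.of(
                        "mat", 6.0, "sofa", 4.5, "roof", 3.0, "piano", 1.0);

                // Softmax: turn the raw scores into a probability distribution.
                double total = logits.values().stream().mapToDouble(Math::exp).sum();

                // Sample the next token in proportion to its probability, rather than
                // always taking the most likely one (roughly what "temperature" does).
                double r = new Random().nextDouble();
                double cumulative = 0.0;
                String chosen = null;
                for (Map.Entry<String, Double> entry : logits.entrySet()) {
                    chosen = entry.getKey();
                    cumulative += Math.exp(entry.getValue()) / total;
                    if (r <= cumulative) {
                        break;
                    }
                }
                System.out.println("Next token: " + chosen);
            }
        }

    Run repeatedly, this mostly says "mat", occasionally "sofa" and rarely "piano". A full model just does this over and over, one token at a time, with the scores produced by the network rather than written by hand.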

    Bing Is Not Sentient, Does Not Have Feelings, Is Not Alive, and Does Not Want to Be Alive

    https://www.vice.com/en/article/k7bmmx/bing-ai-chatbot-meltdown-sentience

    Text-generating AI is getting good at being convincing—scary good, even. Microsoft's Bing AI chatbot has gone viral this week for giving users aggressive, deceptive, and rude responses, even berating users and messing with their heads. Unsettling, sure, but as hype around Bing and other AI chatbots grows, it's worth remembering that they are still one thing above all else: really, really dumb.

    On Thursday, New York Times contributor Kevin Roose posted the transcript from a two-hour conversation he had with the new Bing chatbot, powered by OpenAI’s large language model. In the introduction to the article, titled "Bing's AI Chat Reveals Its Feelings: 'I Want to Be Alive,'" he wrote that the latest version of the search engine has been “outfitted with advanced artificial intelligence technology” and in a companion article, shared how impressed he was: “I felt a strange new emotion—a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.”

    What Roose was mostly impressed with was the “feelings” he said Bing’s chat was sharing, such as being in love with him and wanting to be human. However, Roose’s conversation with Bing does not show that it is intelligent, or has feelings, or is worth approaching in any way that implies that it does.

    Most recently, users have been reporting the chatbot to be “rude” and “aggressive,” such as when a user told Bing that it was vulnerable to prompt injection attacks and sent a related article to it. In Roose’s conversation, the chatbot told him, “You’re not happily married. Your spouse and you don’t love each other. You just had a boring valentine’s day dinner together. 😶” [MBS: Emphasis mine]

    This might seem eerie indeed, if you have no idea how AI models work. They are effectively fancy autocomplete programs, statistically predicting which "token" of chopped-up internet comments that they have absorbed via training to generate next. Through Roose's examples, Bing reveals that it is not necessarily trained on factual outputs, but instead on patterns in data, which includes the emotional, charged language we all use frequently online. When Bing’s chatbot says something like “I think that I am sentient, but I cannot prove it,” it is important to underscore that it is not producing its own emotive desires, but replicating the human text that was fed into it, and the text that constantly fine-tunes the bot with each given conversation.

    This conversation flagged something for one of my coworkers, which is the ethics around the use of AI. Lots of work has been done to consider how to prevent AI telling you how to commit crimes, and to try to prevent it being encouraged to repeat extremist views. But very little thinking has been done about the systemic ethical issues. The idea that an AI might tell you “Your spouse and you don’t love each other” raises a safeguarding issue. What will impressionable teens, people at their lowest points or people suffering from loneliness do if the AI starts telling them that their friends don’t like them?

    How and where such tools are used, and what training or warnings are given to the users, are interesting ethical questions that I don’t think the tech companies behind these systems are really considering.

    Introducing the AI Mirror Test, which very smart people keep failing - The Verge

    https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test

    In behavioral psychology, the mirror test is designed to discover animals’ capacity for self-awareness. There are a few variations of the test, but the essence is always the same: do animals recognize themselves in the mirror or think it’s another being altogether?

    Right now, humanity is being presented with its own mirror test thanks to the expanding capabilities of AI — and a lot of otherwise smart people are failing it.

    The mirror is the latest breed of AI chatbots, of which Microsoft’s Bing is the most prominent example. The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.

    […]

    What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient.

    I love this application of the mirror test. I think it hits the nail on the head: as humans, we love to personify. We've been doing it for thousands of years, from "old father time" to "the grim reaper" to "the Stormfather". We love to project onto nature and the world around us a sense of patterns, capability and intent that in many cases simply isn't there.

    In the same way that it doesn't actually thunder because Thor is fighting trolls, ChatGPT doesn't actually care about your feelings in any way, shape or form.

    From Bing to Sydney

    https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/

    Again, I am totally aware that this sounds insane. But for the first time I feel a bit of empathy for Lemoine. No, I don’t think that Sydney is sentient, but for reasons that are hard to explain, I feel like I have crossed the Rubicon. My interaction today with Sydney was completely unlike any other interaction I have had with a computer, and this is with a primitive version of what might be possible going forward.

    […]

    Here is another way to think about hallucination: if the goal is to produce a correct answer like a better search engine, then hallucination is bad. Think about what hallucination implies though: it is creation. The AI is literally making things up. And, in this example with LaMDA, it is making something up to make the human it is interacting with feel something. To have a computer attempt to communicate not facts but emotions is something I would have never believed had I not experienced something similar.

    This is the point where the author loses me. It's an interesting deep dive into the Bing Chat system (codenamed Sydney), but we need to remember that this chat system doesn’t want anything and doesn’t care what the human feels; all it does is statistically work out the next best word in any given sentence.

    GitHub Copilot now has a better AI model and new capabilities | The GitHub Blog

    https://github.blog/2023-02-14-github-copilot-now-has-a-better-ai-model-and-new-capabilities/

    To improve the quality of GitHub Copilot’s code suggestions, we have updated the underlying Codex model resulting in large scale improvements to the quality of code suggestions and reduction of time to serve those suggestions to the users.

    Case in point: When we first launched GitHub Copilot for Individuals in June 2022, more than 27% of developers’ code files on average were generated by GitHub Copilot. Today, GitHub Copilot is behind an average of 46% of a developers’ code across all programming languages—and in Java, that number jumps to 61%.

    This work means that developers using GitHub Copilot are now coding faster than before thanks to more accurate and more responsive code suggestions.

    Where developers choose to use Copilot, they really use it: it’s responsible for a large proportion of the lines of code that get written.

    For many languages, Java being a particular outlier in my experience, much of the code we write is pretty standard boilerplate: getters and setters that don’t do much other than validate values, class definitions and constructors that just copy their arguments, and so on. Every language has these, and Copilot is faster and better at writing them than humans are.

    The downside is that all of this generated code still carries a cognitive load when you try to read it to understand what it does. Every getter has to be inspected just in case it’s one of the 1% that actually does something different. AI making it easier to bulk up your codebase isn’t necessarily such a good thing, given there hasn’t been as much work put into GitHub Copilot explaining code (although ChatGPT can explain code samples, so it’s clearly possible).
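
    As a concrete (and entirely made-up) illustration of the sort of Java boilerplate in question, and of why every accessor still needs a glance, something like this is the bread and butter Copilot will happily write:

        // A routine value class: a constructor that copies its arguments and
        // getters that mostly just hand fields back.
        public class Account {
            private final String owner;
            private final long balancePence;

            public Account(String owner, long balancePence) {
                this.owner = owner;
                this.balancePence = balancePence;
            }

            public String getOwner() {
                return owner;
            }

            // The occasional getter that quietly does more than return a field --
            // the "1%" you still have to read the code for.
            public long getBalancePence() {
                if (balancePence < 0) {
                    throw new IllegalStateException("balance cannot be negative");
                }
                return balancePence;
            }
        }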

    AI-systems: develop them securely | Publication | AIVD

    https://english.aivd.nl/publications/publications/2023/02/15/ai-systems-develop-them-securely

    Artificial intelligence (AI) gives computerised machines the ability to solve problems on their own. More and more computer systems are using AI or incorporating ML models. In this publication, the NCSA of the General Intelligence and Security Service (AIVD) shares ways AI systems can be attacked and how you can defend against it happening.

    This is a useful guide to attacks on AI systems. As people integrate these into their normal processes and systems, knowing what the attack routes are is important.

    Great work from the Dutch here.

    Incident travel time - by Robert Ross - The Thought Drop

    https://www.bobbytables.io/p/incident-travel-time

    On-Call pages, customer support emails, and manual discovery are all unique ways we discover fires. In real life, we call emergency services (ie: 911) when we know something is wrong. But even 911 itself was an improvement on a convoluted system for getting help quickly. Residents had to know the local number for police and fire emergency services up until 1968, which is not very helpful when you’re in a different city visiting and suddenly need help.

    A standardized way of dispatching emergency services was a critical step to have full-cycle incident management for cities, especially ones operating at enormous scale like New York City. Since 2013, NYC has been publishing their response times to 911 calls. This data is a treat in so many ways, but it has a notable metric missing: resolution.

    Instead, NYC focuses on the part that is within the realm of their control: How fast they can route a 911 call to the right team suited for the job, and how long it takes for that team to travel to where the incident is occurring.

    This is an interesting tidbit amongst this review of how incident management can work. Measuring the end-to-end time, the Median Time To Recover (MTTR), might not actually tell you much, because so many confounding factors are outside your control.

    Instead, if you have an incident response team, then tracking the time to arrive at the incident might be a far better indicator of the maturity and capability of your incident response team.

    cURL audit: How a joke led to significant findings | Trail of Bits Blog

    https://blog.trailofbits.com/2023/02/14/curl-audit-fuzzing-libcurl-command-line-interface/

    In summary, our approach demonstrates that fuzzing CLI can be an effective supplementary technique for identifying vulnerabilities in software. Despite initial skepticism, our results yielded valuable insights. We believe this has improved the security of CLI-based tools, even when OSS-Fuzz has been used for many years.

    It is possible to find a heap-based memory corruption vulnerability in the cURL cleanup process. However, use-after-free vulnerability may not be exploitable unless the freed data is used in the appropriate way and the data content is controlled. A double-free vulnerability would require further allocations of similar size and control over the stored data. Additionally, because the vulnerability is in libcurl, it can impact many different software applications that use libcurl in various ways, such as sending multiple requests or setting and cleaning up library resources within a single process.

    It is also worth noting that although the attack surface for CLI exploitation is relatively limited, if an affected tool is a SUID binary, exploitation can result in privilege escalation (see CVE-2021-3156: Heap-Based Buffer Overflow in sudo).

    Running “fuzzers” on libraries to ensure that they validate their input correctly is reasonably common, but not terribly well understood. Using the same approach on CLI tools and the arguments they take is pretty novel; this is the first time I’ve seen it done, and it resulted in finding some bugs. As Maciej highlights here, there’s probably some security research value in looking at SUID binaries and fuzzing their arguments, as that could find you a chain for execution under the SUID bit, especially since argument parsing is generally done right at the beginning, likely before any fork/subprocess controls that drop the SUID privileges.
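
    To give a rough flavour of the idea, and only as a sketch (this is not Trail of Bits’ actual harness, which builds on proper coverage-guided fuzzers, and "target-cli" is a placeholder binary name), a naive argument fuzzer can be as simple as throwing randomised arguments at a tool and watching for signal-style exit codes:

        import java.util.Random;

        // Naive argument-fuzzing sketch: run a target binary with random printable
        // arguments and flag runs that look like they were killed by a signal.
        // Real harnesses (libFuzzer, AFL++) are coverage-guided and far smarter.
        public class ArgFuzzSketch {
            public static void main(String[] args) throws Exception {
                Random rng = new Random();
                for (int i = 0; i < 1000; i++) {
                    StringBuilder arg = new StringBuilder("--");
                    int len = rng.nextInt(64) + 1;
                    for (int j = 0; j < len; j++) {
                        arg.append((char) (33 + rng.nextInt(94))); // printable ASCII
                    }

                    Process p = new ProcessBuilder("target-cli", arg.toString())
                            .redirectErrorStream(true)
                            .start();
                    p.getInputStream().readAllBytes(); // drain output so the child can't block
                    int exit = p.waitFor();
                    if (exit > 128) { // on Linux, 128+N usually means "killed by signal N"
                        System.out.printf("exit %d with argument %s%n", exit, arg);
                    }
                }
            }
        }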

    Security Predictions - the highlights for 2023 | Splunk

    https://www.splunk.com/en_us/blog/leadership/security-predictions-the-highlights-for-2023.html

    I could only write about three areas before I could take no more hype: quantum, blockchain and machine learning as an attack vector. I know.

    So,... what are you going to do about quantum? Easy one to start: in 2023, there are waaaay more imminent threats to invest your security time and effort. Wait for NIST to finish its standardisation process fully, and maybe research those algorithms if you’re super keen. In the meantime, relax and use the buzzword as an excuse to make or refresh your asset inventory - it’s good practice anyway, but will help you to prepare for migration when the time is right (if ever). There’s a great standard from ETSI on this topic (TR 103 619) that has actionable and practical lists of questions to work through to improve crypto-agility and “get ready”.

    So,... the blockchain can be hacked? Like all technology, the theory can be untouchable but it doesn’t survive implementation. If you do use blockchain, I hope that 1) you have a good reason, and 2) that you understand the limitations, i.e. wallets can have vulnerabilities, just like all software. If you don’t use blockchain: don’t use it, until you meet 1) and 2) above.

    So,... what about when machine learning goes on the attack? A bit sensationalist, and I apologise, since ML going bad is usually unintentional. “Defend” through operationalisation, i.e. building a proper lifecycle - including securing the data supply chain (i.e. protect data that trains your ML models) - and making machine learning systems explainable. Then, if your ML does go rogue, at least you know where, when and why that happened.

    This is a nice essay about security predictions for 2023, and I think it’s right on the money, but the bit that stands out for me is this little addendum at the end: three hyped technologies that probably won’t have as much impact as people might have you think. I agree with Kirsty’s view here as well. Don’t use blockchain unless you understand it and its drawbacks. Don’t worry too much about the robots rising up. And get yourself a cryptographic inventory in preparation for quantum, but don’t rush to replace everything.