Cyberweekly #223 - Separating the reality from the hype in AI

Published on Sunday, March 26, 2023

AI is having a huge impact right now. Whenever I log into Twitter, most of what I see covers AI in some form.

There are other emerging technologies that I think are interesting, but AI is really having its moment in the spotlight, and boy is it having a moment!

We need to think carefully about AI through at least three fundamentally different lenses: the evolution of the technology, the evolution of use, and the evolution of misuse.

The technology behind AI hasn’t been this exciting for decades. All kinds of researchers and workers who have been working on this for years are popping up and discovering that their skills and capabilities are incredibly in demand. This means that there is increasing visibility into some of the ways that these AI models are being generated, optimised and used. The technology is accelerating at a substantial rate, with the costs of training models coming down, the size and complexity of models coming down, and the understanding of those models coming up.

But technologists love to get excited about their technology, and the Silicon Valley cadre of product managers, ex-Googlers and tech-bros that I follow on Twitter are probably not the best examples of normal people. They tend to fall into either AI adopters, who are keenly explaining how AI is making almost all jobs entirely redundant, or AI refuseniks, who are predicting the downfall of humanity (sometimes because they think AI will make all jobs redundant).

What we also need to do is understand the evolution of use and misuse of these technologies. How do early-adopting normal people feel about such technologies?

It takes time for new technology to drift into most industries, because most industries don’t employ, or often even care about, those Silicon Valley technologists. Adding a new technology can take time and effort. But looking at technology-adjacent companies, I’ve seen demos of AI embedded into Blender (a 3D rendering engine) and Visual Studio, and plenty of games developers creating games using the new GPT-4 model. Right now many of these are tech demos, but it starts to show how these tools might form part of the toolchain for creatives, used as a crutch to help them achieve their goals.

But of course, the one group that tends to be fast at adopting technology is unethical or criminal actors: people who want to make content as quickly as possible with as little effort as possible, because they know they are looking at tiny conversion rates. We’ve seen an increase in AI-generated content being submitted to journals and, as you can see below, to a magazine that publishes short fiction. Anything that was already burdened by SEO experts creating content farms is going to be looking at multi-fold increases in the amount of content it needs to manage.

If my gut feel is right, then we’re probably right at the hype peak of the Gartner hype cycle for LLMs in particular, which I suspect means that over the next few months we’ll see some spectacular successes and lots and lots of failures along the way. The key will be looking for which bits of this will survive, and ensuring that you know not just how it will impact your business, but also how it will impact your customers, your supply chain and your executives.

    ChatGPT plugins

    https://openai.com/blog/chatgpt-plugins

    Language models today, while useful for a variety of tasks, are still limited. The only information they can learn from is their training data. This information can be out-of-date and is one-size fits all across applications. Furthermore, the only thing language models can do out-of-the-box is emit text. This text can contain useful instructions, but to actually follow these instructions you need another process.

    Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data. In response to a user’s explicit request, plugins can also enable language models to perform safe, constrained actions on their behalf, increasing the usefulness of the system overall.

    We expect that open standards will emerge to unify the ways in which applications expose an AI-facing interface. We are working on an early attempt at what such a standard might look like, and we’re looking for feedback from developers interested in building with us.

    Today, we’re beginning to gradually enable existing plugins from our early collaborators for ChatGPT users, beginning with ChatGPT Plus subscribers. We’re also beginning to roll out the ability for developers to create their own plugins for ChatGPT.

    In the coming months, as we learn from deployment and continue to improve our safety systems, we’ll iterate on this protocol, and we plan to enable developers using OpenAI models to integrate plugins into their own applications beyond ChatGPT.

    This is a pivot point for ChatGPT itself. As more and more competitors arise, the power of ChatGPT will be dependent on its ability to be a critical part of a bigger ecosystem.

    These plugins enable ChatGPT to implement the ReAct model, allowing ChatGPT to formulate action plans and then call the plugins to resolve the actions. If you watch the videos, what it can do is pretty mindblowing, but there’s a note of caution in the docs.

    The AI itself determines when to execute the plugin, based on its own language model. That means you can’t necessarily rely on it calling your plugin repeatably or reliably.
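
    To make this concrete, here is a minimal sketch of the dispatch loop being described (emphatically not OpenAI's actual plugin protocol, just the shape of the idea): the model's own output decides whether a plugin gets called, an outer loop executes it, and the result is fed back in for the model to continue. The call_model stub and the plugin registry are invented stand-ins for illustration.

    ```python
    # Hypothetical sketch: the "model" decides to call a plugin; the harness
    # executes it and hands the result back so the model can finish its answer.
    import json

    PLUGINS = {
        "weather.lookup": lambda args: {"city": args["city"], "forecast": "rain"},
    }

    def call_model(messages):
        # Stand-in for a real LLM call. On the first turn it chooses a plugin;
        # once a tool result is in the conversation, it answers in plain text.
        if any(m["role"] == "tool" for m in messages):
            return "Yes, take an umbrella - the forecast says rain."
        return {"plugin": "weather.lookup", "args": {"city": "London"}}

    def run(user_prompt):
        messages = [{"role": "user", "content": user_prompt}]
        reply = call_model(messages)
        while isinstance(reply, dict) and "plugin" in reply:   # the model chose an action
            result = PLUGINS[reply["plugin"]](reply["args"])   # execute the plugin
            messages.append({"role": "tool", "content": json.dumps(result)})
            reply = call_model(messages)                       # let the model carry on
        return reply

    print(run("Do I need an umbrella in London today?"))
    ```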

    This could work brilliantly, and demos I’ve seen suggest that it’s pretty good, but it could also end up a bit like Alexa skills, where users end up having to remember exactly how to invoke the skill with just the right incantation.

    British people who use Alexa regularly may have experienced the frustration of having to learn to stop saying “Alexa, play BBC Radio 2” and start saying “Alexa, Ask the BBC to play Radio 2” in order to invoke the BBC skill rather than the TuneIn skill.

    ChatGPT runs the risk of repeating this user experience, and only time will tell if it works for end users.

    Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

    https://simonwillison.net/2023/Mar/17/beat-chatgpt-in-a-browser/

    Could we run it in a browser? Alpaca is effectively the same size as LLaMA 7B—around 3.9GB (after 4-bit quantization ala llama.cpp). And LLaMA 7B has already been shown running on a whole bunch of different personal devices: laptops, Raspberry Pis (very slowly) and even a Pixel 5 phone at a decent speed!

    The next frontier: running it in the browser.

    I saw two tech demos yesterday that made me think this may be possible in the near future.

    The first is Transformers.js. This is a WebAssembly port of the Hugging Face Transformers library of models—previously only available for server-side Python. It’s worth spending some time with their demos, which include some smaller language models and some very impressive image analysis models too.

    The second is Web Stable Diffusion. This team managed to get the Stable Diffusion generative image model running entirely in the browser as well!

    Web Stable Diffusion uses WebGPU, a still emerging standard that’s currently only working in Chrome Canary. But it does work! It rendered me this image of two raccoons eating a pie in the forest in 38 seconds.

    I think Simon is right here. The next frontier in the AI revolution is not about big central services that can do all kinds of things. It will be about “AI at the edge”: deploying pre-trained models into locations where they can access local data without leaking that data.

    A model running in your browser, or on your computer, could access all of your private and personal documents while you interact with it through your web browser, providing a truly personalised version of the AI that has a much better understanding of what you actually want.
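
    As a rough sanity check on the size figure quoted above (7B parameters at 4-bit quantisation), the arithmetic is simple; real quantised files such as llama.cpp’s add per-block scales, the vocabulary and other overhead, which is why the quoted number is nearer 3.9GB.

    ```python
    # Back-of-the-envelope size estimate for a 7B-parameter model at 4 bits/weight.
    params = 7e9
    bits_per_weight = 4
    size_bytes = params * bits_per_weight / 8
    print(f"~{size_bytes / 1024**3:.1f} GiB before format overhead")  # ~3.3 GiB
    ```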

    A Concerning Trend – Neil Clarke

    http://neil-clarke.com/a-concerning-trend/

    (Note: This is being published on the 15th of February. In 15 days, we’ve more than doubled the total for all of January.)

    I’m not going to detail how I know these stories are “AI” spam or outline any of the data I have collected from these submissions. There are some very obvious patterns and I have no intention of helping those people become less likely to be caught. Furthermore, some of the patterns I’ve observed could be abused and paint legitimate authors with the same brush. Regional trends, for example.

    What I can say is that the number of spam submissions resulting in bans has hit 38% this month. While rejecting and banning these submissions has been simple, it’s growing at a rate that will necessitate changes. To make matters worse, the technology is only going to get better, so detection will become more challenging. (I have no doubt that several rejected stories have already evaded detection or were cases where we simply erred on the side of caution.)

    One of the first vocal victims of large language models is this sci-fi magazine that pays its authors. The huge influx of AI-written short story submissions means that it has temporarily closed submissions while the editors work out how to detect and weed this stuff out.

    Note that this isn’t a problem with the AI itself, but rather with the humans around the world who have seen a tool, and seen a way to mass-generate submissions in the hope that even microscopically small acceptance rates will be worth the effort.

    We can expect to see this distort other markets. I’m sure it’s already affecting Amazon’s e-publishing system for creating Kindle books, and I’ve also seen reports of Midjourney-created pictures being submitted to art and photo competitions as well.

    ReAct: Synergizing Reasoning and Acting in Language Models

    https://react-lm.github.io/

    While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics.

    In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information.

    We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components.

    Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces.

    On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.

    If you thought ChatGPT and other LLMs were good already, with their chain-of-thought prompting, then wait until they start being able to flip forwards and backwards between a chain of thought and an action plan.

    This approach is likely to hugely improve the ability of locally run LLMs to appear far more competent, and critically, it may enable better, safer access to corporate data without leaking it to the underlying AI model in the same way.
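
    For a feel of what “flipping between a chain of thought and an action plan” looks like in practice, here is a toy sketch of the ReAct loop. The complete() stub stands in for the LLM and a tiny dictionary stands in for the Wikipedia API; real ReAct prompts also include one or two worked examples, as the paper describes.

    ```python
    # Toy ReAct-style loop: the "model" interleaves Thought and Action lines,
    # the harness executes the Action and appends an Observation, and the loop
    # ends when the model emits a finish[...] action.
    import re

    KNOWLEDGE = {"Colorado orogeny": "The Colorado orogeny raised the Rocky Mountains."}

    def lookup(query: str) -> str:
        return KNOWLEDGE.get(query, "No results found.")

    def complete(prompt: str) -> str:
        # Stand-in for the LLM: first emit a lookup action, then finish.
        if "Observation:" not in prompt:
            return "Thought: I should look this up.\nAction: lookup[Colorado orogeny]"
        return "Thought: I have enough information.\nAction: finish[It raised the Rocky Mountains.]"

    prompt = "Question: What did the Colorado orogeny do?\n"
    while True:
        step = complete(prompt)
        prompt += step + "\n"
        action = re.search(r"Action: (\w+)\[(.*)\]", step)
        if action.group(1) == "finish":
            print("Answer:", action.group(2))
            break
        prompt += f"Observation: {lookup(action.group(2))}\n"
    ```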

    Stanford CRFM

    https://crfm.stanford.edu/2023/03/13/alpaca.html

    Instruction-following models such as GPT-3.5 (text-davinci-003), ChatGPT, Claude, and Bing Chat have become increasingly powerful. Many users now interact with these models regularly and even use them for work. However, despite their widespread deployment, instruction-following models still have many deficiencies: They can generate factually incorrect information, propagate social stereotypes, and produce toxic language.

    To make maximum progress on addressing these pressing problems, it is important for the academic community to engage. Unfortunately, doing research on instruction-following models in academia has been difficult, as there is no open-source model that comes close in capabilities to closed-source models such as OpenAI’s text-davinci-003.

    In this post, we are releasing our findings about an instruction-following language model, dubbed Alpaca, which is fine-tuned from Meta’s LLaMA 7B model. We train the Alpaca model on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca shows many behaviors similar to OpenAI’s text-davinci-003, but is also surprisingly small and easy/cheap to reproduce.

    We are releasing our training recipe and data, and intend to release the model weights in the future. We are also hosting an interactive demo to enable the research community to better understand the behavior of Alpaca. Interaction can expose unexpected capabilities and failures, which will guide us for the future evaluation of these models. We also encourage users to report any concerning behaviors in our web demo so that we can better understand and mitigate these behaviors. As any release carries risks, we discuss our thought process for this open release later in this blog post.

    After my explanation of LLMs last time, I was left wondering a bit about some of our interactions and realised that I was missing something.

    I described LLMs as essentially autocomplete. The model is just trying to find the right next word to put in the document through statistical modelling. That makes sense, but what I was left wondering was how it could make sensible decisions between prompts like “What’s the difference between an alpaca and a llama” and “Write me an apology email to my boss for not doing as much work this week because I’ve been distracted by AI research”.

    It turns out I was missing a number of things, but the main one is the instruction-following model.

    What gets passed to the underlying model isn’t just your question. Instead, a layer looks at your question, tries to understand what form of question it is, and then turns it into a much bigger prompt. At OpenAI this instruction-following behaviour comes from training on a whole set of prompt questions and expected responses (the dataset behind text-davinci-003), but sadly that dataset is entirely closed. We don’t know what went into it.

    This release from Stanford lets us create and inspect our own version of this, so you can see how a small model turns prompts into, well… prompts.
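
    As a sketch of that idea: before the “autocomplete” model ever sees your question, it gets wrapped in an instruction-following template, something in the style of the one Alpaca uses (paraphrased here, not copied verbatim from the repo).

    ```python
    # The underlying model simply "autocompletes" whatever follows
    # "### Response:", which is what makes the output look like an answer.
    TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:\n"
    )

    def build_prompt(user_question: str) -> str:
        return TEMPLATE.format(instruction=user_question)

    print(build_prompt("What's the difference between an alpaca and a llama?"))
    ```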

    Don’t trust AI to talk accurately about itself: Bard wasn’t trained on Gmail

    https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/

    As always with language models, the trick to understanding why they sometimes produce wildly inappropriate output like this is to think about how they work.

    A large language model is a statistical next-word / next-sentence predictor. Given the previous sequence of words (including the user’s prompt), it uses patterns from the vast amount of data it has been trained on to find a statistically satisfying way to continue that text.

    As such, there’s no mechanism inside a language model to help it identify that questions of the form “how do you work?” should be treated any differently than any other question.

    We can give it hints: many chatbot models are pre-seeded with a short prompt that says something along the lines of “You are Assistant, a large language model trained by OpenAI” (seen via a prompt leak).

    And given those hints, it can at least start a conversation about itself when encouraged to do so.

    But as with everything else language models do, it’s an illusion. It’s not talking about itself, it’s completing a sentence that starts with “I am a large language model trained by ...”.

    So when it outputs “Google’s internal data:”, the obvious next words might turn out to be “This includes data from Google Search, Gmail, and other products”—they’re statistically likely to follow, even though they don’t represent the actual truth.

    This is one of the most unintuitive things about these models. The obvious question here is why: why would Bard lie and say it had been trained on Gmail when it hadn’t?

    It has no motivations to lie or tell the truth. It’s just trying to complete a sentence in a satisfactory way.

    I really enjoyed this from Simon this week, especially after the Bard incident where it told people it was trained on Gmail email data when it clearly wasn’t.

    I said it a while back, but we humans have a really strong intuitive feel for personification; our subconscious really wants to feel like this thing is alive. We need to constantly be on our guard, understanding how it works and why it might provide the answers it does.
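
    If it helps to see how little “self” is involved, here is a deliberately silly toy: a bigram table built from a couple of made-up sentences, then greedy next-word continuation of the seeded phrase. Nothing in it knows anything; it just picks the most common next word it has seen, which is exactly why a continuation can be statistically plausible and factually wrong at the same time.

    ```python
    # Toy "statistical next-word predictor": build bigram counts from fake
    # training text, then greedily continue a seed phrase word by word.
    from collections import Counter, defaultdict

    training_text = (
        "I am a large language model trained by Example Corp . "
        "This includes data from search , mail , and other products ."
    )
    words = training_text.split()
    bigrams = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

    sentence = ["trained", "by"]
    for _ in range(8):
        candidates = bigrams[sentence[-1]].most_common(1)
        if not candidates:
            break
        sentence.append(candidates[0][0])

    print(" ".join(sentence))  # a plausible-looking continuation, with no notion of truth
    ```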

    How WIRED Will Use Generative AI Tools

    https://www.wired.com/story/how-wired-will-use-generative-ai-tools/

    Writing stories is another matter, though. A few publications have tried—sometimes with disastrous results. It turns out current AI tools are very good at churning out convincing (if formulaic) copy riddled with falsehoods.

    This is WIRED, so we want to be on the front lines of new technology, but also to be ethical and appropriately circumspect. Here, then, are some ground rules on how we are using the current set of generative AI tools. We recognize that AI will develop and so may modify our perspective over time, and we’ll acknowledge any changes in this post. We welcome feedback in the comments.

    This is the first journalistic statement on the use of AI that I’ve seen, and it’s something that I applaud.

    I’m generally very AI bullish, I think that it’s mostly a great technology and that it is going to transform the way we interact with technology for the better. But I also think that it’s going to have substantive impacts on the creative industry in particular, and I think clearly labelling when it’s been used, and having an internally coherent statement of usage is a powerful indicator of an organisation you can trust.

    I don’t necessarily agree with WIRED’s decisions in some of these areas; I’m fine with AI being used to edit and summarise articles, and with image generation as well. But that’s what’s important about these statements: they allow us as consumers to make decisions about which publishers to read that align with our ethical choices. WIRED holds itself to a higher standard than I need it to, and as such I’ll continue reading and supporting where possible.

    Not making anti-ChatGPT one's whole thing

    https://dznz.substack.com/p/not-making-anti-chatgpt-ones-whole

    Mostly it has been crypto and its hell spawn like NFTs, but we’ve also had the Unicorn era of “companies” designed specifically to break existing systems like regular employment, stable workspaces, public transport, mail and couriers; and “self-driving cars” that don’t and can’t; social media designed to hurt us ; and now ChatGPT.

    [Twitter disclaimer that these disruptions vary in malignancy and danger - their common nature is their use of an unquestioning pipeline of hype and legitimacy-building] Sometimes these things are complicated tech such that, while the lay person might suspect yet another scam, they can’t prove it, and regular commentators simply publish PR lies uncritically or even shill for them. So it falls on the loud tech folk to find ways to communicate the scam and reassure people that their instincts are not, in fact, unfounded Fear, Uncertainty, and Doubt.

    The problem is that these are voices in the dark. The hype engine they are countering is huge and well-funded. And so a magazine can run an uncritical puff piece for an alleged fraudster one month, but hey, next month they and their writers are onto the next thing.

    And so it falls on regular nerdy folk like Molly White, Cory Doctorow, Folding Ideas and, to a more specialised extent, Cas Piancey, David Gerard etc, to get pulled into having to refute and debunk yet another cycle […]

    I like to read opinions that I disagree with in part because it challenges my own preconceptions. As such, I read posts against the rise of AI and Large Language Models to understand just what the realities are of them, and I couldn’t agree more with this summary.

    There are a lot of people on Twitter, Substack and the media who are in many ways rewarded for uncritically joining the hype train, telling you how amazing this AI stuff will be. Instead it falls to a small cadre of technologists who blog to hold that narrative to account and keep reminding us that “it’s not actually alive, it doesn’t have opinions, and it doesn’t want to escape”.

    Mythbusting cloud key management services

    https://www.ncsc.gov.uk/blog-post/mythbusting-cloud-key-management-services

    Myth 2: ‘It’s better to generate and use your own keys than to rely on a KMS’ Many KMSs can be used in a mode where you bring your own encryption key (BYOK), or hold your own key (HYOK), rather than use keys generated by the KMS. Some customers use HYOK or BYOK because regulations require them to generate their keys, but others don’t feel comfortable trusting the cloud service to generate keys securely.

    However, you already rely on the KMS to generate and protect keys well for the foundation of the cloud service. You also have to trust that the KMS handles data encryption keys securely, no matter how key encryption keys are generated or protected. Furthermore, when keys are generated externally and imported into or accessed by the KMS, you’re just providing more opportunities for the key to be lost or stolen. For all these reasons you should avoid HYOK or BYOK where you can.

    This is a great article, and I think that it’s spot on with many of these myths.

    Calling out this second myth in particular feels quite bold. As the blog post says, currently lots and lots of regulations, security compliance certifications and other expert schemes might require you to do this.

    Those schemes are wrong, for exactly the reasons set out here, but the concept that you might trust a third party to manage your keys on your behalf is still perceived as a bit of an odd decision in many cases.

    With a good bit of work, you can probably convince your accreditor or assessor that you have successfully subcontracted key management to a competent authority, but I think it’s going to be hard work, and certainly not something they’ll approve by default.
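
    For readers less familiar with the KEK/DEK split the NCSC post relies on, here is a minimal envelope-encryption sketch using AWS KMS via boto3 and AES-GCM locally. The key alias and region are assumptions and you need working AWS credentials; any major cloud KMS has an equivalent “generate data key” operation.

    ```python
    # Envelope encryption sketch: the KMS generates the data encryption key (DEK)
    # and returns it both in plaintext (for immediate use) and wrapped under the
    # key encryption key (KEK), which never leaves the KMS.
    import os

    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms", region_name="eu-west-2")  # region is an assumption

    resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")  # alias is an assumption
    plaintext_dek, wrapped_dek = resp["Plaintext"], resp["CiphertextBlob"]

    nonce = os.urandom(12)
    ciphertext = AESGCM(plaintext_dek).encrypt(nonce, b"sensitive record", None)

    # Store ciphertext + nonce + wrapped_dek, and discard the plaintext DEK.
    # To decrypt later, ask the KMS to unwrap the DEK again:
    dek = kms.decrypt(CiphertextBlob=wrapped_dek)["Plaintext"]
    print(AESGCM(dek).decrypt(nonce, ciphertext, None))
    ```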

    r/RedditEng - You Broke Reddit: The Pi-Day Outage

    https://www.reddit.com/r/RedditEng/comments/11xx5o0/you_broke_reddit_the_piday_outage/

    All at once the site came to a screeching halt. We opened an incident immediately, and brought all hands on deck, trying to figure out what had happened. Hands were on deck and in the call by T+3 minutes. The first thing we realized was that the affected cluster had completely lost all metrics (the above graph shows stats at our CDN edge, which is intentionally separated). We were flying blind. The only thing sticking out was that DNS wasn’t working. We couldn’t resolve records for entries in Consul (a service we run for cross-environment dynamic DNS), or for in-cluster DNS entries. But, weirdly, it was resolving requests for public DNS records just fine. We tugged on this thread for a bit, trying to find what was wrong, to no avail. This was a problem we had never seen before, in previous upgrades anywhere else in our fleet, or our tests performing upgrades in non-production environments.

    For a deployment failure, immediately reverting is always “Plan A”, and we definitely considered this right off. But, dear Redditor… Kubernetes has no supported downgrade procedure. Because a number of schema and data migrations are performed automatically by Kubernetes during an upgrade, there’s no reverse path defined. Downgrades thus require a restore from a backup and state reload! We are sufficiently paranoid, so of course our upgrade procedure includes taking a backup as standard. However, this backup procedure, and the restore, were written several years ago. While the restore had been tested repeatedly and extensively in our pilot clusters, it hadn’t been kept fully up to date with changes in our environment, and we’d never had to use it against a production cluster, let alone this cluster. This meant, of course, that we were scared of it – We didn’t know precisely how long it would take to perform, but initial estimates were on the order of hours… of guaranteed downtime. The decision was made to continue investigating and attempt to fix forward.

    About 30 minutes in, we still hadn’t found clear leads. More people had joined the incident call. Roughly a half-dozen of us from various on-call rotations worked hands-on, trying to find the problem, while dozens of others observed and gave feedback. Another 30 minutes went by. We had some promising leads, but not a definite solution by this point, so it was time for contingency planning… we picked a subset of the Compute team to fork off to another call and prepare all the steps to restore from backup.

    In parallel, several of us combed logs. We tried restarts of components, thinking perhaps some of them had gotten stuck in an infinite loop or a leaked connection from a pool that wasn’t recovering on its own

    This is a fun writeup, full of dense technical details about Calico, Kubernetes and the technology that powers a major site like Reddit.

    But more interesting here is just how people respond and react when there is an incident going on, and how important human processes are.

    Within minutes, all hands were “on deck” and in the call. That’s presumably a live meeting, but it could be an incident Slack channel. They realised early on that they needed a plan, so the incident manager took charge of formulating plans, directing people to go and look into alternatives and actions, and then regrouping.

    This act of constantly splitting work up and then recombining is a superpower during an incident, and something I’ve only seen in highly skilled operations teams that practice incident response. Normally everyone wants to “chase the ball”, to use a sporting metaphor: everyone dives in, contributing ideas and focusing on “the thing”. But this ability to direct the group, and to set some people on tasks that aren’t directly focused on the incident, such as how to restore from backup, can give you more options if other ideas don’t work out.

    Practice, practice and practice. It’s the only way that you’ll build interpersonal trust, that you’ll have people not panic, and that you’ll have a smooth operating process during an incident.

    Exploiting aCropalypse: Recovering Truncated PNGs | Blog

    https://www.da.vidbuchanan.co.uk/blog/exploiting-acropalypse.html

    The bug lies in closed-source Google-proprietary code so it's a bit hard to inspect, but after some patch-diffing I concluded that the root cause was due to this horrible bit of API "design": https://issuetracker.google.com/issues/180526528.

    Google was passing "w" to a call to parseMode(), when they should've been passing "wt" (the t stands for truncation). This is an easy mistake, since similar APIs (like POSIX fopen) will truncate by default when you simply pass "w".

    Not only that, but previous Android releases had parseMode("w") truncate by default too! This change wasn't even documented until some time after the aforementioned bug report was made.

    This is a horrific bug, one that may well put lives in danger, expose people to fraud and otherwise destroy lives.

    The worst part about this bug is that it has been embedding invisible data into images shared at almost any time since the behaviour changed. The only saving grace here is that it looks like previous Android releases didn’t exhibit the bug.

    But think of any time you’ve taken a photo and then cropped out data. It could have been hiding identifying details, cropping out people you didn’t want in the photo, or removing info that shouldn’t be shared. There’s a possibility that the original data could be recovered.

    But why did the bug happen? The fact that it comes down to a very simple parameter in a low-level API reminds us just how vulnerable software is. Our world is built on horrifically complex piles of code, and even tiny changes can have huge implications.
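
    You can see the failure mode in miniature with a few lines of Python: open an existing file for writing without truncation, write something shorter, and the tail of the original contents stays on disk, which is exactly why cropped-out image data was recoverable. (This is an illustrative sketch of the mechanism, not the Android code path itself.)

    ```python
    # Writing a smaller "edited" file over a larger original without truncation
    # leaves the old tail in place, ready to be recovered.
    from pathlib import Path

    p = Path("screenshot.bin")
    p.write_bytes(b"FULL-ORIGINAL-IMAGE-DATA" * 4)  # pretend full-size screenshot

    with open(p, "r+b") as f:   # write without truncating, like parseMode("w") after the change
        f.write(b"CROPPED")     # the smaller edited image only overwrites the start

    print(p.read_bytes())       # b'CROPPEDIGINAL-IMAGE-DATA...' old data is still present
    ```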

    We updated our RSA SSH host key | The GitHub Blog

    https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/

    This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository. We immediately acted to contain the exposure and began investigating to understand the root cause and impact. We have now completed the key replacement, and users will see the change propagate over the next thirty minutes. Some users may have noticed that the new key was briefly present beginning around 02:30 UTC during preparations for this change.

    Please note that this issue was not the result of a compromise of any GitHub systems or customer information. Instead, the exposure was the result of what we believe to be an inadvertent publishing of private information. We have no reason to believe that the exposed key was abused and took this action out of an abundance of caution.

    This is pretty bad as incidents go, but also not quite as bad as it could be, thanks to the way that SSH does its cryptography.

    Publishing the private key means that any attacker with a copy of the key can carry out the same mathematical operations as the server and client. This normally gives rise to two types of attack: decryption and impersonation.

    In this case, SSH tends not to use the private keys for encryption/decryption, but instead negotiates a new key for every connection. This means that an attacker in possession of the private key probably cannot go back through time and look at previously captured traffic and decrypt it.

    However, this does give rise to impersonation attacks, and the way that clients handle host keys makes this complex as well. An attacker with a copy of the key who sets up a badgithub.com-style domain can present the original github.com host key and appear legitimate. GitHub has released a new key and a new public fingerprint, so clients will be asked to trust the new one, but many SSH clients cache the old server’s public key, and there’s no obvious revocation list that will be automatically checked.

    You can fix this by removing the old GitHub key from your cached SSH known hosts, then connecting again and validating the new fingerprint against the published public key (like you did the first time, right?).
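
    If you want to check what you currently trust, the fingerprint is just a SHA256 hash of the cached key blob, so a few lines of Python will print the fingerprints of your cached github.com entries for comparison against GitHub’s published values, the same way ssh-keygen -lf does. (A rough sketch: known_hosts entries with hashed hostnames won’t match this simple substring check.)

    ```python
    # Print SHA256 fingerprints of cached github.com host keys from known_hosts
    # so they can be compared against the fingerprints GitHub publishes.
    import base64
    import hashlib
    import pathlib

    def fingerprint(key_blob_b64: str) -> str:
        digest = hashlib.sha256(base64.b64decode(key_blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

    known_hosts = pathlib.Path.home() / ".ssh" / "known_hosts"
    for line in known_hosts.read_text().splitlines():
        if line.strip() and "github.com" in line:
            host, keytype, blob, *_ = line.split()
            print(host, keytype, fingerprint(blob))
    ```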

    There are probably some questions to be asked about just how the private key was able to be published. These sorts of keys should ideally be managed by machines and held in trust, such that no human’s computer should ever have a copy of the private key. But nobody does that when they are creating their startup, and these technical decisions are easy to make in theory but hard in practice.