Guest: Charles Humble
In the third episode of Asynchronous and Unreliable, amongst many other subjects, Anne and writer, author, and editor Charles Humble discuss the past and future of human communication.
Watch on YouTube
Listen on Spotify
Listen on Apple Podcasts
Read shownotes & transcript below
Discover how communication methods in tech have evolved over decades, from early magazines and conferences to the challenges and opportunities presented by AI and large language models. This episode explores the importance of clear, honest, and varied communication for tech progress and societal impact.
In this episode:
The historical shifts from magazine learning, to books, to conferences and open source, and finally even podcasts for tech education
The role of conferences in motivating further reading and understanding complex topics
How the reliability of spoken vs. written communication impacts knowledge sharing
The significance of context, audience, and repetition in effective communication
Challenges of AI-generated content: surface-level writing, fact-checking, and nuance
The societal implications of transparency and responsible tech communication
Strategies for engaging non-technical audiences in tech debates
Personal stories and lessons from industry veterans about editing, clarity, humility and not giving up (all early drafts are rubbish)
Link to Anne's 2020 Tech Ethics course at the University of Hertfordshire.
Anne Currie (00:00)
Hello and welcome to Asynchronous and Unreliable, a new weekly podcast where we discuss the most interesting ideas and concepts in tech. I'm your host, Anne Currie, co-author of Building Green Software and The Cloud Native Attitude, and author of the science fiction Panopticon series. And for our guest today, we have my friend, long-term collaborator, and co-author of the second version of The Cloud Native Attitude, Charles Humble. So Charles, do you want to introduce yourself?
Charles Humble (00:27)
Sure, yeah. So I've been in IT in various different ways for, gosh, over 30 years at this point. I started in desktop support. I then became a programmer and a senior programmer and architect and eventually a CTO. I was CTO for a startup that got acquired. And then I switched to more content stuff. So I was InfoQ's chief editor from about 2014, I think, to 2020, about six years there.
And then Chief Editor for Container Solutions, where you and I both worked from, in my case, 2020 to about 2023. And now I am back freelancing. So I do about two and a half days a week as a consultant. And I do about half of that on sustainability. And I do about the other half of it working with, it's actually very hard to describe, working with teams who are doing really cutting edge things, helping them to think through what they're doing and articulate what they're doing.
It's the easiest way I can describe it. Otherwise we'll be here all day. So that's about half of my time. And then the other half of my time, I do some training and I do some tech journalism stuff. I speak at conferences, and, you know, I'm a middle-class white guy, so obviously I also host a podcast, because that's the law now, I think. I'm a composer and half of an ambient techno band called Two Fish. I'm also a father and have two children. So life is pretty busy.
Anne Currie (01:51)
Now, it's interesting that you said we'd be here all day if you started talking about what you do day to day, because that is in fact exactly what I want to talk about on the podcast today. So, although you say you were Chief Editor for Container Solutions where we both worked, we actually met some years before that, when you were editor of InfoQ Magazine.
Anne Currie (02:20)
And that is related to the topic that I want to be talking about today. So we met when we were co-organizers of a big UK tech conference called QCon - for many years, actually - and we've worked on lots of projects since. But I kind of want to talk about that and about communicating complex tech concepts.
Anne Currie (02:49)
At the time, and it was nearly 10 years ago that we first started working together on QCon, tech conferences were the way to communicate complicated topics, really. Well, obviously you can read books, and we both write books, but tech conferences were a more interactive way to do it. And I want to talk today about how the communication of complex subjects has evolved over your career, because you've been absolutely part of the inside track on communicating complex topics in tech for decades.
Charles Humble (03:34)
Yeah, that's such an interesting area. And you're right, I think there was a period of time when all of that was happening at conferences. I mean, it goes in cycles, I think. So when I was growing up and learning programming, I learned from reading programming magazines. If you go back to the 8-bit era, there were magazines. There was one I remember called Input, for example, and every week it would have a bunch of programming listings and you could type them in, and that's pretty much how I learned to code. There were also games magazines; there was one called Zzap!64, and they would get games developers who had worked out how to do really interesting things with the hardware of the day to write articles about what they'd discovered.
Some of the Commodore 64 games, I might get this wrong now, but as I recall it, you could have 16 sprites at any one time. And then someone worked out a way you could basically fool the processor into allowing you to have a whole load more by using very low level raster interrupts or something like that. I forget the details now. It was 40 years ago. But it would have been a raster interrupt of some kind. And again, somebody cracked, worked out how to do that. And then they put that in a magazine and then we could all know how to do it.
So there was a time when that was how we learned. And in fact, because I have a sort of weird background, because I did an English literature degree and then got a job in computing, I had to learn all of my computing through a mixture of on-the-job learning, but also by going off and reading The Structure and Interpretation of Computer Programs and the Gang of Four Design Patterns book. And that was how I learned the stuff that I would otherwise have learned through my degree. I learned that through books.
But you're right, I think there was a shift for a while. And I think it happened probably as a result of a couple of different things. One of those was the period of time when software became very proprietary and became a sort of secret sauce that we weren't allowed to talk about.
Anne Currie (05:57)
That was the situation at the beginning of my career 30 years ago. There was no information. It was illegal to reverse engineer code, but you ended up having to do it all the time. You fired things in and then saw what came back out, because you needed to know what the APIs looked like to talk to the code. And often the documentation, even if it was supposed to be documented, was so terrible that the only way you could work out how it worked was to reverse engineer it.
Charles Humble (06:27)
Microsoft is a bit of a counter-example to that, I think. I mean, you could make an argument that says the reason we ended up with proprietary software was because of Bill Gates. If you go back to his very early days, there was a memo that he wrote, a home computing memo, and he was basically saying, you know, sharing software is death to the software industry. But a side effect was that Microsoft was one of the very few tech companies that took technical documentation seriously. Because they had to.
Because if you wanted people to build on Windows, the only way you could do that was if the APIs were properly documented. So they hired really good writers. They paid them properly, which was a novelty. And as a result, the Windows API documentation is still the benchmark of really good documentation. And actually, Azure has its faults, but that mentality, I think, carried on through. If you look at Azure's docs versus Google's docs and AWS's docs, none of them are perfect, but Microsoft's are so much better than anyone else's. And it's because, I think, they had a culture of taking documentation really seriously, kind of from the get-go. But yeah, I certainly remember early Java days, and literally there were things you could use that would reverse bytecode back into readable Java code. There were ways to do that. And I remember trying to figure out what somebody was doing, and the only way I could do it was literally to reverse engineer the source code and go and read it. So you're right.
Anne Currie (08:13)
Yeah. We're just showing our age here.
Charles Humble (08:23)
Yeah. So, your point about conferences. I think conferences are a great opportunity to get excited and inspired. I love doing conferences. I love talking at conferences. I have had genuinely career-changing moments at QCons, for example.
There's one I trot out from time to time, but it is absolutely true. I went into a room, it was before AWS existed, or at least before it was public, and it was someone from Amazon talking about their deployment pipeline and the fact that they were deploying to production multiple times a day. And I'd never heard anything like it. It was just...
I mean now, of course, it is absolutely routine, but at the time, I think we were deploying four times a year to production and we were like pretty cutting edge. Do you know what I mean? Like we were pretty good, right? And there was this chap from Amazon saying, yeah, yeah, we literally deploy to the website multiple times a day. And you're like, huh? And that was such a sort of game changing moment. But I also think it's true that the amount that you remember from a conference talk is not a lot, right? I normally say when I'm mentoring new speakers, I normally say to them, you know, really and truly, if someone comes out of the room and remembers three things you said, you're doing pretty well, right? What they will remember is how you made them feel and the sort of sense and the impression. And they might even think six months later, gosh, I wonder whether there's a video of that. Cause I seem to remember Anne saying something like that, I wonder what she said, but they won't remember what you said, chances are. Unless they like writing a lot of notes, right? So there are immense advantages of written communication.
Anne Currie (10:17)
Well, I'm going to break in here and say that I really like what you said. Conferences are terrible for communicating the actual details of the information. As you say, you do have to go off and read it. But they're amazingly good for giving you the motivation to go off and do the reading, because what they provide you with is social proof. It's everybody going, "yeah, that sounded like a good idea." It's where you all come together and ask, is that a stupid idea or is that a great idea? And that's the first step, really.
Charles Humble (11:04)
Yes. A hundred percent. And also the value of being in a group of your peers, and having that ability to go up to a speaker who might be the world authority on a thing and go, "have you got a minute? Can I just ask you..." and have that conversation. That's amazing. I'm a huge fan. I probably speak at seven or eight big conferences and several meetups a year, and go to more. I'm absolutely not knocking it. I'm just saying there's a mistake I see people fall into a lot with communication, which is that they imagine that if you communicate a thing once, it is communicated. And as you know, that's not true. You have to keep making similar points or the same points, or finding new ways to say things, but you also need different approaches.
We talk a lot about learning surfaces, which is a slightly bizarre phrase, but really all we mean is that people learn differently. So some people are very visual. I basically learn by reading, but some people learn by drawing diagrams or looking at pictures or whatever. I'm not a very visual thinker, so that doesn't really work for me. But my point is, if you're trying to communicate, you need a variety of different ways to communicate.
Conferences are one, written is another. But it's all variations on a theme.
Anne Currie (12:43)
I was thinking about this earlier today: the communications thing. I'm very pleased with the name of this podcast, Asynchronous and Unreliable. It's a distributed systems term, and you can relate distributed systems terms to all kinds of communications.
So, for example, a conference is in theory synchronous, but it's still unreliable. You're there in front of people, you're talking and they're listening, so it's synchronous. But you don't know whether they're actually listening; you don't know whether they're taking away what you're saying, or what you think you're saying. It has the appearance of being synchronous, and it has the appearance of being reliable, but it's in fact utterly unreliable.
Charles Humble (13:36)
I want to quickly yes-and you on that, because the other big difference between a conference and a book, or a reasonably serious bit of technical journalism, is that with a book or an article, someone will have fact-checked it. And when someone is speaking, it's just their opinion, right? Their opinion might be very well founded and very well sourced, and they might have done the research and done the fact-checking, but there is a big difference between that and an article that multiple people have gone through and followed up on, and gone, "are you sure about that? Is that really true?", and rung sources and asked, "can you just confirm this is what you meant?", and all the stuff that proper journalism involves. That stuff obviously doesn't happen with a conference talk. So again, I'm not saying talks aren't valuable, I absolutely think they're incredibly valuable, but there is definitely a distinction.
So your unreliable thing, there's a level of... it's a bit like the difference between a personal blog and The Times, where you would hope that The Times would be a bit more serious about the fact-checking.
Anne Currie (14:41)
So I had an interesting discussion, oddly enough, with my husband, Jon, who you've met and who is also on this podcast. We had both just read the new Hannah Ritchie book on the energy transition, "Clearing The Air", which is very good. I would totally recommend it; it was excellent. And we both noted when we were reading it that there was a sentence towards the end of the book where you read it and you thought, well, I know that's not what you think. It was quite clear that the word "not" had been omitted from the sentence, so the sentence was clearly and obviously intending the opposite of what it actually said. And it was a good reminder that even if you're publishing with a tech publisher, a book is usually not edited by a tech editor. It's quite easy for an editor to apply grammatical rules but not apply the sense rules, because they don't know the answer. So even with a book, you have to use your brain.
Charles Humble (16:24)
That's reminding me of one of my favorite stories ever. So James Joyce, the early 20th-century Irish novelist, wrote in a particular style, stream of consciousness, which is basically, you know, imagine trying to capture the thoughts that are running through your head. And over time he got more and more extreme. He effectively invents his own language as he goes. It starts off as a sort of Irish dialect; by the time you get to things like Finnegans Wake, we used to joke at university that the only person who could possibly actually read this book and understand it was James Joyce. But there is a story, and I will say I'm not 100% sure it's true, but it's a great story, that Joyce deliberately didn't correct mistakes that he found in the text at the proofreading stage, because he felt that they were part of the art form. He kind of thought the mistakes were actually adding to it.
Nothing is ever going to be perfect. You know, I have certainly read books where the editing is dreadful or non-existent.
It's quite interesting for me because I'm dyslexic. Basically I have someone who works for me who edits my stuff, which is brilliant, but she is not technical. She does an amazing job and we work around that quite well. But every so often there'll be something where she's changed something, and I've used a word in a particular way, knowing my particular audience for that particular publication, and she's changed it in a way that actually changes the sense, even though grammatically she's right. So it is interesting, and these things will slip through. There's also an absolute rule from my father-in-law, who used to run a What's On magazine in Spain.
He used to say that you'd stand in front of the printing press and take the first magazine off the press, and the first thing you would see would be a typo. It was guaranteed. There was always one; it always slipped through, no matter how many times you'd checked. And there's something about reading it in a different context: you suddenly spot it. I'm sure with your novels and things, you've had that experience.
Anne Currie (19:07)
All the time. It's unavoidable.
I also came from a background of not doing a computer science degree. I did a physics degree, so I had to learn programming on the job. In the early days, you had books. Then you had open source code, so you could actually look at other people's code, which was incredibly useful.
Charles Humble (19:55)
Yeah, when that started to be a proper thing, that was hugely valuable, I think.
Anne Currie (20:00)
Yeah, because one of the things in tech is, it's like the William Gibson quote: the future is already here, it's just not evenly distributed. There's a lot of people doing amazing stuff. A lot of progress in tech happens through painful trial and error, and you don't want everyone to have to do that painful trial and error. So you really want to do a bit of cross-pollination: find out what other people are doing, and then do that. So a lot of it is about how you share that knowledge. Is it magazines? Is it books? Is it talking at conferences? The modern way, of course, is the podcast.
Charles Humble (20:38)
I think you need a mixture of all of these things, really, is the answer. I mean, I don't mean that you, one individual human, necessarily need to do all of these things, but I think collectively we need a lot of them.
I've sort of built the whole of the second half of my career around this idea that we need to get better at sharing our knowledge with ourselves as an industry, because that's how we progress, right? That's been a mantra of mine for, I'm going to say, at least a decade, probably longer. But I'm increasingly interested in another thing, which is that I also think we need to get much better as an industry at communicating what we're doing to people who are not technical.
And the reason I feel very strongly about that is because I feel increasingly that the work we're doing has an outsize influence on people's lives, on society as a whole, in a way that was pretty unthinkable to me 20 or 30 years ago. And some of that's a question of scale. Some of it is just that software is so pervasive now. When I started working professionally as a programmer, if I built an app for a company, maybe 40 or 50 people used it. If 40 or 50 people are massively inconvenienced, it's annoying, but it's hardly a disaster. Well, now even my little noddy work website has several thousand people on it every day. It's an order of scale difference, and that matters. But the other thing is that a lot of the systems we are building now, particularly with AI, with large language models and other large transformer models, will dramatically and materially change society in weird and unpredictable ways. And I feel like, as an industry, we need to get much better at thinking that through and then communicating it to the people affected, rather than just assuming everyone's fine with it all the time, which has kind of been the industry default for a long time.
No, of course it's fine for me to like take all of the intellectual property of every creative on the planet and just use that to build my plagiarism machine. Why would you have a problem with that? Genuinely, I've had these conversations with people and they're genuinely surprised that other people have a different perspective on it.
There's a Leslie Lamport quote which I've found myself thinking about a lot recently, which is: if you're not writing, you only think you're thinking. Which I like a lot. Leslie Lamport is a Turing Award winner. He's probably best known for Lamport clocks, which were essentially a way of ordering events in a distributed system. I met him, actually. He did a talk at, I think it was, QCon New York, and I interviewed him afterwards. I kind of ambushed him and asked if I could interview him, and he was very generous, actually. But he's also done a lot around formal proofs and that kind of thing. And he's just a really lovely, really interesting, proper old white-beard programmer type. But yes, I really like that quote. He has a few variations on it, but that's the one I really like. So I think there's a lot in that. But I also feel very strongly, as I say, that we need to get better at talking about what we're doing, and thinking about: is the world that we are making the world we want to be making? Is this really the world we want? And if it isn't, are there better applications of this technology than the things we're applying it to? I think we are in danger of going into quite a dark place with tech. And that's hard for me, because technology has been a huge part of my whole life. I've been incredibly excited about computers since I was in short trousers, which, as you can probably tell, was a while ago. So it's quite hard for me to be reflecting on this, but I think there's a real danger that we're using computers to do things that aren't really for the good of anybody, other than a few very, very rich white people who are getting richer. But beyond that, I think you can absolutely make an argument that technology has backed itself into a very dark corner. And I think part of our problem is we're not very good at talking to people about what we're doing and listening to what they think about it.
A thing that I'm thinking about a lot at the moment is how do we involve the people who are affected more in this conversation.
Anne Currie (25:52)
We both worked together on actually even the step before that, when we were working on QCon London. We ran the first track at QCon, and I reckon the first track at any major tech conference, which was a tech ethics track, where we got people to talk about that.
Charles Humble (26:10)
Yes. Well, you were hugely instrumental in that, of course. I mean, it was very much your idea. I think I really just smiled and nodded and said, yes, what a good idea. But we were very ahead at the time on talking about ethics. And actually, we did a bunch on InfoQ. I used to get "angry of Tunbridge Wells" emails about our ethics coverage, so we were clearly upsetting people, which is always a good sign as a journalist. You're not doing your job if someone isn't cross with you. But yeah, we were quite early. And as I say, you were very instrumental in a lot of that. That was kind of pre the large language model thing.
Anne Currie (26:50)
It was, yes. Because the closing keynote we had on our track, the first time we did it at QCon, was by a chap called Steve Worswick. Did you see that?
Charles Humble (27:08)
Yeah.
Anne Currie (27:19)
Steve Worswick was basically the earliest successful chatbot developer. He developed a chatbot called Kuki; this was before LLMs existed. Kuki was written in something called AIML, which stood for Artificial Intelligence Markup Language. It was like a choose-your-own-adventure story, a kind of animated book.
And he kept winning the Loebner Prize for the best human-emulating chatbot. He won year upon year upon year. And so he gave a talk about how to write a chatbot that was good, that you felt proud to have written, that was supportive of people, that was a nice chatbot, that didn't tell people to kill themselves. And he said it was actually a really difficult thing to write a chatbot that interacted like a human but didn't agree with things like "I think I should kill myself". You actually had to learn and plan and check and make sure that wasn't happening. I would say when you listen to ChatGPT now, the way it talks is very like Steve Worswick's chatbots. It's very like the way that he defined chatbots in the early days.
He was strangely influential in a way that we do not appreciate these days. Very nice chap, working out of his back bedroom in Goole, producing the world's top chatbot at the time.
Charles Humble (28:54)
I've had some very interesting conversations with, you know, psychologists and psychiatrists and things around the design patterns of chatbots and large language models. And those are conversations that get quite dark quite quickly. But there is absolutely a problem with designing something that constantly wants you to think that it is not a machine.
That does very, very bad things to a proportion of humans. And again, it wouldn't much matter if you were talking about half a dozen people, but you're not talking about half a dozen people. You know, I forget what ChatGPT's usage figures are.
Anne Currie (29:45)
It's just a little bit under a billion.
Charles Humble (29:49)
Yeah, right. So roughly one tenth of the world's population is using ChatGPT at least part of the time, right? So these systems have a very outsized impact. You know, there have been cases of people who have committed suicide as a result. "As a result" is a bad choice of words, so go with me here: suicide is complicated, right? Cause and effect is really hard to say. But people who were encouraged to do so through their interactions with a large language model. That's fairly well documented at this point. There are an awful lot of examples of people who have gone into some form of psychosis as a result of interacting with large language models. This stuff is quite dangerous, and very poorly understood by an awful lot of people. I have this idea that one of the problems with large language models is that they break a whole lot of our own heuristics as humans. Because a large language model sounds very human, we ascribe intelligence to it that it doesn't exhibit, because we use language as a shortcut for assessing how smart somebody is. Right?
Anne Currie (31:18)
Oddly enough, I've heard a similar comment made about British people, that they sound a lot more intelligent than they are.
Charles Humble (31:28)
Yeah, it's awful, but we do it all the time. As an English person, if someone speaks to you in a thick East End accent, like a strong East End accent, you probably instinctively think they're a bit less smart than they actually are. Or, you know, someone who speaks differently to how you speak, that's probably a better way to put it, right? There's a bunch of, frankly, prejudices there. But I think the converse is also true: because the large language model sounds quite human, when all it is is a token predictor, right, we think it's much cleverer than it is. We think it's much cleverer than our house cat, even though our house cat is probably exhibiting far more intelligent behaviors than your large language model is; it just doesn't speak to you in English. So I think there's something in that.
I know as an editor, if someone sends me AI-generated copy, which these days I generally refuse to edit. But, you know, from time to time one has to. And it is the most joyless thing in the world to edit. And the reason it's the most joyless thing in the world to edit is partly because it's written really badly. Like, it just is very poor writing. But also because you have to fact-check every single fact.
While with a human I can usually have a pretty good idea of whether I need to fact-check or not, with a large language model I have no idea. So it breaks all of my heuristics. I have to fact-check everything. It's also one of those things where it's all surface, so it's exactly the opposite of what you want in a piece of writing.
It looks great until you think. And as soon as you start thinking, you're like, this is awful. It's just hundreds and hundreds of words signifying nothing. So it's this sort of weird combination: as I say, if you don't appreciate an art form, it does a very, very good impression of something that sounds quite convincing.
Hannah, who I do AI for the Rest of Us with, did a bit of music that she had AI-generated, and she was quite excited by it, and I thought it was awful. And she won't mind me saying that, because we made quite a joke about it at the time. But to me, it doesn't sound like music. But then that's because I love music, and I spend a bunch of time thinking about how music works. And so it doesn't sound like music at all to me. It sounds like the sort of thing that somebody who didn't know what music sounded like, but had read about it in a book, might produce, which isn't that surprising, because that's essentially what it is.
And with copy, what you end up with is these vast tracts of text. One of the things you do a lot as an editor is try to find the nugget, and you end up redlining the entire bloody thing because there's just nothing there. But when you first look at it, it looks okay. Maybe that's totally fine if you're doing listicles for BuzzFeed or something; it probably doesn't really matter very much, right? You could probably use a large language model to generate those. But for anything where the meaning is important...
I find it the very opposite of helpful. Now, as I said, I'm dyslexic, and it is the world's best spell checker. My goodness. So if I'm doing email, which used to be one of my absolute dreads, because my typos in emails are just horrendous and there's never time to go and read it afterwards, chucking it into Claude and saying "any typos?" is wonderful. Using it for doing transcriptions of interviews, if I'm in a hurry, is quite useful up to a point. But using it to actually write? Just, no.
Anne Currie (35:43)
So ironically, I love AI and use it for all kinds of things, including this podcast: cleaning up the audio and all that. It's amazing. And I do like chatting to ChatGPT. But ironically, the worst thing is its writing. It's always saying, shall I write some text for you? Oh no, because it's so awful. I can't bear it. It's so wordy. You can say that in like a tenth of what it's taken to say it, you know.
So Charles, you have often been my editor over the years. I won't say you're my favourite editor, because some of the other editors might be listening, and I love you all.
I've done quite a lot of editing myself, and as you say, I do a lot of editing of my own stuff, and it's always too wordy at first. That's the main fault of everything: it's always too wordy. You're just asking, how can I make that clearer? How can I make it so that it's not six clauses when one clause would do, so you don't have to read an enormous sentence before you can understand its meaning? You try to make it so you can understand the meaning as you're going along, that kind of thing. It's very easy to say effectively the same thing three times in three consecutive sentences. That's very, very common. In fact, for anybody who has not done a heavy amount of editing: whenever you write your novel or your book, you'll find that in every paragraph you're writing the same thing three times, and you have to pick one of the three and keep that. But ChatGPT, much as I like using it, never picks one.
Charles Humble (37:37)
And it'll write the three, and then it'll probably do like five layers of subclauses as well. People talk about the em dash a lot, which drives me mad because it's a bit of punctuation I've actually used for years. And suddenly everyone thinks you've used AI because you've got an em dash in there, which is utter nonsense.
Charles Humble (38:06)
But the reason it does it is because it's basically using it for yet another subclause. It's, you know, there's only so many commas you can have, so I'll em dash it. So you've got these paragraph-long gobs of endless, endless subclauses.
Anne Currie (38:13)
Well, the thing is that AI has a huge context window, so it can hold an enormous sentence with loads and loads of subclauses in its head. And one of the things you learn when you want to write simple stuff that people understand is that when you write, you tend to have a lot of subclauses, and then you have to go and unwind the subclauses so that you have a maximum of three per sentence. And even then that's a bit of a push. So that you can read a sentence as you go; you don't have to get all the way to the end of the sentence before you understand what it meant.
Charles Humble (38:56)
I think there's another interesting aspect to that. One of the things that's interesting when you're doing technical writing particularly is the temptation always to use very technical language to describe something. And that's because that's easy, right? Because that's the language that you instinctively know.
And quite often it's actually the wrong way to do it. But to actually step back and go, well, what do I mean when I talk about X or Y? What does that term actually mean? And giving the context is really interesting. Some of that's understanding who your audience is, but I would argue even with a technical audience, a lot of the time it's very easy to fall into the lazy trope of using a lot of very complicated language, you know, calling a spade an earth-inverting horticultural implement, that kind of idea. It's very easy to fall into that trap, where what you actually want to do is get back to: what is the simplest, cleanest, most concise way I can get this point across without using masses and masses of jargon?
Anne Currie (40:02)
Yes, yes indeed. And not expecting that what you wrote down straight out of your head will do: you are not James Joyce, so don't stick to your stream-of-consciousness writing. It's very difficult to parse.
Charles Humble (40:31)
I always say to people, genuinely, I've been doing this a long time now and I don't let people read my first drafts, like, ever. I draft everything in Pages because I have a Mac, and one of the reasons is that it's on my machine and no one else will ever see it. Only at the point when it's not awful will I put it into Google Docs or wherever it's going to the publisher. There is a draft before the draft that anyone sees. I just throw words at a screen until I've got words, and then I'll go and fix it up. And sometimes that fixing up is, oh, whole paragraphs or whole sections need to go or move; it's not just striking out words. Quite often I'll throw away whole sections of a thing: oh, well, that was an interesting digression, there's absolutely no purpose in it being there, should we get rid of it? And sometimes those things are hard. Sometimes you need to go and have a cup of tea afterwards and pull yourself together, because sometimes it's something you're really proud of and you want it to stay, but it doesn't add anything.
Anne Currie (41:45)
Yeah, I think we're all guilty of that to a certain extent. If you're writing a novel, you can't show the first three drafts to anyone, and after that you can only show it to the person who's used to looking at stuff and going, that's rubbish, this is going to have to go, I didn't understand this. Before it goes to someone you have less psychological safety with, it's at least six drafts, I would say, if it's a novel or a longer piece of writing.
Charles Humble (42:21)
There's an interesting parallel for me with the music that I do. There is always a point where we will play a new piece of music to somebody. They will be in the studio, it will be someone we know, who we admire, and we'll sit there and play it to them. And it's really interesting, because the point of that isn't what they tell you. The point of it is how you hear it when someone else is listening, which is a completely different thing from how you hear it when no one is listening. And it's so, so interesting. Suddenly you go, God, those eight bars are really boring. Or that drum fill that I was so happy with doesn't work at all. Or what on earth is that line doing? All these things where you just think, I'm embarrassed on my own behalf because that was so bad. Do you know what I mean?
That stuff never surfaces if you don't, at some point, play it to an audience in whatever way that is, and that's the only way. With a novel, it's someone reading it and giving you feedback, right? For other things, it might be a slightly more interactive way of doing it. But it's the same thing. You need that feedback. That's how we learn. That's why pair programming is such a good idea: same thing, because you get feedback and you learn from other people.
Anne Currie (43:34)
Yes. Yeah.
And that is kind of the difference of being at a conference, isn't it? Although you're only kind of taking away the vibes; it's kind of vibe learning, isn't it? Does everyone look interested in this? Is everybody horrified by what they're hearing? Are they interested by what they're hearing? I mean, I've given a lot of controversial talks where I've talked about something that you think, I know that this isn't something that folks normally hear. Maybe it's about ethics, maybe it's about new ideas in code efficiency or whatever, but things that people are going to be a bit shocked to hear, or it's not what they normally hear.
And you look out at people and they're looking at you totally blank-faced, and you think, I have no idea how this is landing at all. This was a total disaster. And then everybody goes away and no one comes up to you afterwards, which is a bad sign. I've done that before and thought, oh, that was a disastrous talk. And then after the next coffee break, people have come up to me and said, I enjoyed your talk. And I thought, you went away at the last coffee break, and people talked about whether or not that was an acceptable talk to like, and they decided that it was OK. And only after that did they come back to say, no, I enjoyed the talk. But I had to check whether I really did enjoy the talk or not.
Charles Humble (44:48)
Interesting. I wonder whether, if something is genuinely new, particularly if it's new and a bit uncomfortable, we all need a bit of time to process it before we're ready to talk about it. I did a talk last year in Germany, and while I was doing it, I thought it was a total disaster. Like, tumbleweed. Nothing from the audience at all. And I don't know if that's because it was a German audience and I'm English, or I just wasn't reading the room well or whatever. But it was hard for me to do it, because I just felt like nothing was landing.
I have never had so many people come up and talk to me afterwards with really interesting questions and points of view; the interaction afterwards was amazing. But the bit in the room, I could feel my confidence kind of draining out through the bottom of my feet, because no one's laughing, no one's even smiling. There's nothing happening at all. And you think, God, have I totally lost my touch? It's terrifying.
It is interesting when you go around and do international conferences and speak to different audiences: certain things that will land in one location won't land in another. You and I both talk quite a lot about sustainability, and there are people who are still really uncomfortable with that conversation. I did a talk on AI and sustainability for YOW! at the back end of last year, out in Australia. It was one of the best conference experiences I've ever had. Wonderful people, brilliant organization, absolutely fantastic. The way YOW! works, they bring a bunch of international speakers in and you do three cities, so you sort of travel around like a road trip. It's great fun. Three very different audiences: you do Melbourne, Brisbane, Sydney. And I think it was Brisbane where there were some people who were really cross with me afterwards. Like, properly cross.
Anne Currie (47:11)
Really?
Charles Humble (47:13)
Not in a nasty way, but just like, you've got no idea how important mining is to the Australian economy, and we can't back away from fossil fuels for all of these reasons. That was essentially the gist of it; I'm slightly paraphrasing a much longer conversation, obviously. But it's been a while. I get climate change deniers at every talk I do, more or less.
But that's the first time in a while I've had someone where it wasn't a denial of climate change. They were utterly accepting that what I was saying was probably scientifically fair. They were just saying it isn't realistic. And actually, it's really interesting. If you look at Electricity Maps, if you look at the electricity mix in Australia,
Anne Currie (47:54)
Yeah.
Charles Humble (48:02)
it's not great from an environmental point of view. You'll know one of our standard lines is: you find a green location and run there. And that's really hard in Australia. Essentially your choice is New Zealand or possibly Tasmania, which isn't, as it turns out, exactly on the doorstep. But the energy mix in Australia surprised me, because I imagined, there's a lot of sun, surely you'll have lots of solar panels; that would seem like a really logical fit.
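[Editor's note: the "find a green location and run there" idea can be sketched as choosing a deployment region by grid carbon intensity. This is a minimal illustration, not anything shown on the podcast; the region names and intensity figures below are made-up placeholder values, and a real system would pull live figures from a source such as the Electricity Maps API.]

```python
# Minimal sketch of carbon-aware region selection: run a movable
# workload wherever the grid is greenest right now. The numbers are
# illustrative placeholders in gCO2e per kWh, not real measurements.
REGION_INTENSITY = {
    "eu-north-1": 30,       # hydro-heavy Nordic grid (placeholder)
    "eu-west-2": 210,       # UK mixed grid (placeholder)
    "ap-southeast-2": 550,  # coal-heavy Australian grid (placeholder)
}

def greenest_region(intensity_by_region):
    """Return the region whose grid currently has the lowest carbon intensity."""
    return min(intensity_by_region, key=intensity_by_region.get)

if __name__ == "__main__":
    # With the snapshot above, the Nordic region wins by a wide margin.
    print(greenest_region(REGION_INTENSITY))
```

As Charles notes, the catch in Australia is that every nearby option in a table like this has a high number, which is why "run somewhere greener" is much easier advice in Europe.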
Anne Currie (48:12)
Yeah.
Yeah, well, they've gone quite solar panel crazy, which is great stuff; they've made some really amazing improvements in solar panels.
Charles Humble (48:36)
It is happening. I mean, the shift is mainly because of Chinese manufacturing of solar panels, and also the efficiency. So two things have happened. One, they've got way cheaper. Two, they've got way more efficient. It's a bit like, if you remember, going from a 286 to a 386 to a 486 and, you know, Pentiums and whatever. It's a bit like that at the moment with solar panels. Every year you look at them and go, sorry, how much better? Like, we did the solar panels here about
Anne Currie (49:04)
Yeah.
Charles Humble (49:07)
eight or nine years ago. And we've got quite a big array, I think we have 16 panels or something like that, out in the garden. And if we did them now, I could probably run the entire house on them all year round. I certainly can't do that with the ones we've got. They've got a lot better, is my point, and a lot cheaper.
Anne Currie (49:23)
Well, it is interesting, because I've got solar panels that I put in, a bit like you. It was kind of like, are they good enough yet? Are they going to get better? At some point you have to go, I'm going now. So we put them in in 2022. We've got quite a lot, and they produce extraordinary amounts of power in the summer, and even in the spring and the autumn enough to run the house. But the winter,
Charles Humble (49:33)
Mmm.
Anne Currie (49:49)
it's still pretty much nothing, because it doesn't matter how many you've got; there's almost no power being generated.
Charles Humble (49:53)
Yes.
Yeah, yeah. I mean, obviously there are seasonal effects, and that's going to be a thing. You know, running the grid on 100% renewables is actually quite a hard problem, although it gets a lot easier when you start interconnecting grids. Again, you will know all of this, but there are places in Europe where it's sunny when it's not sunny in, say, Scotland. And we could run a cable and
Anne Currie (49:59)
Yeah.
Absolutely.
One day.
Charles Humble (50:23)
start sharing renewables more. And, you know, we do a lot of this already. It's just an extension of stuff we're doing anyway.
Anne Currie (50:31)
And wind is often a counterpoint to solar. So if you've got a whole load of wind as well, then you're doing pretty well.
Charles Humble (50:36)
Yes.
Yeah, yeah. And there are other things: you can have a bit of nuclear in the mix as well, which gives you a bit of baseload. Again, I know it's a bit controversial, but it is a carbon-free energy source. So there are a variety of ways you can do this that will work out. And as you and I are recording, I'm not sure how quickly this will actually appear, but as you and I are recording, Donald Trump's war in Iran is ongoing and
we are seeing a massive spike in the prices of fossil fuels. There is a bizarre narrative in the UK right-wing media that what this means is we need more of our own fossil fuels, which is such a deep misunderstanding of both the problem and fossil fuels. Having our own doesn't make any difference to this problem, because we sell them on the international market
Anne Currie (51:07)
Yeah.
Charles Humble (51:32)
at the market rate. Also, there isn't a lot left in our bit of the North Sea, and so on. What this is telling you is we need to get the heck off fossil fuels really quickly. And actually, the UK has made good progress in that regard, which was a surprise to me; not that long ago, I didn't think that was going to happen. Other countries have done better, but we've done pretty well.
Anne Currie (51:41)
Yeah.
Charles Humble (52:00)
We have a much better energy mix than we did not that long ago, at least for things like electricity, which for our industry is the key thing.
Anne Currie (52:06)
That's true.
And if listeners to this podcast want to read someone who we both approve of massively when it comes to communicating all of this stuff, it is Hannah Ritchie.
Charles Humble (52:20)
Yeah, I recommend her book Not the End of the World to literally everybody. It comes up in every course that I do. I joke, but it is true. When I was a teenager and thinking about what I wanted to do, one of the things I thought about doing was climate science. I thought I might go and be a climate scientist. And then I read lots about climate science and got quite depressed, to be truthful. And that's why I ended up doing an English literature degree, more or less. It's not quite true, but it's sort of half true.
Anne Currie (52:25)
Yeah.
Charles Humble (52:49)
The other reason I did an English literature degree is because I had an English teacher who told me I shouldn't, because I was dyslexic. And I kind of hated him. So I went and read English literature mostly to annoy him, I think. But both of these things are kind of true. And I always joke that if I'd read Hannah Ritchie's book as a teenager, I might well have ended up a climate scientist. It's wonderful. I really rate her. Some of the things that she says around large language models, though, I don't agree with.
Anne Currie (52:55)
you
Yeah.
Charles Humble (53:17)
Or, more accurately, I think they're slightly lacking in nuance. I know where she's coming from, and the argument, which is true, is that your use of a large language model doesn't affect your individual carbon footprint very much, and so you shouldn't worry about using a large language model as an individual human. And that's true, 100% true. But I also think it's quite unhelpful, because I think the problem with large language models is one of scale.
Anne Currie (53:44)
Yeah.
Charles Humble (53:45)
That's a good one. So again, it comes back to the scale thing, right? Actually, there are two problems with it. One, she's averaging out stuff, and a single query to a large language model is so inherently unpredictable in terms of how much energy it actually uses that the averaging is unhelpful. But the other thing is, I think there's a danger that people who read what she's saying miss the point: the problem isn't your use. It's building out the infrastructure at such speed that's the problem.
Anne Currie (54:21)
Yeah. So I'm going to wrap us up somewhat on that one, in that I've got quite a few speakers coming on to talk about that very subject, and I may well have you back to talk about it too. There's a lot to talk about with green AI and what we do.
Charles Humble (54:39)
Yes, there so is. A lot of my day job at the moment is doing AI-related stuff, and it's fascinating. It's like, you know, go where the work is, right?
Anne Currie (54:47)
Oh yeah. Well, unsurprisingly, isn't it? Because it's the big thing.
Indeed, yeah. And, you know, it's a complicated subject, and that's the whole thing with AI: it's not all AI. Some AI is so much more efficiently targeted to the task than others. So if you go in thinking it's all or nothing, AI is good or bad, you're a doomer in some sense.
Charles Humble (55:13)
Absolutely, yes.
Anne Currie (55:22)
It kind of makes you think, well, I might as well just go for it because it's all bad. And it's not, you know. It could be bad or it could be fine, depending on how you choose to use it.
Charles Humble (55:30)
There's a really interesting thing there. It's a point I tend to make more at universities; I do a certain amount of talking at universities, and I make it more in university talks than elsewhere. But it is true that one of the really unfortunate things that happened to the climate movement was that a bunch of people who don't understand psychology got involved in the mass messaging. And the messaging essentially says, we're all going to die. It turns out that when you give that message to people repeatedly, they don't go, this is a thing we've got to fix. They go, well, maybe I'll be too drunk to notice. And it's really unfortunate, because it's not true: this is a bunch of problems we can solve. But it's why you end up with the very well-meaning Just Stop Oil people throwing tomato soup at Van Gogh paintings. Like that's going to help.
Anne Currie (55:58)
Yeah.
Yeah.
No.
Yeah.
Yeah.
Charles Humble (56:25)
You know, the way you get out of this hole is by doing really good engineering and building more efficient solar panels and working out how to store electricity, which is a problem we haven't had to solve before. It's all engineering and thinking, and it's all solvable, right? We're really good at solving this stuff. Well, I mean, some of it's really hard, like feeding people; food is a real problem, right? But my basic point comes back to the communication thing.
Anne Currie (56:30)
Yeah. Yeah.
Yes.
Yes.
Charles Humble (56:55)
What you want to say to people about green, and it is true, is: this is not a doomsday scenario. It could be very bad, right? But it doesn't have to be; it's only as bad as we choose to make it. And actually, do you know what? We're making really good progress. Genuinely, because as I said, I'd gone away from the field. I hadn't really looked at it for 30 years, because I got so depressed.
And then I only really came back to it about a decade ago, and I was stunned at how much progress had been made. I genuinely had no idea. I kept reading things and going, I was wrong. I was totally wrong. I was wrong about that as well. Like the ozone hole: we signed the Montreal Protocol and we stopped releasing ozone-depleting gases. Genuinely, I didn't know that. Because if you watched the news, you wouldn't.
Anne Currie (57:29)
Yeah. Yeah.
Yeah.
Charles Humble (57:51)
Right? Because the news told you about the hole in the ozone layer, and then it stopped telling you there was a hole in the ozone layer. I mean, there is still a hole in the ozone layer, to be clear. But what it didn't say is, by the way, that problem isn't getting any worse, and over time it is, in fact, getting better. So yeah, I think it's just an interesting example about communication: so often we
Anne Currie (57:52)
No, because it's never important.
Yeah. Yeah.
Charles Humble (58:22)
use the wrong tool to communicate something, because we don't think carefully enough about what it is we're trying to communicate, or what effect we want to have.
Anne Currie (58:37)
Absolutely. Now, I'm probably going to wrap us up at about this point, because we've now talked for an hour and it's been a very topic- and idea- and concept-dense conversation, which I'm not going to be able to trim much out of. So it's going to be a full hour. But I've really enjoyed it, so I'm hoping that I will be able to persuade you to come back again.
Charles Humble (58:42)
Thank you.
Sorry, I do rabbit on. It's terrible.
I would be delighted to come back, absolutely. Yeah,
there's so much more we could talk about. So that would be wonderful, anytime.
Anne Currie (59:11)
So I'm just going to let you know, there are going to be show notes for this talk, and hopefully I'll put links and information in the show notes so you'll be able to see what we've been talking about, look it up, read it, and do your own fact-checking. Because, as Charles pointed out, a podcast is a little bit like a conference talk: most of the time when you're listening to a podcast, there's no fact-checking going on. So do bear that in mind. But anyway, thank you everybody who's listening. Thank you again for listening to the Asynchronous and Unreliable podcast, and hopefully I will catch you again at the next recording. So goodbye from me, and...
Charles Humble (59:56)
Yes, goodbye from me, and thank you so much. Always lovely to chat to you, and that was huge fun. I hope it's as fun to listen to as it was to record. Yeah, brilliant. All right, thank you so much. Bye.
Anne Currie (1:00:00)
Hopefully so.
Bye.
Anne Currie (1:00:13)
Thank you very much for listening to Asynchronous and Unreliable podcast. If you enjoyed the show and you want us to create more content, please do show your support by hitting the subscribe button below. It really does make a difference. Thank you very much.