Generative AI - what you can use it for today and what you can't
Dive into the AI Frontier: Join AI enthusiasts and industry leaders as they unravel the intricate relationship between artificial intelligence, creativity, and human behaviour. Explore unique insights on AI's impact, potential, and the ethical quandaries it presents. Perfect for those keen to understand AI's evolving role in our world.
Guests
Maz Nadjm, Global Social Media Practice and Generative AI Head at TCS
Giorgia Guantario, Content Director for UpSlide
Nick Mason, CEO and Founder for Turtl
Nataliya Tkachenko, ESG Data & Generative AI Ethics Lead for Lloyds Bank
Chris Zacharia, Content Writer for Atkins
Transcript
Tom - [00:00 - 00:50]
Alright. Okay. Well everybody, welcome to our podcast today. Really delighted to have you all here. We only have one hour and we have such a lot of exciting stuff to discuss, so I'm pretty sure it's going to be a really nice, crammed hour. As always, we're going to try and talk about anecdotes and practical, precise examples rather than airy-fairy stuff, because I think that's probably what people get most value out of. And as always, I'm going to try and talk as little as possible to give you space to debate and talk amongst yourselves, because I'm certainly not the one people want to hear from here. So, just to kick us off, let's do a round of introductions. Maz, perhaps you would start us out: just give us a sentence on yourself.
Maz - [00:50 - 01:12]
I am Maz Nadjm. I'm part of TCS Interactive, where I lead the social media practice globally. I'm a learner in generative AI, and a keen, passionate person about it. So I look forward to the next hour, listening to the other folks on the call and hearing what their experiences with it have been. That's me.
Tom - [01:12 - 01:15]
So Chris, perhaps you would give us one sentence on yourself.
Chris - [01:15 - 01:30]
Sure. I'm Chris Zacharia. I'm a writer with a background in advertising and journalism, and for the past few years I've been running my own small creative agency. I've been using generative AI with my clients for the past three or four months.
Tom - [01:30 - 01:33]
Fabulous. Thanks, Chris. Nataliya?
Nataliya - [01:33 - 02:06]
Hi, hi everyone. Thanks for the invite. I'm Nataliya Tkachenko. I have a PhD in large language models, funnily enough, which I obtained well before large language models were known as large language models; that is, in 2019. I'm now a visiting researcher at the Alan Turing Institute for artificial intelligence and data science, and I also hold several visiting research positions at Cambridge Judge Business School and the Oxford University AI Ethics Institute. Looking forward to the debate.
Tom - [02:06 - 02:10]
Thank you so much, Nataliya. Okay. Prash, what's your sentence on yourself?
Prash - [02:10 - 02:20]
Hi everyone. My name's Prash. I'm the CTO at a act, and my background is in machine learning and software engineering.
Tom - [02:20 - 02:23]
Thank you very much. And Nick?
Nick - [02:23 - 02:52]
Yes, hello everyone. Nick Mason, good to meet you all. I am the co-founder and CEO of Turtl. We're a content platform, and so our interest, well, my personal interest in AI, is that for the last 10 years I've been very interested in how it plays out both technically and also socially, or philosophically I guess. And then obviously we have an interest in how AI, and particularly generative AI, is going to play out in our space, the content space. So that's me.
Tom - [02:52 - 02:55]
Thank you, Nick. And Giorgia.
Giorgia - [02:55 - 03:15]
Yeah, I'm Giorgia. I'm the content director at UpSlide; we're a SaaS company. My background is in journalism, specifically B2B tech enterprise journalism, so I've been talking about AI for a while in my career. And similarly to Nick, I'm very interested in seeing how generative AI can help in the content space.
Tom - [03:15 - 04:30]
Okay. So today we're going to talk about generative AI. The three areas of the agenda are these. First, what can we use generative AI for today that we're not generally using it for? What are the capabilities that are somewhat hidden, things it can already do really well, but that we, the general public, usually aren't asking it to do just because we don't know it can do them? And in parallel, what are the things we keep asking it to do, keep bashing it over the head with, that it's just not ready to do today? The second thing is, I think it would be very interesting to think about brands. So Chris, I'm really happy you're here, because you can answer this question for us: is it possible to create a brand purely with generative AI? I think it's going to be really interesting to think about whether we can bash together images or concepts or prompts to create brand identities in something like Midjourney or a Stable Diffusion-based image model, and whether we can create a unique tone of voice with a large language model. So brand, I think, would be really interesting to explore. And then the last one is: should we be scared? So, I don't know.
Tom - [04:30 - 05:09]
We'll be able to ask Nataliya some interesting questions on whether we should be scared of it. Okay. So, let's see. Perhaps I will start by asking all of you, and perhaps Nataliya and Prash might have a perspective on this particularly, to tackle either of those first two questions. Either: what are the things that people on ChatGPT keep trying to get it to do, and keep failing to get it to do well? Or: what are the things that are just hidden, that we should be asking it to do, that people in general are not?
Nataliya - [05:09 - 06:45]
Yep. I can kick off, if you don't mind. So first of all, I'm kind of intrigued, but also slightly annoyed, about why we always talk about large language models only from the perspective of ChatGPT. Large language models are so much more than ChatGPT, and very often people equate the limitations, the technical and design limitations, of ChatGPT with deficiencies of large language models as an entire family. Mm-hmm. I'll give a very concrete example. It's a well-known fact that the data ChatGPT has been trained on cuts off at 2021. As a consequence, people will say: okay, it doesn't have recent information, and the deficiency of this model is that we cannot get the most recent answers. That is not the case at all for large language models as a family, because there are other ones which are perfectly capable of responding with much more recent information. This is something I think the public should be educated on, and we should really differentiate, and stop using "large language models" and "ChatGPT" as synonyms. Another thing I wanted to mention, if we scale out a little more, is that LLMs are also equated with foundation models as a family. Foundation models are a much broader trend than LLMs, obviously. And when we talk about LLMs, we only take one modality into account, text; but the future actually goes multimodal, because we need much broader context than text can provide.
Nataliya - [06:45 - 07:24]
And, for example, we already see those injections of audiovisual inputs and other types of modalities, which render LLMs and chatbots much more robust and stronger interlocutors than the simple chatbots of the current state. So I would answer that question by saying: what looks hidden in ChatGPT is a design deficiency of ChatGPT, and we shouldn't necessarily extend that to the other models. There are lots of things we can do with large language models; personally, I'm very, very excited. But I would like to give the others a chance to respond to this question as well.
Tom - [07:24 - 08:25]
Can I just ask one tiny follow-up question, Nataliya? I was accused at our last podcast recording of being obsessed with Midjourney, because all my examples around creative generative AI kept talking about the limitations of Midjourney. And someone said to me: you're obsessed with it; look at Automatic1111, which is basically a more capable deployment of Stable Diffusion. And I've since come across this app called Draw Things, which is a very bad title, but anyway, it's much better, and it can do stuff like inpainting and all sorts of more sophisticated, interesting things that allow you to create proper poses for ads and things, which Midjourney can't do. But in a similar vein: are there generative AI technologies that you wish people were talking about more, if they're just obsessed with ChatGPT? Are there other ones people should be exploring?
Nataliya - [08:25 - 09:08]
I think it's an interesting question as it's asked, whether people should be interested in more of them. I think it's an application-domain thing, so people will discover what they actually need, and it's very easy to discover these days anyway. I think the reason ChatGPT took off so widely and broadly is that people just like chatbots; this type of interaction with technology really suits them. The others, which seem undiscovered, are simply so because people are still learning about this technology. So prompting people to discover them is not necessarily a very ethical thing to do. I think people should just discover them by themselves, according to their needs. Okay, that would be my answer here. Yeah.
Tom - [09:08 - 09:16]
Fair enough. No, we don't want to make broad recommendations. Does anyone else have a perspective on those two questions I asked at the beginning?
Nick - [09:16 - 10:05]
I have a thought. Can you hear me okay? (Yeah, we can hear you fine.) And I feel like I'm going to be saying this a lot, which is: there are things that I think are true, but with Nataliya on the call I feel like I'm going to be asking her to tell me if they're urban myth or internet legend. The question around what we should be asking large language models to do, that we don't know they can do, is a really interesting one, because my understanding is that they do things we don't know they can do. My understanding is, for example, that large language models up to three years ago could pass degree-level chemistry, and we didn't know until we found out. Or that they could answer questions in French and Japanese; we'd just never thought to try. So by the very definition and design of these things, they're capable of stuff that, until we ask the question, we don't know about. It's sort of one of those unknowns... you get the prompt exactly right.
Tom - [10:05 - 10:09]
I mean, sometimes I see people doing a prompt and I think: I never would've thought to do it that way.
Nick - [10:09 - 10:30]
Yeah, yeah. And so that, I think, is a really interesting thing: because they haven't been designed to answer specific questions, they've just been designed to do what large language models do, it's sort of unknowable. Which I think is interesting and curious, and very different to any other software paradigm, or knowledge paradigm, that we've had previously. Which is really exciting and also slightly terrifying.
Maz - [10:30 - 11:38]
I have a thought on that. The whole concept of prompting and prompt engineering: first of all, how amazing that you need a human brain to go back to it so that it can come back with intelligent answers. A human brain needs to come up with what prompt you should use in order to get the best answer. So hey, for humanity, we're still in there, and the robots haven't taken over. When you go through the prompts and ask the questions, the way individuals have outlined them, I think it's surprising. I think it's a journey for us to learn what you can actually get out of it. The brain is there for us to figure it out, and curiosity is a good thing. But nonetheless, I love the fact that you can see how different people have tried it and what they've got. Like the first person who did a recipe for an omelette: how did they come up with that? Do you know what I mean? It's the same with generative AI. Once you put it in there for that specific good benefit... I love that part. Mm-hmm.
Chris - [11:38 - 12:53]
Hmm. I think there's something that connects Maz's point and the point that Nick made, which is this idea of a human interface. In the early days of computing, usage exploded only when software translated code into a user-friendly interface. It was the same in the early days of the internet, which had been around for a long time; it was only when web browsers made navigation straightforward that widespread adoption followed. Suddenly you could see pictures of cats, you could interact with it, you could have cat day and all your other cute-animal needs fulfilled. People suddenly get it when there's an interface they can interact with, and people love interacting with something that gives them a chance to talk, express themselves and connect with others. So LLMs, machine learning: it's been around for a long time, but because there wasn't an easy-to-use interface, adoption didn't grow. Then suddenly last November ChatGPT came out, and you have this explosion of uses, because suddenly people get it. They can see how they can use it and benefit from it. And I think that's a really key moment. This is where we get the lift-off stage, and it's hard to anticipate what use cases might emerge.
Nataliya - [12:53 - 14:13]
Yeah, absolutely agree. And that's such a fascinating point, Chris, about the window into the world of LLMs. That's exactly right: the chatbot is the interface, the window through which we can talk to LLMs. But as for what LLMs can do, as a "universe of words", as they call them, there are lots of things. The most exciting one I've come across recently is the potential to even talk to other species. On the same principle used to train LLMs, you can train a model on another species' language, or communication, or signs of communication. I'm a biologist by background, an ecologist, a computational one, so I'm quite interested in biodiversity and the restoration of biodiversity. We are in a massive planetary crisis of species decline, and we need to restore them; we need to prioritize which species we restore, and we attach human values to restoration of the natural world. But if we can use the principles of large language models and create models of a certain type which enable us to have a glimpse into what the natural world talks about, how it actually feeds back on our interactions with it, I think that would be a fascinating application.
Tom - [14:13 - 14:39]
I've got this app called BirdNET, which is pretty good. If you come across BirdNET, it's an MIT project, I think; you can record birdsong and it will then tell you which species of bird it is. But yeah, if you could somehow have a multimodal... multimodal, I think what you mean by that is that it can have multiple different types of input, right? So you could have audio input, like the sound of birdsong, and an image, and text, and whatever. Then you could potentially train it to translate what the bird is saying.
Nick - [14:39 - 15:27]
I think that's the part of this that I find really interesting. When I first heard about large language models, my thought, and I don't know whether other people thought the same way, was that they're to do with text. But the point that really opened my mind is: you can design or create a large language model around anything that you can express as a language, which almost by definition is anything. Anything you can express as a language, you can build a model around. And that to me was the real aha moment around where this stuff can go, whether it's images or bird sounds or anything. I can see Nataliya nodding, which is good; okay, I've not been fed duff information. That to me is the really exciting thing: anything you can express as language, which is pretty much anything, you can build one of these models around. Other galaxies, perhaps; we'll be talking to our extraterrestrials soon, who knows.
Prash - [15:27 - 15:28]
Right, right.
Nick - [15:28 - 15:29]
It's nuts.
Tom - [15:29 - 15:49]
Yeah. You try and feed it the language of the stars, or patterns of... yeah. Are there any other things, Prash or Giorgia, that you've seen people try to make it do that you think it's just not ready for yet? Or things that you've found people doing where you think: why isn't everyone doing this?
Prash - [15:49 - 17:19]
I just thought it was very interesting listening to what everyone was saying; I had a few thoughts on some of those things. I found it interesting that Maz was saying we need a human to prompt these systems. I found it ironic, because actually you don't. There are technologies like AutoGPT that are basically intelligent agents where you can use GPT to prompt itself, or you can use ChatGPT to create prompts for Midjourney, for example. You can ask it to create the most likely successful prompt to generate the kind of image that you want. So I'm a little more reticent as to whether humans are as necessary as we perhaps think we are. I don't know. But no, I think it's very interesting. I think there's a lot that language models and ChatGPT can't do. Complex reasoning in general: it can kind of mimic it, but in actuality it will fail if you start to give it quite complex logic tasks; even basic arithmetic it can sometimes fail on. And they get around that by allowing you to have plugins. ChatGPT can use plugins so that you can call out to a calculator: rather than getting the language model to try and do the arithmetic, you get it to use a plugin to do the arithmetic, and you figure out what it's good at and what it's not good at. And to take Nataliya's point about the famous September 2021 cutoff: there are ways around that as well.
Prash - [17:19 - 18:19]
You can use a web plugin, so you can say: go and fetch some data from this website. It can scrape that data, inject it into the prompt, and then you can still use these language models that may have training data cut off at a certain point in time, but with data that's up to date. So thinking more broadly, what are people not doing that they possibly should be doing? Thinking of this as a set of components, an ecosystem of things you might want to plug together, not just thinking about the language model in isolation, is probably something we'll all want to start doing. I'm an engineer, so I'm probably thinking more about how you engineer a product out of this. But combining these systems together, figuring out their weaknesses and plugging them with other tools, to get a coherent overall system: I think that's what people will want to be doing, at least in the short and medium term.
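For readers who want to see what that plugin pattern looks like in practice, here is a minimal sketch. It's an illustration only, not any particular vendor's API: `call_llm` is a hypothetical stand-in for whichever chat-completion function you use. The point is that the model plans the calculation while plain code performs it; the same shape of solution covers fetching fresh web data and injecting it into the prompt.

```python
# The "calculator plugin" pattern Prash describes: the model translates the
# question into an expression, and ordinary code does the arithmetic.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without trusting the model."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer_with_calculator(question: str, call_llm) -> str:
    # Ask the model to plan the tool call...
    expr = call_llm(f"Rewrite as a single arithmetic expression, nothing else: {question}")
    # ...do the arithmetic deterministically ourselves...
    result = safe_eval(expr)
    # ...then hand the verified number back for the final wording.
    return call_llm(f"The answer to '{question}' is {result}. State this in one sentence.")
```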
Tom - [18:19 - 18:58]
Giorgia, I want to ask you about content creation. I was reading a chap called Ben Goody this morning, quite a well-known SEO expert, talking about this great opportunity to produce human-driven content right now, because in his view a huge amount of the content being spewed onto the internet is, as he called it, a thin, weird style of content, with repeating sentences and a few other things. Basically: what's your view on AI-generated content as a way of generating SEO traffic?
Giorgia - [18:58 - 20:35]
Yeah. Going back to Nataliya's point, I think the idea a lot of marketers have of LLMs is ChatGPT, and, you know, we've all tried to create a blog out of ChatGPT, and it's not usually the best outcome. It's what you were saying: it doesn't sound natural. It doesn't sound like something a person would write, because it's sometimes just a conglomerate of information. So going back to what Maz was saying, I think the prompt makes a big difference to the outcome you're going to get. (But what about tone of voice? Have you ever tried to get it to speak in a tone of voice, or give it an example of text and say: write like this?) Yeah, and again, tone of voice is an interesting one, because while you have a brand tone of voice, your writers have their own writing styles that will be reflected in the text. That's why, from a content perspective, being a content lead within a team, I look for that personal way of writing in my team, which ChatGPT will never be able to replicate. The other thing marketers need to bear in mind is the way the Google algorithm works: SEO articles have changed massively, from what they used to be to what they need to be now to be considered any good by Google, which is the helpful-content concept. So I think there are limitations. And again, it's what Prash was saying: there's a whole conglomerate of things you need to take into account, and ChatGPT, or whatever LLM you want to use, is not enough to respond to all the things you need to consider when writing a blog.
Giorgia - [20:35 - 20:58]
It's not just the copy; there's so much more to it. So it can help. I'm not saying it's absolutely useless, or that marketers shouldn't use LLMs, or ChatGPT specifically, because that's what everybody defaults to. But I still think there's a long way to go before it covers every single aspect that a good SEO blog needs to have.
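As an aside, the "give it an example and say: write like this" approach mentioned above is easy to try. Below is a minimal sketch of building such a prompt; `sample_post.txt` is a hypothetical file containing a post in your brand voice, and the resulting string would be sent to whatever model you use.

```python
# Few-shot tone-of-voice prompting: show the model an example of your own
# copy, then ask for new text in the same style.
from pathlib import Path

def style_prompt(topic: str, example_path: str = "sample_post.txt") -> str:
    example = Path(example_path).read_text(encoding="utf-8")
    return (
        "Here is an example of our writing style:\n\n"
        f"{example}\n\n"
        f"Now write a short blog introduction about {topic}, "
        "matching that tone of voice as closely as you can."
    )
```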
Tom - [20:58 - 21:00]
What's your view on that, Chris? I could see you.
Chris - [21:00 - 21:41]
This is a question that has been preoccupying me ever since I first used ChatGPT in the week it was released, last November. The night it was released, I couldn't go to bed, because I was just utterly spellbound by what I was doing: having a conversation, it seemed, with the machine. It felt to me bigger than using the internet for the first time. Now, in terms of content, the lowest-value use case is "write this blog for me", or passing off AI work as your own to a client. That's a really low-value use case, and I don't believe that's really going to take off.
Tom - [21:41 - 21:44]
Really. You don't believe it's gonna take off?
Chris - [21:44 - 23:27]
Well, let me qualify that, because obviously AI will write a lot of the content on the internet; I've heard some AI specialists say that by 2025 about 80% of all the content on the internet will be AI-generated, so I don't want to downplay that. But what I'm interested in is: what is our relationship to these models, and what does it mean for us as human beings and as creative communication specialists? What does it mean for communication? The Industrial Revolution enabled machines to take over human physical labor. Computers took over some mental labor. Now AI is taking over imaginative labor. This is almost an existential crisis: the creative industries are threatened like never before. Will people care if a masterpiece is produced by AI? What happens when all the Oscars are won by films that were written by AI? Will we care? Will we mind? We don't know. But I think there's one analogy that's really helpful for the creative industries here, and that's photography. When camera technology emerged in the mid-19th century, contemporaries believed that painting would die out, because why bother to paint a landscape when a camera can capture it so much more perfectly than any artist could? Mm-hmm. But that's not what happened. Instead, European painting ventured towards impressionism, and from there to cubism and abstract art. Rather than competing with the camera on realism, the medium shifted towards representing what the camera couldn't capture. And it'll be the same with creative content and AI generation: it will become more valuable to have, as Giorgia said, a perspective in writing that's very human, very idiosyncratic, difficult to replicate through AI and machine learning alone.
Tom - [23:27 - 23:38]
So we just get much better at discriminating, basically; we become much more choosy, more sophisticated consumers of text, and we go: well, that's different.
Chris - [23:38 - 23:53]
Yeah, I would hope so. I would hope so. And I think there's room for GPT to create that Wikipedia-style, formulaic content, but I think that should mean idiosyncratic creativity comes to be valued more highly.
Tom - [23:53 - 24:10]
Because AI-produced content is basically bound to be more generic, right? Right now all these people are writing and they're all individual brains; but even if everyone asks it to write in their own tone of voice, it's bound to be a little more similar. Yeah.
Chris - [24:10 - 24:53]
One final point, just to address what you said, Tom: it'll be a combination of AI and human. Soon it'll be indistinguishable which was created by which, in the next 10 years. Just as when photographers submit photos to competitions, they've touched up and edited the photographs using software, right? We don't think: oh, that's AI-generated. We recognize that there's still a human aspect to the decision to take that photo and then to finesse it using the technology. In the same way, it'll be a combination of human and AI that comes to be the best and most optimal way of communicating. We just don't know what that looks like yet.
Maz - [24:53 - 25:20]
It's still very early. Is there not an argument, in terms of the tone of voice argument, for example, that you could take a foundational model and fine-tune it on all of the content your team of content writers has produced over the last two years, and get it to start to mimic and reproduce your, not exact, but your tone of voice, or your team's tone of voice?
Chris - [25:20 - 26:21]
I think that's one of the most promising use cases out there. Training it on all the emails you've ever written, all the blog posts, all the articles; feeding that into a specific GPT that's trained only on your content, and getting it to reply to emails for you, to write your content for you. Fantastic. But what happens when it becomes self-consuming? On Spotify, if you keep listening to a certain genre, it keeps funnelling you down that genre and you become stuck in a cul-de-sac. Just as with the YouTube algorithm: if you keep watching a certain type of video, flat-earth videos say, soon your timeline will only be flat-earth videos, and it becomes self-reinforcing. Yeah. So how do you make sure you still input new things into that cycle, so it doesn't just become a cul-de-sac, or a satire or parody of what you're actually like? Which would be helpful, but which would also be limiting.
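For the concrete-minded: fine-tuning for tone of voice mostly comes down to preparing your past writing as training examples. Here is a rough sketch of that preparation step. The chat-style JSONL layout below is a format several fine-tuning APIs accept, but check your provider's docs; `posts` is assumed to be your own archive, each entry holding the brief a writer worked from and the finished article.

```python
# Prepare past posts as fine-tuning examples for a "house style" model.
import json

SYSTEM = "You write in the house style of our content team."

def build_training_file(posts, path="tone_of_voice.jsonl"):
    with open(path, "w", encoding="utf-8") as f:
        for post in posts:
            example = {
                "messages": [
                    {"role": "system", "content": SYSTEM},
                    # The brief the writer worked from...
                    {"role": "user", "content": post["brief"]},
                    # ...and the human-written article as the target output.
                    {"role": "assistant", "content": post["body"]},
                ]
            }
            f.write(json.dumps(example) + "\n")
```

Chris's self-consumption worry maps directly onto this file: if you later add model-written posts back into it, the style collapses inward, so keeping the training set human-written is the simplest guard.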
Tom - [26:21 - 26:21]
Mm-hmm.
Prash - [26:21 - 27:43]
Yeah, no, that's a good point. I mean, these models tend to have settings, right? Parameters like temperature, or top-p and top-k. These parameters basically let you set the level of randomness, and there's a trade-off sometimes, isn't there? If you want these models to generate something quite creative, like a poem, you want the output to be quite random; you don't want it always giving you the exact same thing, because if you got the same poem every time you asked, that'd be pretty rubbish. But on the flip side, if you want it to give you factual information, perhaps you want to adjust that level of randomness, that temperature, to be lower. So to answer your question: I wonder if, in the future or even now, that ability to adjust how creative you want the model to be, how random you want the output to be, might still give you that level of novelty. I don't know. And the speed at which this has developed, and the amount of data we could train it on: we still haven't really trained it on all the data humans have at their disposal. Some are saying it's only been trained on 10% of books or something; there are still huge numbers of books it hasn't been trained on. Very few books, really.
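To make those knobs less abstract, here is a toy sketch of what temperature and top-k actually do to the next-token choice, using a made-up four-token distribution rather than any real model:

```python
# Temperature and top-k sampling over a toy logit vector.
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature=1.0, top_k=None):
    logits = np.asarray(logits, dtype=float)
    # Low temperature sharpens the distribution (safer, more repetitive);
    # high temperature flattens it (more "creative", more hallucination risk).
    logits = logits / max(temperature, 1e-6)
    if top_k is not None:
        # Discard everything outside the k most likely tokens.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, -1.0]
print([sample(logits, temperature=0.2) for _ in range(5)])  # almost always token 0
print([sample(logits, temperature=1.5) for _ in range(5)])  # much more varied
```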
"Tom - [27:43 - 27:56]
Yeah. I mean, I think books are just generally not on the internet. Google tried, with Google Library or whatever it was, but very few books, really. And that's got to be one of the biggest stores of human knowledge: books.
Nataliya - [27:56 - 29:29]
Yeah. But there's also an interesting question here about what happens if we start training large language models on books from certain libraries, Bibliotik for example. I don't know whether you're following the litigation cases where authors have started taking AI firms to court, saying: my authorship rights haven't been properly acknowledged. So there will be these IP infringements, obviously, and cases like that are going to emerge more and more often. But I wanted to come back quickly to Chris's point, because I found it a particularly fascinating one, about the problems of increasing hyper-personalization. We're talking about creating our own bubbles of large language models, which are really good for you because they replicate your behaviours, your own language, your own patterns. But you're closing yourself in your own bubble. And if you extend the bubble, it becomes a group bubble, but it's still going to be a bubble, because the material that went into that model was learned from you, trained on your own habits. And I also want to comment on the point that adjusting temperature can create some degree of creativity. It can, though we're still within that bubble; but what we trade by adjusting temperature left and right is a more hallucinating or less hallucinating model. Creativity and hallucination: yes, it's more creative in one space, but that creativity is not always desired in other spaces, in certain areas, for example.
Tom - [29:29 - 29:32]
What do you mean by hallucination in this context? I'm not sure.
Nataliya - [29:32 - 30:53]
So when, for example, you adjust the temperature up towards one, the output produced is the most creative possible given that particular text: it will produce a very unfactual, very imaginative answer, which may have nothing to do with the factual information the model was trained on. That's nice if you want to create a poem, and it's fine for the creative industries, as I say. But imagine lawyers starting to build their arguments on the creative answers of ChatGPT, even one trained on legal documents, providing all kinds of interesting and exciting answers which have nothing to do with the legal facts. That would be an interesting one. So there is this element of the bubble in the model, and of understanding that we are not dealing with something we have no control over: we put the data into the model, and we should be aware of how the outputs are defined by the particular decisions we make when we build it. And even the words we use, creativity, hallucination, bias: the very same words, the same language, carry positive or negative connotations in different user groups.
Tom - [30:53 - 31:59]
So, do you think we're limited at the moment by not having control over, or even any insight into, the training data? Nick, in one of the pre-calls you were talking about your view that there will be a move towards greater specialization rather than greater generalization: more and more niching of AIs that do a particular thing. And Chris, you were talking about perhaps training it on your own writing. The chap that's not here, Mike, who leads a team of creatives at Landor & Fitch, was talking about his designers training their own models for art generation, not connected to Midjourney, so they could have their own bespoke version. They could add it onto their CV: I can come and design for you, plus I've got my own Stable Diffusion model trained on my art. And that becomes part of their identity. Perhaps that's something we haven't really grasped yet: the ability to make them more niche for our use cases. Nick, what do you think?
Nick - [31:59 - 33:13]
Yeah, perhaps. It's certainly something we're looking at, and I'm trying to think back to what I would've said on that pre-call. We are certainly looking at training AI models for our niche, to get good at doing what we're doing. I'd be interested to get everyone's take on this, though. My impression, from reading around this for the last few years, is that we're going to end up in a place potentially quite similar to open-source software, where there are open-source models, and if you want a particular capability you just download it, ready-trained, and plug it in. So you have this democratization of different capabilities that anyone can go ahead and use. That's a direction I think this can go. And it's really interesting, because we work with a number of large corporates, and, sorry, I don't want to take the conversation in this direction, but these specialized models are causing them to ask some quite philosophical questions about what their business is. I was at a conference where they were talking about: imagine you could have a trained AI assistant at large blue-chip business A that knows everything about that contact, everything about the way you should engage with them, so that when you pick up the phone or send an email, you've got it all there. Then you have to ask the question: well, that is the business, isn't it?
Nick - [33:13 - 34:08]
In some sense, if you've got all of that knowledge and you know what to do, that is the business. So do businesses now need to get really good at building and training and maintaining these models? Because if you then lose access to that model, that's like your whole top exec team and everyone else walking out the door. Yeah. So I think it's causing businesses to think really hard about what they are and what their point of difference is. And does it come down to, which I think is where Chris was going, what you know that no one else knows, at the end of the day? Is it about the extra thing you can bring from outside, which recontextualizes or provides additional value into the bubble, to use that metaphor from before, which I think is a good one? And within that as well, sorry, not to get too philosophical, it makes you wonder how human beings do that. How do we bring new things in? Because we live in a bounded world, and there's nothing new under the sun, so how do we do it? And then you start to wonder: what's the difference between artificial intelligence and intelligence?
Chris - [34:08 - 35:48]
(But that's a whole other rabbit hole. Maybe that gets on to "should we be scared?" Sorry, Chris.) No, I think that's deeply profound, and I think this is where, whether we like it or not, we're heading. Because ultimately what these AIs and LLMs are doing is changing our relationship to knowledge itself. The modern business economy is a knowledge economy; back in the nineties they called it the information age. Data, insights, strategy: these are the things a lot of tertiary industries, service industries, are based on. This is what sustains business-to-business advertising, communication and services. The reason I say it's going to change our relationship to knowledge is that our idea of knowledge is influenced by how it was formed. When I say "knowledge", an image comes into your head: maybe it's a book, maybe a wise old man or a wise old woman. That's because knowledge came through universities, through books, through the idea of truth. Knowledge was scarce in the past, even sacred; only the truly learned could possess it, and it was highly prized. Now AI is creating what appears, when you go on GPT, to be a limitless fountain of knowledge on tap, infinite and entirely fungible. You can ask it to come up with parameters for a study looking into the effects of human behavior and how it's influenced by certain environmental factors, and then you can ask it: now write the same research paper in the style of Jeremy Clarkson. And it'll do that for you. Now, true and false, like knowledge, are categories that emerged in a particular historical context.
Chris - [35:48 - 36:45]
And already, without LLMs, just with social media, we had fake news, conspiracies, the decline of universal narratives in favour of specific stories, all of which only need a fragment of evidence to seem true. So what will AI and LLMs do when you can just get knowledge on tap, when it's not something that has to be worked for, or developed, or approved by institutions like universities or other organizations? It's going from knowledge to meta-knowledge. And in reply to what Nick just said about what you know that no one else does: I think it's about whether you can skilfully make sense of streams of knowledge produced by AIs and LLMs, and combine them in ways that are effective in different contexts, and human. A curator; yeah, a curator, a researcher.
Prash - [36:45 - 37:46]
Reasoning. It's about reasoning. What humans can do that none of these AI systems can do is complex reasoning: taking a huge variety of complex inputs and knowledge, drawing a line across all of it, and carving out a path. These systems fail on even very basic reasoning at the moment. You can patch that up with better prompts and retrieval of more data, but ultimately that light-bulb moment humans can have, that Einstein moment where they take a whole bunch of intelligence and facts and come up with a genuinely novel decision, or logic, or way forward: that's what humans can do. And I totally agree with what Chris is saying about the idea of AI as a copilot or a contractor to the way we work; you treat them like that. But our place in these businesses, if knowledge is commoditized, which I guess is what we're saying, is that reasoning on that knowledge is where we will be adding value.
Nick - [37:46 - 38:05]
Is there any reason to believe AI can't do that in the future? I mean, that's the thing I think is really interesting. We're framing it as knowledge, but it's intelligence, which goes beyond that. And when we reason, what does that really mean? Surely what we're doing is just pattern-matching and playing out patterns, in the same way that AI does.
Chris - [38:05 - 38:08]
Well, I think you're right. And I don't think it's necessarily anything to do with the ideas.
Tom - [38:08 - 38:15]
Because sometimes the ideas it comes up with are brilliant, but it might not have the judgment to know that they're brilliant.
Prash - [38:15 - 38:44]
Yeah, exactly. You have to appreciate that these systems don't know what they're doing. The classic example is that these systems don't know when they're lying, hallucinating, as we've been calling it. You can catch them after the fact: ask again with another prompt, "Do you believe you met the brief?", and it might say, "Oh no, sorry, I didn't, here's another answer." It's a classic example of how they don't reflect; they don't understand that they've lied. Mm-hmm. But, yeah, sorry, go on.
Nick - [38:44 - 39:26]
Yeah. But this is what's so interesting: I think a lot of the things you're talking about are also human characteristics. We talk about hallucinations; human beings hallucinate all the time. I remember reading a study about people who'd been robbed at gunpoint and were asked the height of the guy, the hair, all the rest of it: they got it completely wrong, and it was captured on CCTV. We hallucinate all the time. And you talk about lying: I have a five-year-old at the moment, and he lies all the time. It's a developmental stage, right? AI also lies and gets these things wrong, but he's going to learn, I hope, in the fullness of time, hopefully in the next week, that lying doesn't work out, and all the rest of it. And I think AI will go through these phases in the same way. It's just at a particular level of maturity at the moment.
Prash - [39:26 - 39:32]
I suppose the question is: does your five-year-old know that they're lying? Or do they believe they're telling the truth?
Nick - [39:32 - 40:02]
That's a very interesting one. I guess I don't know, because he'd probably lie about it. But the point I'm trying to make is: where do you draw the line, as I said before, between intelligence and artificial intelligence? Because at the end of the day, you can take one of two camps. Either you believe that human beings are just very sophisticated pattern-matching machines, or you have to take a slightly different approach, and I don't know exactly what that would be. But if you're in the first camp, the logical conclusion is that the two things will converge.
Chris - [40:02 - 40:41]
But Nick, there's one crucial difference, which is that the AIs don't have a logical representation of the world, right? When the AI beats the best human being in the world at chess, it doesn't know it's playing chess. It's just pattern-matching. Now, you might say: yeah, but it plays chess really, really well; it doesn't need to know it's playing chess, all it needs to do is get the job done. Fine. But it begins to matter when AIs are shaping the world we live in. What happens when an AI program is better at running businesses than the average board? If it doesn't have a representation of the world it's operating in, not a full representation the way that we do, then it begins to become an issue.
Nick - [40:41 - 40:50]
But I don't think anyone has a full representation of the world. It's not possible. Do we not have the same problem?
Chris - [40:50 - 41:22]
You're right, and this is really fascinating, because we do hallucinate, but we hallucinate in rather predictable ways. We're capable of reasoning about our hallucinations. And finally, we all hallucinate in a similar enough way that we're able to have a shared representation of the world. So you and I might have very different opinions on the room we're sat in, but we're both in agreement that we're sat in a room. We have enough of "reality", quote unquote, to share a sense of what we're doing, to be able to work together.
Nick - [41:22 - 42:18]
Yeah, there's a lot in there. There are bits I agree with and bits I disagree with. If you look at news cycles, for example, you can have two people watching exactly the same thing and seeing two, or three, totally different interpretations of it. And if you look culturally as well, at the way different cultures have grown up around the world, what we see in one country versus another can be very, very different interpretations of the facts. Mm-hmm. So when we talk about these bubbles, and going down a track, that feels very synonymous with culture, with trends, with society. There are all these interesting parallels. And the thing I'd say over the top, and I'll make this point more strongly than I actually believe it, just because I think it makes for an interesting conversation, is that I've seen nothing that suggests AI won't be able to overcome these limitations.
Chris - [42:18 - 42:41]
Yeah. And as a final comment on that: I agree with you, and I've encountered loads of those examples. I'd only say that perhaps it'll come to the point where we can't tell whether or not AI is actually intelligent and has a representation of the world. And so it'll come to seem like a moot point, because it acts as if it does, and it's convincing enough that we just can't tell. So in the end, maybe... yeah, agreed.
Nick - [42:41 - 42:58]
Sorry, agreed. And then my point is: where do you draw the line between artificial and intelligence? Mm-hmm. Would we have the level of, what would the word be, introspection, or self-awareness, to be able to call that line? And that's really interesting.
Tom - [42:58 - 44:12]
Do you think, Maz and Giorgia, you both work in the social media world, producing content for consumption by a network of individuals, that maybe the missing thing is that we just don't have enough of these AIs? Imagine if you had hundreds or thousands of ChatGPTs that all operated in slightly different ways, or maybe they all started out working the same way but then diverged; perhaps then they'd have more of a sense of their own motivations, their own existence. You know, Maz and Giorgia, that, I'm not going to say truth, but certainly opinions, emerge from society, from groups of people on social media. Perhaps if you put one person in a box by themselves, they would find it hard to have convictions; but put them in a massive group on social media, and suddenly they develop this really passionate belief in X. Maybe it's the same with AI. If there's just one, being fed random stuff by itself, that's one thing; but if we put lots of them together, thousands of them, suddenly they might develop a bit more conviction in their conclusions.
Maz - [44:12 - 44:15]
Giorgia, you go first, please.
Giorgia - [44:15 - 44:57]
I mean, it comes down to collective intelligence at that point: altogether we come up with a different opinion than a single person would. Yeah. I don't know if AIs could do that. As people, we do it all the time; we're influenced by the world around us, and that's what happens on social media. If someone shares an opinion and you get three people agreeing, the fourth is likely to agree as well, because as humans we like to be surrounded by people who think the same way as us. It's what Chris was saying: even the algorithms work that way. If you watch enough videos that are the same, you're going to get the same videos, and you're going to keep watching them, because at that point that's what your world becomes.
Tom - [44:57 - 45:09]
And that's how we create brands: by getting lots of people to think the same way, and creating categories. You create that story around whatever it is, whether it's a brand or whatever.
Giorgia - [45:09 - 45:58]
I mean, the Barbie movie is a great example of that. Everybody's obsessed with something that became old news years ago; no one plays with Barbie anymore, and now you have everybody wanting to watch this movie. Mm-hmm. It's because the marketing campaign was brilliant, and it brought back the nostalgia, the thing we were all missing, and everybody's like: it's great. I have no idea whether that would be the case with a bunch of ChatGPT-like models, whether, if you put them all in the same room, they'd create more of a collective intelligence altogether. But it's definitely human nature to think that way. Some people enjoy the debate; some people just enjoy agreeing with other people, and that's the way it is. Maz, what do you think?
Maz - [45:58 - 47:27]
Yeah, look, it's human behavior. Before the internet came, I'm sure people who were raising cows and dogs and monkeys and donkeys were saying: you're interested in cows, therefore I'm also interested in raising cows, and they would get together. There was no AI, nobody discussed the ethics and understanding of where we're going; we were just like-minded people who wanted to hang out together. Amazon for years has been trying to tell me: people like you bought this, there's other stuff. Most of us are sheep; much as we all want to be individual, we follow other people. When you look at ads, the person is smiling, so we go and buy. I don't know that smiling person, but I've been buying Colgate for years because they've been smiling at me. We were all like that before AI came, so maybe it's replicated. But I also genuinely, hand on heart, like to think that what we have done until now lets people make their own decisions. You know, it's taken a long time for us to get to autopilot, somebody mentioned pilots, for a human being to trust the autopilot, with mechanisms in place so that the autopilot is not going to freak out on us, and it's used on airplanes and so on. So there's a lot of talk about the ethical aspect of it, which I believe is extremely important. I have a different religion than most people in the UK. My last name is different than most people's in the UK.
Maz - [47:27 - 48:34]
My hair colour is different than most people's here. So will it represent me? Will it understand me? When you put in somebody of my religion, what type of photos does it bring up, and so on? We human beings have tried for years to be responsible; we have organizations trying to be responsible, and yet we still go to jail, by the way. We still do, because that's how we are. So that aspect of it is important. But let me take it down a notch, to ground level, for folks like me who pretend to understand all the technicalities but then use it and go: oh my God, it's actually making sense for my life as a marketeer. I'm going to give you a few tactical examples that blew my mind, because I'm a very simple person with a few brain cells, right? You're putting together a social platform and you want to run a super-advanced Boolean search, trying to figure out the product consumption, the persona, what they're talking about in a specific region, and, I'm just saying this as an example, the social platform, through generative AI capabilities, writes the Boolean for you.
Tom - [48:34 - 48:36]
You're talking about like social search.
Maz - [48:36 - 49:11]
I'm talking about social search. For those of us who have trained teams to do this, to get people understanding the brand, this is pure joy. It's a piece of art: you just look at it, and it's saved you an hour of your life. Great, let's build on it. Then it comes to content creation: from one to ten, there are multiple steps to that. If it can help me generate two, three, four of those steps, I'll buy it, and it's doing that, it's getting there. And the ideation of images and videos is also getting better. So I think it's interesting.
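The Boolean-drafting example is easy to picture in code. Here is a minimal sketch; `call_llm` is again a hypothetical stand-in for whichever chat API your social listening platform uses, and the returned query would be pasted into the platform's search:

```python
# Drafting a social-listening Boolean query with a language model.
def draft_boolean_query(product: str, persona: str, region: str, call_llm) -> str:
    prompt = (
        "Write a Boolean search query for a social listening tool "
        "(AND/OR/NOT, quotes, parentheses) to find posts where "
        f"{persona} in {region} discuss {product}. "
        "Include common synonyms and misspellings; exclude job ads and spam. "
        "Return only the query."
    )
    return call_llm(prompt)

# Hypothetical output for draft_boolean_query("oat milk", "home baristas", "the UK", call_llm):
#   ("oat milk" OR oatmilk) AND (latte OR "flat white" OR barista)
#   NOT (hiring OR "job opening")
```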
Tom - [49:11 - 49:23]
Just to check my understanding here: you're talking about getting a better understanding of your audience by listening to what they're saying at broad scale (Absolutely), interpreting that (Absolutely), to know what you should produce. Okay.
Maz - [49:23 - 50:49]
Absolutely. You talked about languages it's going to understand, so we're trying to think about how we educate such an approach. The voice of the customer, or voice of the consumer, is very dear to me as a marketeer. Every time I think I'm smart, I'm really not: I write good content and they go, oh wow, did a marketeer write that? Yes, thank you very much, I have so much experience, and here are three keywords that are going to wow you. And then you look at what the customer is saying, what the customer is describing, what the problem statement is, and then you can build that education into it. So at this specific moment, at least from my perspective, we're really looking at how to simplify life, step by step: trying to understand, for me as a marketeer, what the consumer is saying, what they're doing, and then what we can do about it. I'm sure there will be multiple facets and aspects of it. By the way, about movies getting Oscars: there were many people who looked down on Netflix, who frowned upon Netflix winning Oscars, and now it's part of reality. Many things become part of our reality. Every generation believes the next generation is super lazy, and it's true generative AI is going to have an impact on our lives, but it will make our lives simpler in the steps that we do. And I genuinely see this happening. It's happening right now; we're not talking about days or months away.
Maz - [50:49 - 50:50]
Just right now.
Tom - [50:50 - 51:06]
It'll be very exciting to see. Nick and Prash and I all have four- or five-year-olds, and watching them actually grow up creating prompts, you know, how good they're going to be at it in 10 or 15 years' time is pretty amazing.
mas - [51:06 - 51:13]
I genuinely think they might not be. This is maybe crazy, because I've had about 15 coffees this morning, so bear with me. My younger one...
Tom - [51:13 - 51:16]
Your randomness dial is turned up high. Yes.
mas - [51:16 - 51:59]
It's turned all the way up. I think maybe even "agency" is going to change; the word agency is going to be replaced by generators. There are going to be prompt generators, because they're going to be very specific to a particular discipline. I used to be part of the Ogilvy group; I ran social for the group, and Ogilvy had creative, content, activation and so on. I believe each of these will have its own generator, per discipline. It has to, because there are several billion of us online and we have different interests. And in each sector it's just like a GP and a specialist doctor in real life: a GP cannot possibly know everything, respectfully. So we need these specialised modules, just due to the nature of human beings, due to our behaviour.
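As a rough sketch of Maz's "one generator per discipline" idea, here is what routing a brief to a discipline-specific specialist prompt could look like. The disciplines, prompt wording and the route_request helper are all invented for illustration; they are not taken from any agency's or vendor's actual product.

```python
# Hypothetical sketch of "one generator per discipline": route each brief
# to a specialist system prompt instead of one general-purpose assistant.
# All names and prompts here are invented for illustration.
DISCIPLINE_PROMPTS = {
    "creative":   "You are a senior creative copywriter. Propose campaign ideas.",
    "social":     "You are a social media strategist. Draft channel-specific posts.",
    "activation": "You are an activation planner. Outline launch tactics and timings.",
}

def route_request(discipline: str, brief: str) -> list[dict]:
    """Build the chat messages for the specialist 'generator' of one discipline."""
    system_prompt = DISCIPLINE_PROMPTS.get(discipline)
    if system_prompt is None:
        raise ValueError(f"No specialist generator for discipline: {discipline!r}")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": brief},
    ]

# Usage: the same brief lands with a different specialist depending on the route.
messages = route_request("social", "Launch post for a new sparkling water brand.")
```

The design choice mirrors the GP-versus-specialist analogy: a thin router decides which narrow persona handles the request, rather than one model trying to be an expert in everything.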
Tom - [51:59 - 52:14]
Also, it makes logical sense for a human being to want to go to one AI that gives them one style of response and a different AI that gives them a different style, because in social interaction we're not used to dealing with a single endpoint.
mas - [52:14 - 53:39]
Yeah. Last but not least, and this is my last comment: sometimes I end up in debates about generative AI, depending on whether alcohol is around or not, where people try to justify it by comparing AI and human beings, which is fair. But then, in order to understand AI, we go down the path of trying to understand four billion plus people. So separating those angles and taking it in steps is really important, because otherwise it becomes a "what is the meaning of life" kind of question about AI. It has to be in steps. That has been my learning, trying, with my few brain cells, to understand this concept that is supposed to replicate me. It certainly sometimes gives prettier answers than I do, grammar-wise and so on, I've got to say. But to understand it, I need to break it up into sub-pieces, and then it adds value to my life, and then I get it. Thank you.

Can I just say something about that? I'm more reticent, I suppose; maybe I'm always a bit negative about these things. I'm more reticent as to whether our kids will ever bother to learn prompt engineering. The trajectory of these things is so rapid that I'd have thought it won't even be a job in three or four years' time, because the models will have outgrown it. Look at the difference between GPT-3 and GPT-4 in six months; that's crazy, the rate at which it suddenly got so much better. So think about what it's going to be like in three, four, five years' time.
mas - [53:39 - 53:50]
We're not going to be writing prompts. You know, Natalia was talking about multimodal models; how we interact with it is going to be completely different from now. Yeah.
prash - [53:50 - 54:31]
This autumn, I mean, we don't need to talk about several years away, it's this year: Harvard University's computer science course is introducing an AI lecturer. So we're going to have, for the first time, computer science being taught by something that is not human. And as you say about prompting, I fully agree: prompt engineering is an intermediate phase between us and much more intuitive, interactive bots that we talk to. We can talk about avatars; it's going to be an age of us talking to somebody unreal, but not via text or prompts. It's going to be, I think, interesting. Much more removed.
Tom - [54:31 - 55:20]
I'm really looking forward to the next Elder Scrolls game, the Bethesda role-playing title, The Elder Scrolls VI. Because Bethesda is now owned by Microsoft, I think that's going to have each of the NPCs with their own GPT-powered identity. I'd be super interested in that, because then they should develop their own personalities and their own motivations, and go off and do things on their own. Some of them are going to be evil as well. Natalia, we were talking about this: of course, you can't have an interesting game without some characters genuinely going around killing people for no reason, so you have to have an AI that's at least acting in an evil way. Maybe that leads me on. We're just coming to the end now, but Natalia, let me ask you one final question to send us off: should we be scared?
prash - [55:20 - 55:31]
I don't think so, no. I think we're still pretty much in control. We shouldn't be scared, but we shouldn't stay ignorant either.
mas - [55:31 - 55:34]
Mm-hmm. Yeah, a hundred percent.
prash - [55:34 - 55:54]
I think we should educate ourselves as much as we can on this technology. It's a technology we created, and we're still pretty much in control. Why would we be scared of something we created and still largely control? That's my personal opinion, but I'm a computer scientist; that's my own bubble, and it's not necessarily representative of other parts of the population.
Nick - [55:54 - 56:29]
I was just going to say, and I agree with you, I'm sort of optimistic, but to play devil's advocate: the thing that's so interesting about it, and I don't know whether you'd agree, is that it's the first creation we've had that is conceptually unbounded, in the sense that we don't define what it does and doesn't do, or what it knows and doesn't know. I think that's a new thing; I can't think of another invention where that's been true. So yes, I'm cautiously optimistic as well. I just think there's a non-zero chance that it doesn't go well.
Chris - [56:29 - 57:11]
I have to agree with Nick, and only because ultimately you won't be writing prompts. It will suggest things to you unbidden, based on your interests, just as the YouTube algorithm suggests things to you, and knows better than anyone in your family exactly what you'd want to listen to. The AI will suggest knowledge, forms of knowledge, and avenues of inquiry that it thinks you'll be interested in. My reason for being concerned is only that we don't know how that will change us, and we don't know what it will do to the way human beings think, behave and act. Just as we couldn't have anticipated that social media would make us much more prey to conspiracy theories, bubbles and all of that, but it changed us.
prash - [57:11 - 57:30]
Yeah, exactly. But we are still the final element in this chain. So it's going to change us, and should we be scared of who we will become as a consequence of this new technology? We shouldn't be scared of the technology itself; it's still us, and we're still pretty much in control. I wouldn't say you disagree with me; I think you're just confirming my point even more strongly. Well, yes.
Tom - [57:30 - 57:34]
I think humans are capable of plenty of evil by themselves. Yes.
mas - [57:34 - 58:53]
Yeah. I would say perhaps we're in control, but I don't really think we understand it. People have tried to peek into the internal layers to work out what representation it has actually learned, and it's incomprehensible: billions of parameters that you can't interpret. So I think we don't necessarily understand what we've created, which should probably give us some pause for thought. But I think we should also be a little bit scared of how humans will use it, specifically states and politicians, and how it will influence our lives at the macro level. Someone talked about misinformation and propaganda earlier. I do think we should be a little bit scared, maybe not of the technology, but of how people use it. Exactly. A bottle opener can open a bottle of wine, but you can also hurt somebody with it. And I like wine, by the way. So I agree: it comes down to how human beings use it, and the checks and balances. But that's like absolutely everything in life. I guess it's the scale here that's the thing that's going to scare us. A bottle opener might hurt one or two people; this technology could hurt... True.
Tom - [58:53 - 58:58]
We build weapons... On that cheerful note, I'm sorry.
mas - [58:58 - 59:08]
Hey, if you want to throw scale at it, buddy, we can throw scale. We have to end now. By the way, you gave me that answer. Thank you.
Tom - [59:08 - 59:38]
Thank you. We'd better wrap it up now, because I know some people have other things to get on to. So thank you so much, all of you; I'm sure we'll do a follow-up. It's been lovely to have you here. We'll speak with all of you next week to wrap up the document we're producing from this. And to everyone watching on YouTube, I hope you've enjoyed it. Thank you to everyone who's been on the board; it's been lovely to see you, and we'll speak to you very soon. Thank you everyone. Goodbye.