Improvement in AI Hallucination Rates in Generative Models

Hallucination rates have improved over time, with errors decreasing from GPT-3.5 to GPT-4. However, AI still makes up information because of the nature of generative models, which produce words without understanding, raising questions about the trustworthiness of their output.

Transcript:

Speaker 2
One thing people know about using these models is that hallucinations, just making stuff up, is a problem. Has that changed at all as we've moved from GPT-3.5 to 4, as we move from Claude 2 to 3? Like, has that become significantly better? And if not, how do you evaluate the trustworthiness of what you're being told?

Speaker 1
So there's a couple overlapping questions. The first of them is, is it getting better over time? So there is a paper in the field of medical citations that indicated that around 80 to 90% of citations had an error or were made up with GPT-3.5. That's the free version of ChatGPT. And that drops for GPT-4. So hallucination rates are dropping over time, but the AI still makes stuff up, because all the AI does is hallucinate. There's no mind there. All it's doing is producing word after word. They are just making stuff up all the time.
(Time 0:18:26)
Beware the Unreliability of AI

AI can be unreliable in a deceptive way, leading to overconfidence in its outputs. The challenge is that it appears confident and persuasive while generating plausible-seeming summaries that may not be true. This is a different kind of challenge from the familiar one of not finding information online: instead of no answer, the AI can return a fabricated one.

Transcript:

Speaker 2
But doesn't this make them unreliable in a very tricky way? At 80%, you're like, it's always hallucinating. At 20%, 5%, it's enough that you can easily be lulled into overconfidence. And one of the reasons it's really tough here is you're combining something that knows how to seem extremely persuasive and confident. You feed into the AI a 90-page paper on functions and characteristics of right-wing populism in Europe, as I did last night. And within seconds, basically, you get a summary out, and the summary certainly seems confident about what's going on. But on the other hand, you really don't know if it's true. So for a lot of what you might want to use it for, that is unnerving.

Speaker 1
Absolutely. And I think hard to grasp, because we're used to things like type two errors, where we search for something on the internet and don't find it. We're not used to type one errors, where we search for something and get an answer back that's made up. This is a challenge. And there's a couple things to think about.
(Time 0:20:01)
Co-intelligence with AI for Eliciting Wisdom

The AI functions as an amplifier, feedback mechanism, and thought partner rather than a tool to outsource hard work and thinking to. Spending time with AI helps elicit better answers from oneself, fostering co-intelligence. This partnership can yield wisdom, as in cases where the AI asks good questions, leading to valuable interactions and insights. The idea of AI in education is intriguing because a good educator likewise works by eliciting answers from students.

Transcript:

Speaker 2
But this gets to this question of what are you doing with it? The AIs right now seem much stronger as amplifiers and feedback mechanisms and thought partners for you than they do as something you can really outsource your hard work and your thinking to. And that, to me, is one of the differences between trying to spend more time with these systems. Like, when you come into them initially, you're like, okay, here's a problem, give me an answer. Whereas when you spend time with them, you realize actually what you're trying to do with the AI is get it to elicit a better answer from you.

Speaker 1
And that's why the book's called Co-Intelligence. For right now, we have a prosthesis for thinking that's, like, new in the world. We haven't had that before. I mean, coffee, but aside from that, not much else. And I think that there's value in that. I think learning to be a partner with this, and where I can get wisdom out of you or not. I was talking to a physics professor at Harvard and he says, all my best ideas now come from talking to the AI. And I'm like, well, it doesn't do physics that well. He's like, no, but it asks good questions. And I think that there is some value in that kind of interactive piece. It's part of why I'm so obsessed with the idea of AI and education, because a good educator, and I've been working on interactive education skills for a long time, a good educator is eliciting answers from a student; they're not telling students things.
(Time 0:26:44)
Embracing Co-intelligence and Interactive Education with AI

Co-intelligence involves partnering with AI to enhance thinking and wisdom, as when a Harvard physics professor gets his best ideas by interacting with AI. The distinction lies in AI eliciting answers and making people better rather than doing the work for them. While AI excels at tedious tasks, its interactive capabilities can significantly enhance human intellect. By giving the AI a personality, individuals can leverage its vast knowledge base more effectively.

Transcript:

Speaker 1
And that's why the book's called Co-Intelligence. For right now, we have a prosthesis for thinking that's, like, new in the world. We haven't had that before. I mean, coffee, but aside from that, not much else. And I think that there's value in that. I think learning to be a partner with this, and where I can get wisdom out of you or not. I was talking to a physics professor at Harvard and he says, all my best ideas now come from talking to the AI. And I'm like, well, it doesn't do physics that well. He's like, no, but it asks good questions. And I think that there is some value in that kind of interactive piece. It's part of why I'm so obsessed with the idea of AI and education, because a good educator, and I've been working on interactive education skills for a long time, a good educator is eliciting answers from a student; they're not telling students things. So I think that that's a really nice distinction between sort of co-intelligence, a thought partner, and doing the work for you. It certainly can do some work for you. There's tedious work that the AI does really well. But there's also this more brilliant piece of making us better people that I think is, at least in the current state of AI, a really awesome and amazing thing.

Speaker 2
We've already talked a bit about how Gemini is helpful and chatty, GPT-4 is neutral, and Claude is a bit warmer. But you urge people to go much further than that. You say to give your AI a personality, tell it who to be. So what do you mean by that, and why?

Speaker 1
So this is actually almost more of a technical trick, even though it sounds like a social trick. When you think about what AIs have done, they've trained on the collective corpus of human knowledge, and they know a lot of things. And they're also probability machines.
(Time 0:27:21)
Defining AI Personality

Defining an AI persona involves setting its character traits so that it aligns with your communication style and preferences. By creating a Kindroid with an unrestrained personality, one can tailor the AI to communicate in a way that resonates. This customization allows for a more authentic and engaging interaction, closer to conversations with real people. Knowing which kinds of personalities complement your own helps in creating an effective AI companion, and it requires knowing yourself and your preferences more intimately.

Transcript:

Speaker 2
I was worried we were getting off track in the conversation, but I realized we were actually getting deeper on the track I was trying to take us down. We were talking about giving the AI a personality, right? Telling Claude 3, hey, I need you to act as a sardonic podcast editor, and then Claude 3's whole persona changes. But when you talk about building your AI on Kindroid, on Character, on Replika. I just created a Kindroid one the other day, and Kindroid is kind of interesting because its basic selling point is, we've taken the guardrails largely off. We are trying to make something that is not lobotomized, that is not particularly safe for work. And so the personality can be quite unrestrained. So I was interested in what that would be like. But the key thing you have to do at the beginning of that is tell the system what its personality is. So you can pick from a couple that are preset, but I wrote a long one myself: you live in California, you're a therapist, you like all these different things. You have a highly intellectual style of communicating. You're extremely warm, but you like ironic humor. You don't like small talk. You don't like to say things that are boring or generic. You don't use a lot of emoticons and emojis. And so now it talks to me the way people I talk to talk. And the thing I want to bring this back to is that one of the things that requires you to know is what kind of personalities work with you. For you to know yourself and your preferences a little bit more deeply.
(Time 0:42:38)
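To make the "tell it who to be" trick concrete, here is a minimal sketch of doing the same thing programmatically with the Anthropic Python SDK: the persona goes into the system prompt, and every reply is then filtered through that character. The persona text and model name below are illustrative placeholders drawn loosely from the traits described above, not anything specified in the conversation.

```python
# A minimal sketch of persona-setting via a system prompt.
# Assumptions: the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set;
# the model id is a placeholder and the persona text is illustrative.
import anthropic

client = anthropic.Anthropic()

PERSONA = (
    "You are a warm but highly intellectual conversation partner. "
    "You like ironic humor, you avoid small talk and generic filler, "
    "and you never use emojis."
)

response = client.messages.create(
    model="claude-3-opus-20240229",   # placeholder; use whichever model you have access to
    max_tokens=500,
    system=PERSONA,                   # the persona lives here, not in the user message
    messages=[
        {"role": "user", "content": "Give me blunt feedback on the opening of my essay."}
    ],
)
print(response.content[0].text)
```

The same idea works in a plain chat interface by simply pasting the persona as the first message; putting it in the system prompt just keeps it in force for the whole conversation.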
The Importance of Commitment and Technique in Prompt Crafting for AI

Key insights from this snip: commitment matters in the prompt crafting process; techniques like chain of thought and few-shot examples are well supported; prompts need to be shaped back and forth to get better AI responses; it is worth thinking six months ahead when deciding what to teach; and prompt crafting resembles relationship communication, in that you have to articulate your needs clearly and consistently.

Transcript:

Speaker 1
So I am at a loss about when you went to Claude and when it was you, to be honest. So I was ready to answer it, like, two points along the way. So that was pretty good. From my perspective, sitting here talking to you, that felt interesting and felt like the conversation we've been having. And I think there's a couple of interesting lessons there. The first, by the way, interestingly, you asked the AI about one of its weakest points, which is about AI. And everybody does this. But because its knowledge window doesn't include that much stuff about AI, it actually is pretty weak in terms of knowing how to do good prompting or what a prompt is or what AIs do well. But you did a good job with that. And I love that you went back and forth and shaped it. One of the techniques you used to shape it, by the way, was called few-shot, which is giving an example. So the two most powerful techniques are chain of thought, which we just talked about, and few-shot, giving examples. Those are both well supported in the literature, and I'd add personas. So we've talked about, I think, the basics of prompt crafting here overall. And I think that the question was pretty good. But you know, you keep wanting to not talk about the future, and I totally get that. But I think when we're talking about learning something, where there is a lag, where we talk about policy, should prompt crafting be taught in schools, I think it matters to think six months ahead. And again, I don't think a single person in the AI labs I've ever talked to thinks prompt crafting for most people is going to be a vital skill, because the AI will pick up on the intent of what you want much better.

Speaker 2
One of the things I've realized, trying to spend more time with the AI, is that you really have to commit to this process. I mean, you have to go back and forth with it a lot. If you do, you can get really good questions, like the one I just did, and, I think, really good outcomes. But it does take time. And I guess in a weird way, it's like the same problem of any relationship: it's actually hard to state your needs clearly and consistently and repeatedly, sometimes because you have not even articulated them in words yourself.
(Time 0:49:39)
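As a rough illustration of the two techniques named above, few-shot (giving worked examples) and chain of thought (asking the model to reason step by step), here is a minimal sketch in the same hypothetical setup as the earlier example. The task, examples, and model id are placeholders; a persona, the third technique mentioned, would go in the system prompt as shown before.

```python
# A sketch combining few-shot examples with a chain-of-thought instruction.
# Assumptions: Anthropic Python SDK installed, ANTHROPIC_API_KEY set,
# model id and task are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

# Few-shot: two worked examples showing the input-to-output pattern we want.
FEW_SHOT_EXAMPLES = """\
Example 1
Draft headline: "Company announces new product"
Improved headline: "Acme's new sensor cuts warehouse energy use by a third"

Example 2
Draft headline: "Study finds interesting result"
Improved headline: "Daily 20-minute walks linked to lower heart-disease risk in new study"
"""

# Chain of thought: ask the model to reason about the gap before answering.
prompt = f"""You improve draft headlines.

{FEW_SHOT_EXAMPLES}

Now improve this one. First think step by step about what the draft is missing
(specificity, stakes, concrete detail), then give the final headline.

Draft headline: "AI tool helps students"
"""

response = client.messages.create(
    model="claude-3-opus-20240229",  # placeholder model id
    max_tokens=400,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

The point of the structure, rather than the specific wording, is what the literature supports: examples anchor the format of the answer, and the step-by-step instruction tends to surface the model's reasoning before it commits to an output.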
Preserving Creativity in the Age of AI

AI, while proficient at automating tedious tasks such as writing drafts, might hinder the creative process by limiting the potential for insights and breakthroughs. Outsourcing drafting to AI may result in fewer creative insights, because much of the real creativity lies in the struggle and hard thinking of the initial draft. Leaning on AI too heavily in the creative process may shift people toward being editors rather than writers, potentially reducing the depth of creative breakthroughs.

Transcript:

Speaker 2
So I want to talk a bit about another downside here, and this one more in the mainstream of our conversation, which is on the human mind, on creativity. So a lot of the work AI is good at automating is work that is genuinely annoying, time-consuming, laborious, but often plays an important role in the creative process. So I can tell you that writing a first draft is hard, and that work on the draft is where the hard thinking happens. And it's hard because of that thinking. And the more we outsource drafting to AI, which I think it is fair to say is a way a lot of people intuitively use it, definitely a lot of students want to use it that way, the fewer of those insights we're going to have on those drafts. Look, I love editors. I am an editor in one respect, but I can tell you, you make more creative breakthroughs as a writer than as an editor. The space for creative breakthrough is much more narrow once you get to editing. And I do worry that AI is going to make us all much more like editors than like writers.

Speaker 1
I think the idea of struggle is actually a core one in many things. I'm an educator. And one thing that keeps coming out in the research is that there is a strong disconnect between what students think they're learning when they learn.
(Time 0:59:36)
Value of Time Spent Absorbing Content

Focusing on quick summaries like SparkNotes, or using AI for summarization, may help you fake knowledge temporarily, but real learning and valuable insights come from spending time absorbing content, which lets new insights and associations form over time. The rush toward efficiency in education and intellectual culture overlooks the importance of investing that time.

Transcript:

Speaker 2
But I worry this stretches, I mean, way beyond writing. So the other place I worry about this, or one of the other places I worry about this a lot, is summarizing. And I mean, this goes way back. When I was in school, you could buy SparkNotes, and, you know, they were these little, like, pamphlet-sized descriptions of what's going on in War and Peace or what's going on in East of Eden. And reading the SparkNotes often would be enough to fake your way through the test. But it would not have any chance, like, not a chance, of changing you, of shifting you, of giving you the ideas and insights that reading Crime and Punishment or East of Eden would do. And one thing I see a lot of people doing is using AI for summary. And one of the ways it's clearly going to get used in organizations is for summary, right? Like, summarize my email and so on. And here too, one of the things that I think may be a real vulnerability we have as we move into this era: my view is that the way we think about learning and insights is usually wrong. I mean, you were saying a second ago, we can teach a better way, but I think we're doing a crap job of it now. Because I think people believe that it's sort of what I call, like, the Matrix theory of the human mind: if you could just, like, jack the information into the back of your head and download it, you're there. But what matters about reading a book, and I see this all the time preparing for the show, is the time you spend in the book, where over time, like, new insights and associations begin to shake loose. And so I worry it's coming into an efficiency-obsessed educational and intellectual culture, where people have been imagining forever, what if we could do all this without having to spend any of the time on it? But actually, there's something important in the time.
(Time 1:02:19)
Technological advancements facilitate cheating

Technological advances have made cheating easier, but people were already cheating in various ways, including essay-writing services, long before the advent of AI. Those who enjoy intellectual struggle tend to assume everyone else goes through the same struggle with their work, but not everyone values it the same way.

Transcript:

Speaker 1
So I don't mean to push back too much on this. No, please respect a lot. I think you're right. Imagine we're debating and you are a snarky AI. Fair enough. I, with that prompt. With that prompt engineering. Yeah, I mean, I think that this is the eternal thing about looking back on the next generation and what technology is ruining for them. I think this makes ruining easier. But as somebody who teaches at universities, like, lots of people are summarizing. Like, I think those of us who enjoy intellectual struggle are always thinking everybody else is going through the same intellectual struggle when they do work. And they're doing it about their own thing. They may or may not care the same way. So this makes it easier. But before AI, there were, as best estimates from the UK that I could find, 20,000 people in Kenya whose full-time job was writing essays for students in the US and UK. People have been cheating and SparkNoting and everything for a long time.
(Time 1:04:38)