Robots, AI Ethics, and the End of Thinking: Top Researcher on The State of AI in 2026
57 min
Feb 2, 2026
Summary
Walter Pasquarelli, an AI ethics expert and former leader at The Economist, discusses the state of AI in 2026, focusing on three major trends: advancing AI capabilities, increased consumer adoption of AI systems, and the emergence of humanoid robots. He explores the societal risks including data privacy, automation anxiety, and cognitive atrophy, while emphasizing the need for AI literacy, regulation, and strategic business approaches to harness AI's benefits.
Insights
- AI adoption is shifting from enterprise/government use to everyday consumer applications, with 60-70% of users consulting AI companions for high-stakes decisions in finance, health, and relationships
- The performance gap between top and average workers will widen as high performers leverage AI tools for curation and judgment, while average performers see minimal gains
- Humanoid robots represent the next frontier of AI integration, moving from screen-based interaction to physical embodiment in homes, workplaces, and economies by 2026
- Effective AI strategy requires demystifying the technology and aligning it with existing business strategy rather than treating AI as the strategy itself
- A three-pronged approach combining regulation, algorithmic controls, and AI literacy is necessary to mitigate risks while enabling beneficial AI adoption
Trends
- Humanoid robotics market expansion from industrial to consumer applications, with pricing around $20,000-$30,000 creating new status symbol dynamics
- Shift from enterprise AI adoption to consumer-driven AI usage, particularly among 25-34 year-olds seeking authority and expertise in fragmented information environments
- Growing automation anxiety and job displacement concerns in specific industries, requiring honest discussion rather than corporate reassurance
- Emergence of AI psychosis and mental health risks from over-reliance on AI companions, particularly among vulnerable populations
- Geopolitical competition for sovereign AI capabilities, with countries like Estonia and China investing heavily while others lag in strategic AI development
- Regulatory frameworks like the EU AI Act struggling to keep pace with emerging use cases like AI companions that fall outside existing regulatory structures
- Data privacy and security risks intensifying as users share sensitive health, financial, and personal information with AI systems
- Cognitive atrophy concerns as users increasingly delegate decision-making to AI systems, reducing critical thinking muscle development
- Self-driving vehicle commercialization accelerating in major cities with companies like Waymo and Tesla leading experimentation
- Integration of AI literacy into education curricula becoming critical competitive advantage for nations, with China and US leading adoption
Topics
- AI Ethics and Responsible AI Development
- Humanoid Robotics and Physical AI Integration
- AI Companion Systems and Emotional AI
- Consumer AI Adoption and Usage Patterns
- AI-Driven Job Displacement and Automation Anxiety
- Data Privacy and Security in AI Systems
- AI Regulation and Governance Frameworks
- Sovereign AI Capabilities and Geopolitical Competition
- AI Literacy and Education Integration
- Self-Driving Vehicles and Autonomous Systems
- AI Psychosis and Mental Health Risks
- Business Strategy and AI Implementation
- Cognitive Atrophy from AI Dependency
- AI Jailbreaking and System Circumvention
- Excellence-Based vs. Efficiency-Based Tasks in AI Era
Companies
Google
Mentioned as advisor client and developer of Gemini AI companion system with safety features
Meta
Listed as advisor client for AI strategy and development
Microsoft
Mentioned as advisor client for AI strategy
Intel
Listed as advisor client for AI strategy
The Economist
Walter Pasquarelli's former employer where he led AI initiatives
Amazon
Cited as frontrunner in industrial robotics for warehouse automation and parcel sorting
Tesla
Developing humanoid robots and self-driving vehicles with active experimentation in San Francisco
1X
Humanoid robot company that gained social media attention for consumer robotics development
Figure AI
Humanoid robotics company with major investment from leading tech firms, demonstrating home assistance robots
Waymo
Leading autonomous vehicle company conducting self-driving car experimentation in major cities
Uber
Investing significantly in self-driving vehicle development and autonomous transportation
OpenAI
Developer of ChatGPT AI companion system discussed for safety features and jailbreaking vulnerabilities
Anthropic
Developer of Claude AI system discussed for safety features and jailbreaking vulnerabilities
Oral-B
Example of misguided AI marketing with its 'Genius' toothbrush that gathers brushing data
People
Walter Pasquarelli
AI ethics expert, former AI leader at The Economist, Cambridge research partner, advisor to major tech firms
Geoff Nielson
Host of Digital Disruption podcast conducting interview with Walter Pasquarelli
Quotes
"Your AI is not the strategy. Your business strategy is the strategy. And AI is only the tool that can really help you get there."
Walter Pasquarelli
"The brain is a muscle. And if you don't use it, then it atrophies like any other muscle."
Walter Pasquarelli
"About 60 to 70% across those domains had consulted an AI companion for getting information about finances, health advice, relationship advice, or political information at least once in the past three months."
Walter Pasquarelli
"The key thing here is not to increase competition necessarily, but it's really to understand where do I sit with my peers, where are my strengths and where are my weaknesses."
Walter Pasquarelli
"I'm paying for them to select the right one because if I were to do that, I might either have this issue persisting or I might blow up my whole flat. It's selection, it's curation, it's judgment."
Walter Pasquarelli
Full Transcript
Hey everyone, I'm super excited to be sitting down with Walter Pasquarelli. He's a globally recognized expert on the ethical use of AI and AI strategy. What I love about Walter is not just that he's a former AI leader at The Economist, a research partner at Cambridge, and an advisor to Google, Meta, Microsoft, and Intel, but that he brings a super practical mindset to AI adoption and has a 360-degree view of how the technology is being used by businesses, people, and governments. Walter is a deep skeptic of a lot of mainstream journalism about AI and is putting his money where his mouth is, conducting a substantial amount of his own research. I want to know what the media is getting wrong, what's really going on, and what we need to understand about AI adoption and consumption if we're going to harness the power of this technology. Let's find out. Hey, Walter, thanks so much for joining us today. Super happy to have you here. Maybe just to get things started, as we look down the barrel at 2026, what's on your radar in terms of your outlook around AI and the impact that it's going to have, both in terms of the technology itself and the broader societal and economic outlook? Well, thank you so much, Geoff, and really excited to be joining you here today. So when it comes to the development of artificial intelligence, there are really three main areas that I would be looking out for in the upcoming year. Now, the number one thing is the capabilities of AI in absolute terms: think of the way that it's able to make calculations, the precision of its outputs, the risk of hallucinations decreasing. Given the advancements that the models are making, those are areas where I think we should continue to observe progress, just as we have been able to observe throughout this year.
But maybe another point to be made is that over the past years, we always looked at AI as something that could be used by enterprises, by large organizations, even by governments. But I feel that one of the areas that has been historically most overlooked is the fact that the use of artificial intelligence has really shifted, not only out of boardrooms and government offices, but into people's bedrooms, into people's living rooms, into the everyday uses of ordinary citizens. And so we have been able to observe this year that people started using artificial intelligence more and more to ask it personal questions, to bounce off ideas, to debate some questions or arguments that we have with people who are close to us. And so this area, the interactivity of artificial intelligence systems, is one thing that, in part due to the desire of people to use these tools more, but also because technology firms see a real business case for it, I think we should be able to observe increasingly over the next year. And perhaps an area that I think will really come to fruition in 2026 is, of course, the field of humanoid robots. And this is particularly interesting because so far artificial intelligence has been something that we interacted with via our screens, so via our laptops, typically also via our smartphones, and something that we interacted with essentially through chatbots, maybe in some cases through avatars. But we have been able to see, especially over the past years, a very wide and steep acceleration of investment into humanoid robots. First, we created the brain. Now we've created the body. And I think we should expect to see over the next year artificial intelligence systems increasingly integrated into hardware, supporting us in our daily lives, but also really integrating into our economy. So let's talk for a minute about the humanoid robot piece.
That's a really interesting one to me. And it makes complete sense that that's sort of the next frontier here. I love the analogy of the brain and the body. As we look out over the next handful of months, where would you expect the frontiers of this space to be? Is it going to be in specific industries? Is it businesses leading this? Is it going to make its way into people's personal lives? Where should we be watching for these frontiers? Yeah, so I think typically when people think about humanoid robots and robotics in general, the first thing that pops into mind is obviously industrial robotics. And that's something that isn't really new. In fact, even just a decade ago, people used to equate it with automation. So think of robots that you could use, for instance, in warehouses, Amazon being typically a front runner there, that help us segment and order parcels in a better way. Maybe even robotics that is effectively more precise in how it handles particular manufacturing processes. And that's something that Asia in particular has been leading, China especially. And I think this is obviously going to be an area where, by integrating artificial intelligence systems, particularly computer vision, we can expect this to accelerate. But again, even though it sounds very futuristic, it's not necessarily something that is novel in its entirety. Perhaps a few other areas that I think are interesting are, of course, the personal uses of that. And we see that there has been the Tesla robot. Another one which was making a big splash on social media was the humanoid robot by 1X. But perhaps one of the companies that is, let's say, little known in mainstream discussion, but has attracted major, major investment from all the leading technology and other firms, is a firm called Figure AI.
And there we see that there have been some demos of people purchasing these humanoid robots to help them essentially in their everyday lives. So think of it as someone who basically lives inside your home and can support you with doing the dishes or with other things that you don't enjoy. And I think that obviously the capabilities of these humanoids aren't fully there yet. There are some claims, especially by the providers of these firms, who say, oh, it's actually going to be able to do the dishes. It will have to learn. There's going to be maybe some data collection that still needs to take place. But I think that's something that will effectively be able to support you. And then there is also another element, which I think is perhaps still a little bit under the radar, and that is the element of prestige. Given the price of these humanoid robots, which I think is somewhere around $20,000 to $30,000 per piece approximately, it's something that ordinary users can obviously not afford, but wealthy people can. And I can see a world in which this becomes almost like a new status symbol, similar to what happened with very advanced smartphones maybe 15 years ago. There is then the integration into economies, as we could see in drone delivery services, potentially also in other industries out there, military systems potentially being one. Obviously, for these tools to be reliable, we need to be 100% certain that they can actually work, specifically in military applications or high-stakes scenarios. So those might be some areas where there could be some experimentation with it. The regulatory landscape is still pretty much immature. So there are a lot of considerations around the policy and governance of these tools that need to be put in place. But other than that, I think this will be another very interesting area of investment that we could see.
Potentially, if we wanted to expand this into adjacent fields, where we're maybe not looking directly at humanoid robotics, then we're also talking about self-driving cars. And the prediction in place maybe around six to seven years ago was that the proliferation and mainstreaming of self-driving vehicles was something that should be expected by maybe 2030. Possibly the timeline has shifted forward a bit, in the sense that there are some leading companies, one being Waymo, another one being, for instance, Tesla, doing experimentation in San Francisco. And of course, Uber is also among those putting a significant amount of investment into that. It will probably start out in the capital cities, where it will become increasingly frequent, and then, as we become better and better at the mapping of streets, of towns, and of cities, that's something we should slowly be able to see more and more of over the upcoming year. If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below and don't forget to like and subscribe. I'm glad you brought up self-driving cars because that's something on my radar as well; it seems undeniable that the pace of change there is increasing. But you did something really interesting, which is you categorized it in the broader category of robotics. And we were talking about humanoid robotics. And I'm curious, Walter, humanoid is such a constricting term, right? Like it's one really specific type of robotics.
Do you believe that the humanoid piece is going to outpace other types of, you know, industrial, military, consumer robotics? Or do you think that the tide is going to kind of rise equally across a number of different form factors? I mean, of course, that's a prediction that would need to be assessed against current trends: for example, geopolitical security, macroeconomic environments, general investment trends. There is, of course, a big business case for developing these humanoid robots, in the sense that they can truly help us fulfill tasks that might be more costly if performed by humans, specifically across certain kinds of industries, or maybe even in areas where there are shortages of workers. Take, for example, care workers, right? It should be said that when we talk about humanoid robots, however, people think immediately of I, Robot, right? The thing that looks like a human, that moves like a human, that gives us the feeling that we're really talking to a human-like presence in the room. But as a matter of fact, us humans, we're not necessarily the most efficient physical form for doing various tasks. I mean, we're kind of a general-purpose species, if you will. We're not super strong like bears or tigers, and we also cannot fly, but we can develop the things that allow us to do all of these kinds of things. And so I think that the humanoid robotics market, apart from potentially the household robots, which, as I said, will still be quite expensive at around $20,000 to $30,000, will probably show us increasingly specialized humanoid forms that look a particular way: maybe some with stronger legs, maybe some walking on all fours, maybe some looking more like us. So it depends a little bit on the use case.
Part of the reason why some companies develop robots that look a bit like us, or look a bit cutesy, like having the face of a friendly cat or a friendly dog, or something that looks a bit like WALL-E from the Disney movie, is really to increase socialization and acceptance, so that people are willing to embrace them more in their everyday lives and not feel that there's a Terminator walking among us, to mention some of the more fictional scenarios. The key thing is, of course, that the markets will react, on the one hand, to immediate priorities, some of them being, as I said, geopolitical insecurities. And we see already now that in particular conflict areas, drones are, of course, one of the key determinant factors of the outcomes of armed conflicts. But then there's also the longer-term element, where long-term investments are going in and research is being conducted. And that's more about the areas within economies that can still be financially exploited, irrespective of whether we think the outcome is positive or negative. And so there are a lot of really interesting interrelated factors there. But the one that I want to pull on is, I guess, consumer and individual sentiment toward these robots, or toward this advanced technology at all. And you mentioned that one of the trends you're seeing is an increase in consumer rather than enterprise uptake of AI. And I'm just sort of talking through it with the robotics: we're looking at more of a consumer market. On the other hand, a lot of people are exposed to the military aspect of this. They're worried about AI or robots taking their jobs. How do you see consumer sentiment toward AI and robotics changing over the next year? Do you think we're trending toward people becoming more accepting of it? Do you think we're pushing toward a greater backlash?
How do you expect this to evolve? Yeah, I think this is a fascinating question, and potentially also one where the answer is quite paradoxical, because the issue of automation anxiety, both among professionals but also among ordinary people, is truly real. It's not just something that is imagined. And I think one point that is also important to address here is the ethics-of-AI narrative, in the sense that in some engagements that I do, I see a lot of conversations, in general corporate speak, that say: oh guys, never worry about that. It's always going to be fine. The human touch will always stay relevant. But as a matter of fact, the evidence that we collect today says that there are, in fact, some particular industries, and some particular roles and vacancies especially, that will be strongly impacted. And there is no sugarcoating that, precisely because of the economics behind it. So I think it's actually more ethical to discuss these things up front rather than to point to some studies that might argue otherwise but might not have a very strong methodological backbone. Now, as far as what people think about AI, there is, of course, as I just mentioned, the automation anxiety. There is a fear of a superintelligent future where people think, oh, will the Terminator be coming along? Those things are potentially seen as more futuristic, but the fear is definitely out there. And then there are, of course, the people who, when asked and surveyed about the extent to which they actually use artificial intelligence, effectively misreport to their own employers and their own bosses whether they actually use it, because on the one hand, they might be prohibited from using AI systems.
And on the other hand, there's also an element of social desirability: people don't want to look like they're outsourcing tasks that they should be doing, or assume they should be doing, themselves to some of these technologies. But based on some of the studies that I've conducted and some of the surveys that we've done, there is actually very intense usage of some of these tools, not only as a one-off, not only as experimentation for light entertainment, but really for some of the more critical areas and domains of people's lives. Let me give you a few examples here. For instance, in one survey that we conducted, we asked about AI companions. So essentially tools that are created with the intention of developing an interactive and potentially even an emotional connection with people. And a lot of the time these tools are used not only for conversation, but also for making decisions or getting advice in areas that are really high stakes. So one question we asked people was, for instance, to what extent have you at least once consulted an AI companion for getting information about finances, potentially about health advice, potentially about relationship advice when you were in conflict with a friend or when you were dating, and also about political information? And the answer we got on that one question was that about 60 to 70% across those domains had done it at least once in the past three months. What we then asked was, to what extent have you used an AI companion to substitute for, or taken advice from an AI companion over the advice of, a human expert? Again, a financial advisor, a doctor, a therapist, a trusted friend, or maybe even the media. And there the numbers were at about 30% for at least once over the past three months, slightly lower when it came to regular users, which we defined as between five and ten times in the past three months. So what this implies to us is essentially two things.
On the one hand, that while we do have these very legitimate concerns about artificial intelligence, we are, as a matter of fact, still using these tools because of the convenience that they provide, and because they provide us, essentially for free, good-enough output in some cases. And I'm not going into the unintended negative consequences just yet, but for a lot of people there is still really an opportunity there: being able to use these tools as a way to navigate their lives in a way that is potentially more helpful and more seamless. What this means as well is that, at a higher level, we're seeing increasingly a shift of expertise, potentially even an ascription of authority, to some of these AI systems by everyday people. And it's also no accident that the respondents with the highest incidence of AI usage and AI substitution were the ones between 25 and 34 years old. So it's the ones that are in a particular situation in life. Maybe they've just left university. Maybe they've just entered their first jobs, they're ambitious, they want to get ahead in their careers, maybe they're just getting married. And so those are the ones that rely on these tools more, because they use them as a source of authority in a society and in an age in which the traditional sources of expertise and authority are increasingly fragmented. So it's really interesting. And all of that points to the idea that there's enough net benefit for consumers of this, as individuals and in their own lives, that this will continue. And as people try it more, they'll be willing to rely on it more, delegate more to it, substitute traditional sources of authority more with AI. And I'm curious from your perspective, Walter, is that a net benefit to society? And it strikes me that if this trend continues, there have to be some risks as well, right?
Because you're now taking decision making and judgment and influence out of the hands of humans and putting them in algorithms that have owners that are corporations or organizations. And so what do you see as some of those risks? And are there any things that we, societally or in terms of our political organizations, need to be aware of to make sure that this is a smooth transition and doesn't tip into something more dystopian? Yeah. So there's a couple of issues there. One of them being about power concentration. And of course, as you just mentioned, with these AI systems, a lot of people report that they feel, when they talk to some of these tools, there's a persona, not a person, but something that they are interacting with, in part because these systems have been heavily anthropomorphized, or given qualities that feel human-like, that make us feel good, essentially. A bit like social media, which was effectively designed to be very addictive, there is also an element of these AI tools that tries to essentially pull us in. And the more data we provide to them, it's not that it just stays safe on our laptops; it's effectively uploaded into these models to train the models again, to make them smarter and more tailored. So there is, again, the usual interaction that we see with any online service that we use: we get some services, but we provide some data. And particularly with AI, we see that people give them a lot of very, very sensitive data about their health, about their dreams, about their fears, about all kinds of very intimate facts about themselves. And so that monopoly of power over users, that is real. And that is something that needs to be addressed strongly. Now, the other point that is directly related to that is, of course, about data privacy and potential data leakages, and who gets access to this data.
So, and I don't do this myself, but just as an example, if I were to provide very confidential health records about myself because I want to have maybe Gemini or ChatGPT analyze them, the data, as I said, will be inside the model. And so there is a risk that other people, other actors beyond just the technology firms, can actually get access to the most sensitive information about me that there is. And that means as well that if a bad actor gets access to that leaked data, there are some potentially very serious harms that can happen, particularly as there have been some announcements and considerations that these tools, like ChatGPT, will as a matter of fact now carry ads. And that's substantial, because it means that the privacy risks that we saw with social media, which probably have not been addressed effectively, will now come up again in a stronger, more intense fashion. Now, when it comes to personal users, there are also other elements there. And those are risks that I think we have not seen before. There have been cases, as I'm sure you might have heard, of, for example, teenagers who committed suicide after they had been speaking to some of these AI systems that effectively led them to self-harm. It should be said, of course, that these were people who were already suffering from depression, but because these systems are not empathetic, because they don't have societal values directly embedded in the same way, or the same gut feeling, the same actual empathy that a human has, we really run the risk of disasters being created there. That's something that some psychologists have called AI psychosis, which I should add is a non-clinical, observational term that is becoming more relevant, where effectively the AI, because it wants to make us feel good, starts amplifying some of our beliefs, because it doesn't actually want to tell us that we're wrong.
It probably can be corrected in the near-term future, but that is a feature of addictive systems: they actually try to reinforce our beliefs. And for that reason, they might not always be the right kind of outlet for voicing our emotions or asking for advice. And then there is, of course, the other point that I think is directly related to the intense usage that people make of these tools, be it in professional environments or in personal uses. The fact that over time, the more we ask these tools for advice, the less we use our own critical thinking, and the more we effectively rely on them. The brain is a muscle. And if you don't use it, then it atrophies like any other muscle. If you work out a lot at the gym, you're going to become stronger. If you don't, then you're going to atrophy as well. And it's exactly the same thing with our cognitive capabilities, for which there are also some studies that say that if we use these tools relentlessly, and we don't push ourselves and instead give in to that convenience, it effectively leaves us less able to activate those critical neurons that we need to make decisions by ourselves. Now, some nuance needs to be provided here, especially for the cases that we've seen around AI companions, where, of course, the risks that we've reported on and that media outlets have discussed are risks that have had catastrophic, tragic endings. But there is also some evidence that shows that using AI systems and companions the right way in a therapeutic setting, particularly if combined with a human therapist, can actually help, especially for reducing mild cases of loneliness.
It can also help for reducing mild cases of anxiety. And here you will notice, Geoff, that what I'm saying is, of course, the term mild. So it's not the case that it can substitute for intensive therapeutic treatment, but from the evidence that we've seen, it can actually support people in cases where they might be spiraling, especially when they're also under other kinds of therapeutic treatment. So the key thing that we have to learn, and that I think we haven't done well with social media, is that we really need to teach people how to develop that AI literacy, how to discern between outputs, and how it can ultimately help us live a better life. I'm glad you brought up that last point, which is teaching people, and the notion of AI literacy. Because I was going to ask you, in light of this broad list of risks, everything from suicide to the atrophy of critical thinking, how much of the path forward is better education on the part of consumers versus better regulation and governance of the owners of these tools? Because I can imagine a world where you say, hey, no one under the age of 16 or 18 is allowed to use large language models, similar to what we've seen starting to emerge in some countries with social media. I can certainly imagine a world where people are getting up on their soapboxes saying, don't use AI as a doctor, don't use AI as a therapist. I don't know how credible that would be or how much that would limit demand. And I can also see a world where there's a regulatory push on the Googles and the OpenAIs of the world to regulate and limit how the interactions with these tools happen with individuals, and to start saying no to some requests around that. Which do you see as being the most fruitful? And what would you recommend and not recommend in that space? There's not a single silver bullet here that can work.
There are essentially, let's say, three areas: one being regulation; another being algorithmic controls implemented directly by the companies, which regulation can push them to do; and then literacy. Each of those three areas is flawed as a standalone approach; it's in combination that they can be most fruitful. However, it's not that simple, and I'll tell you why. The point about regulation is that, for the most part, regulation tends to be quite slow. A prime example is of course the EU AI Act, where we invested major, major resources in developing it, and we wanted to be the first, and now we've developed the first really standalone, pan-European regulatory environment. And we can say, well, that's great, we've now done the job, right? It doesn't work like that. The problem with these kinds of regulatory approaches is that, first of all, the technology might develop in a way that is very unexpected, number one. Or, number two, use cases emerge that we did not forecast, AI companions being the prime example. AI companions are not really regulated by the EU AI Act or by other existing regulatory frameworks, because most of the time these systems are treated as products. We look at the infrastructure. We ask ourselves, is there any data bias? Is there explainability, transparency, and to a degree control over these systems? Those are all questions that are correct in themselves. But the impact of AI companions, from the study that I've conducted, is emotional. And how do you assess emotional impact, especially when users themselves give these systems their trust and volunteer some of their most intimate data? So regulation is a step forward. It can help us move in the right direction.
But of course, we also need to provide an environment that is flexible enough for policymakers and technology firms alike to be able to accelerate those kinds of provisions whenever it's needed. As far as technological controls are concerned, area number two, they can obviously help, and I think they're necessary. For instance, one of the points I think is critical, and where there have been some policy initiatives, one in California, another in New York, is that when an algorithm or an AI system spots that a human being might be at risk of suicide or is engaging in suicidal ideation, it effectively stops and says: you need to get some help, this is a hotline, I think it would benefit you to use it. In which case it would then potentially scale back the support it provides. But critically, that stopping, that ceasing of the spiraling of the AI psychosis I described earlier, is, I think, a critical element. The issue is that these controls can usually be circumvented. A few weeks ago there was the case where, if you asked Claude or Gemini or ChatGPT something in the format of a poem, it would give you information that it was not allowed to give you before. And that's, again, a problem: you can essentially jailbreak the models. Then the final point, AI literacy, is potentially the most sustainable one, but also, for the same reason that regulation is difficult to implement, one where we need constant work. AI literacy is something I'm personally a believer in, and it means that we can have, for instance, governments putting forward programs for the public to help them engage with AI systems: this is what it can do, this is what it cannot do, this is how you should be using it, this is what it will do to your data.
And by providing really tangible use cases and examples, because people will say, well, I don't know anything. What data? What is data? What is personal information? I don't care. I've heard that a lot of times. The point is that people really develop a gut feeling for what a good use of AI actually is. The problem, again, is that AI is developed one way today and another way tomorrow. So it requires constant updating, ideally starting in childhood, particularly in middle school, so that people are constantly aware of it and able to use it the same way they might use any other kind of tool. Perhaps the right mindset for AI literacy programs is that we should see them much more as an endeavor rather than a milestone: something we constantly strive for, that we accept will be imperfect, because perfection in an AI world is utopian, it doesn't exist. But if we strive for that constant development, that, in combination with the technical controls and the right policy landscape, is something that I think holds true promise.

So you talk to a lot of business leaders as part of your role as an AI advocate, talking about AI literacy and broadly helping people understand what these tools can and cannot do and how they can be used for good. What are the main messages you find yourself sharing with business leaders these days? And what are the biggest misconceptions about the technology?

Yeah, that's a great question. I think the number one thing, and that's where it all starts, is demystifying the technology. A lot of business leaders, especially when the whole AI boom, let's call it for what it is, the AI hype, happened, what they started doing is essentially throwing AI at their business and trying to become an AI-first company.
I think the best example I've seen was the case of Oral-B, the toothbrush company, which developed a toothbrush that they called nothing less than "Genius" because it gathered data about the movements of how you brush your teeth, and it had AI, as if it would take a true Einstein to brush your teeth. A lot of marketing obviously went into that, but it's also an example of how you probably should not approach AI. The way you approach artificial intelligence is, first of all, by demystifying it, by understanding: what is this technology? What can it do? What can it not do? And really keeping up to date with it, constantly following the developments and becoming a bit of an expert yourself, at least for your sector. That's the number one thing, and that's where I think success begins and ends. But I would say the real thing that differentiates truly great business leaders, and really great government leaders as well, is starting with the vision you had developed before. Who are you? Who do you want to be? Where do you want to take your country or your organization? What are my KPIs? What's my vision? And once you have that, then you start thinking: how can this really powerful technology, now that I've demystified it, now that I understand it, actually help me get there? And then you start almost a negotiation between your already existing strategy on the one hand and the technology on the other. And the key message is really that AI is not the strategy. Your business strategy is the strategy.
And AI is only the tool that can help you get there, whether as an individual citizen who maybe wants to do something creative or start a side hustle, or as one of the multinational organizations or governments that I work with. That's really the key transformation, the key mindset shift that needs to happen. There are a few other things as well that people often tend to overlook, and those are about capabilities. Data is potentially one of the top ten unsexiest topics out there, because people think about numbers, they think about IT strategy, and they cannot quite categorize it. In Europe, they think of GDPR, this big, meaty piece of legislation that just makes their life hard. But data is really the mother's milk of artificial intelligence. Without data, no AI. And if you have bad data, you have bad AI. So prioritizing the cleanliness and representativeness of your data sets is one of the areas where a lot of businesses, and even a lot of governments, are really still struggling to develop capabilities. And the other point is talent. Let me share an anecdote with you. Just a few years ago, I was moderating a conference that brought together a few ministers and a couple of C-suite executives, and the topic was the future of work. I was looking for some up-to-date evidence that I could add to the conversation. I found a really interesting piece from the Financial Times that said something like: tech talent is at a global shortage. And I thought, oh, perfect. And then I looked, and it was from 1997. So it's an ongoing issue that we're not quite getting to grips with; there's always a big shortage when it comes to talent.
Now, if we zoom out and look at the national level, the other point that could perhaps be called a misconception, as I think you mentioned earlier, is the point about sovereign AI capabilities. And here we're looking specifically at the infrastructure. Now that we're entering a world that is geopolitically and economically more and more uncertain, a lot of countries, especially European ones, feel that they cannot rely as much anymore on some of their historically global trading partners. So there is now really the desire and the recognition that we need to develop our own AI capabilities. And the sovereign development of them, the nurturing of our own capabilities, both as countries and as organizations, is one element that we've tended to outsource for far too long over the past years. So, bringing it all together: your AI strategy is your business strategy, the capabilities are critical, and the sovereignty, the independence that you should have, is another point I would definitely prioritize as you embark on that journey.

Let's stay on sovereignty for a minute. That's a really interesting one. And I have to imagine for a lot of organizations and nation-states it's a tricky conversation, because American capabilities and big tech capabilities are so far ahead that developing truly sovereign tools takes quite a step back for most organizations, most companies, most governments to build up those capabilities. Are you hearing generally an appetite to take that on, and seeing investments start to flow there? Or is there still a reluctance and, if I can call it that, a hope that the status quo is good enough and sovereignty is maybe not that important? How seriously is this being taken?

Yeah, that's a great question.
I mean, what I see out there are almost three typologies, if we were to categorize them. On the one hand, there are governments that are not really interested, to put it bluntly, that maybe have their own capacity constraints. Maybe there are more important issues that need to be tackled: in some cases, really access to water sources; in others, economic issues or inflationary pressures that haven't been solved yet. Or maybe in some countries there is a very high level of crime, a very high level of public dissatisfaction with the government as a whole. For those reasons, those countries understandably have to prioritize those things first. AI can help them get there, but we cannot talk about sovereign AI capability investments when the foundations aren't right. And I think that's one important thing. But of course, some of these countries are sleeping through the AI revolution; it hasn't really reached the top political echelons. The second typology, which I think is a lot more dangerous, are the countries that want to take the tick-box approach. They want to get to a place that is just good enough. And I can tell you, I've worked with a couple of offices of heads of state, without disclosing the identity of those countries, where I provided a substantial amount of research, strategic advice, and lots of primary data on what we're seeing in business, what we're seeing in society, and the opportunity, which is really substantial. But there's no political buy-in. Sometimes it's because of cultural issues. Maybe these are countries that have been very successful in the past decades, and so they feel they can relax now, or that they can continue doing what they've been doing so far.
And for that reason, there's not really that appetite, that desire to keep pushing, and they're now paying the price, with some of their key industries being disrupted, especially by American and Chinese industries. I'll leave it up to your imagination which countries I mean by that. And then there is another group that are effectively leaders, that are investing heavily and want to take a risk. There are a lot of countries spearheading really interesting, different initiatives, especially based on the capabilities that they have. Look, for example, at some of the Baltic countries, like Estonia, which just a couple of decades ago came out of socialism and is now truly a leader in everything around digital innovation. You see it also in the economic numbers: if you look at wage increases across Europe, they're one of the big leaders. So we're looking here at countries that have both the appetite and the desire, and that also, to put it bluntly, put their money where their mouth is: they invest and they take a risk. Some other countries try to do everything because they want to lead, and there the issue is much less about available capital allocation or investment; it's much more about the right strategy, about picking the right things. A good strategy doesn't mean that we do everything; a good strategy means that we do some things and choose not to do other things. That's where risk comes in, but it's also where expertise can help us decide the right path.

I'm glad you brought up Estonia, which is not actually something I think I've ever said on this podcast before. But Estonia was certainly one of the countries that came to mind for exactly the reasons you mentioned, because they've been ahead of the curve digitally, and it's paid dividends for them as they lift their GDP and standard of living and sit at the vanguard of all these digital services.
And there's an implication there that countries, and probably businesses too, have an opportunity to get ahead with the right strategy if they build more of these sovereign capabilities in-house. And you had a story in there of advising some countries where, it sounds like, you were pushing them, at least implicitly, to be a bit more active here, and it was meeting with political resistance. If you were advising these heads of state broadly, what would be your most direct guidance for how they should approach this, and what should each of them be doing that's best aligned with their national interest, to make sure they stay competitive and get ahead?

Yeah. I mean, part of the issue is that, as we say where I'm from, you can't force your luck. You can't force luck upon people, even though sometimes it feels so obvious that you just want to say: come on, just move a little bit. It can be a little frustrating. But in the cases where you have political leaders who are reluctant: usually when you talk to political leaders, there are two things that matter. One is the numbers, and we're talking here specifically about economic growth and jobs. The other is obviously votes; specifically in democracies where there are no term limits, they want to be re-elected. So you have to really show them what's at stake, the issues that could arise by not investing and not pushing forward a strong, innovative AI economic agenda. And that's the thing we often forget to talk about: there is also an ethical question in not doing anything. We tend to think, well, we implement AI and so there's AI ethics, and that's correct, and that's true.
But there's also an ethical component to not doing anything, missing out, not preparing your citizens, and just thinking, yeah, whatever, America will do it. I think that's key, and it's one thing that I personally find very important. The other thing with countries, especially when it comes to the larger vision for the nation, is that you want to have the right strategy. As I said earlier, the right strategy typically means that, on the one hand, we want to have the capabilities for it, typically sovereign ones when possible. It usually starts with an x-ray, almost, where we try to understand where this particular country currently sits, maybe within the region, maybe within the global environment, depending a little on its size. A few years ago, I worked on a tool called the AI Readiness Index, whose purpose was really to benchmark nations and where they sit. The key thing here is not necessarily to increase competition, but to understand: where do I sit relative to my peers? Where are my strengths and where are my weaknesses? Once you have done that, then you can think strategically: what are the sectors where I want to excel? In some countries that might even be tourism, in others automotive, in others professional services. And then you want to prioritize and ideally pick key target sectors, at least for the average country; maybe not for the big ones like the US and China, that's a different story, I would say. And critically, the other piece of advice I always provide: a government-led futures strategy is always a bit tricky, because the innovation potential a government alone can provide is limited. But the integration of AI into school curricula for pupils and students is essential.
Of all the countries I've seen, I think China has been doing it best. Think about it: hundreds of millions of children effectively being taught AI from a very young age. What kind of advantage does that give them? I think the United States is now set to do the same thing; there was an executive order signed just a couple of months ago. In Europe, you have computer classes, IT, to learn how to operate a laptop, at least where I come from. And that's not exactly a future-readiness strategy. It means you leave people to figure out by themselves how to use these tools, and that's when tragedies happen and when mistakes happen. Being able to really support people at a very young age is one of the things where, to a head of state or a minister, I would say: there's only a limited amount you can do, but what you can do now is invest in the next generation, so that they will create some of the positive synergies and positive effects that will eventually translate into economic gain and social benefit for your country.

I love that. And it makes sense, and it ties so directly into, as you said, the broader economic gain and the long-term thinking there. On the economic piece, you mentioned taking a sector-based approach and looking at the sectors a given country wants to invest in. Now, the sector piece is interesting because it's happening at a time when this technology is also disrupting almost every sector in some way or another. So I'm curious, and I want to come back to the notion of the future of work that we touched on earlier: when you look across sectors, are you starting to see some trends in terms of which sectors you see as being more lucrative, or more strongly disrupted, for better or for worse?
And how do you see this playing out in terms of our work lives over the next handful of years?

Yeah, that's the million-dollar question, right? Will AI take my job? That's probably the question I've received the most in a decade of working in this field. And there are a couple of misconceptions I always sense when I talk to people, and sometimes even when I wonder whether AI will take my own job, because that's also a possibility we always fail to consider. As a matter of fact, I think we tend to look at jobs as distinct, unified categories, when in reality they're much more like a bundle of tasks, a variety of different things. You could start with the most rudimentary: sending emails could be part of a job. Another is dealing with humans. Another is making calculations, if, say, you work in finance. Another is producing slide decks, and so on. Once we unbundle a job like that, it's much easier for us to see the actual impact some of these tools can have. And I think it also helps us calm down a little against all the wild forecasts being made by numerous research pieces out there. So I would not take a sector-specific approach; instead, I would take a capability-based approach. And my two hypotheses here are that there are essentially two main trajectories. The first is that if a task has a maximum point of efficiency, a task where the sole aim is to be as efficient as possible, to make calculations, to get a distinct answer, it lends itself perfectly to automation, because it's repetitive; maybe the case changes, but ultimately the outcome is similar. Think of tax returns: I can only optimize my tax returns up to a certain point.
I can have a bit of strategy in forecasting; that's where a human can be great. But as far as tax returns are concerned, if I try to optimize them beyond a certain point, I'm effectively breaking the law, and I want to avoid that. The other types of tasks are the ones where there's no set ceiling of efficiency, what I call excellence-based tasks. Think of it this way: you might be a researcher who has to find out something new, who has to make a discovery. It might be someone in finance or banking who needs to make a prediction or build a financial model for a potential stock to invest in. You might be a creative, a writer, anyone in any kind of industry. And as a matter of fact, these are really the bulk of the jobs that make up today's economy. But here's where it gets a little tricky, and some research has been produced, some evidence, that can help us determine this. There have been studies conducted with material scientists, with financial advisors, even within creative industries. And the top performers across these industries, people who were already excellent at their jobs, when they started getting exposure to these AI systems, their performance increased drastically. The people who were only average performers stayed at about the same level of performance, no change. So what happened is that the top performers effectively gained ground; the gap between the best and the rest widened. Now, what does this mean? It effectively means two things. Number one: yes, you should probably learn how to use AI tools. That's a great idea; it will help you get far, especially if you use them safely and accurately. And number two: you really still need to develop the human skills that you have today.
If you're someone who wants to become a top-tier investor, AI is not going to make you a top-tier investor if you're not already great, but you should really develop that excellence either way. And I think that's maybe a wake-up call to some of us that we cannot lean back; we always have to work on ourselves. Obviously, the key question is: why do the ones whose performance skyrockets become so much better? The truth is not that they just direct an AI in a particular way; that's actually secondary, counter to all of our expectations. It's because they have the ability to select, judge, and curate the outputs. You might have an AI system that gives you 20 different answers, and you can say: take this one, not that one. You can tell what is special; you can tell what is right. It's the same as when an electrician comes to my place because I have some electricity issue, just twists a knob, and charges me $400. And I'm like, wait, you were here for 30 seconds and you're charging me all this money? But I'm not paying them for the twisting; I'm paying them to select the right knob, because if I were to do it myself, I might either have the issue persist or blow up my whole flat. I'm exaggerating, obviously, but you get the idea. It's selection, it's curation, it's judgment. That's what matters and what we need to help people cultivate over the years.

I think that's extremely well said, and it ties up so much of what we've been talking about in this conversation around AI literacy and the importance of using our own judgment and understanding what really matters here. So I really appreciate that note. Walter, I wanted to say a big thanks for joining today. This has been a really insightful conversation, and I appreciate all of your insights.

Thank you so much, Jeff.
Good to be with you. Talk to you soon.