AI Is Killing Free Will: Ex-Twitter Ethical AI Lead Explains How to Protect Yourself
75 min
Apr 13, 2026

Summary
Dr. Ramon Choudhury, former AI Ethics lead at Twitter and Accenture, discusses how AI consolidates power and erodes human agency. She argues that public distrust in AI stems not from misunderstanding the technology, but from justified concerns about corporate control, data exploitation, and the deliberate anthropomorphization of AI systems to obscure accountability.
Insights
- Public rejection of AI is rational and informed—surveys show 74% of Americans rank AI lower than ICE in favorability, driven by distrust in institutions rather than the technology itself
- Anthropomorphization of AI (making it 'feel' and 'think') is a deliberate design choice that obscures corporate responsibility and makes consumers fear the technology rather than question the power structures behind it
- The definition of AGI has been deliberately redefined from sentience to 'automation of all tasks of economic value' to hide profit motives behind a narrative of human progress
- Consumer data collected innocuously (Pokemon Go, 23andMe) is being weaponized years later with technologies that didn't exist at collection time, creating a surveillance infrastructure
- Discernment and agency are the two critical skills consumers need—the ability to understand what's good/bad output and to exercise choice over which technologies to adopt
Trends
- Younger generations actively building friction into their digital lives by moving from algorithmic feeds to curated private groups (Signal, Telegram, WhatsApp)
- Citizen-led skepticism of AI and tech CEOs spreading via memes and social media faster than expert critique, democratizing AI literacy
- Agentic AI tools failing to meet adoption expectations despite hype—every CEO claiming workforce replacement has quietly rolled back those claims
- Legal and policy infrastructure lagging dangerously behind technology; weak regulations (like NY hiring algorithm law) create false compliance and dump responsibility on consumers
- Horizontal consolidation by tech leaders (Worldcoin, data center investments, rare minerals) designed to own both the problem and the solution
- Shift from 'move fast and break things' to 'wait out public concern'—but AI's speed and real-world impact (warfare, genocide, surveillance) make this strategy less viable
- Emergence of independent evaluator networks and privacy-focused tools as alternative infrastructure to corporate-controlled AI ecosystems
- Expertise becoming more valuable, not less—junior roles being automated away while senior discernment becomes critical bottleneck
- Speculation-driven hype (Polymarket, prediction markets) becoming a symptom of unsustainable AI valuation cycles
Topics
- AI Ethics and Responsible AI
- Data Privacy and Surveillance Capitalism
- Algorithmic Bias and Fairness
- AI Anthropomorphization and Narrative Control
- Consumer Agency and Digital Rights
- AI Regulation and Policy Gaps
- Future of Work and Job Automation
- Agentic AI and Autonomous Systems
- Tech CEO Accountability
- Biometric Data and Worldcoin
- Social Media Algorithms and Content Moderation
- AI Auditing and Evaluation
- Generative AI Hype Cycles
- Tech Abolitionism vs. Pragmatic Engagement
- Collective Movements vs. Individual Heroes
Companies
Twitter
Choudhury led the ML Ethics, Transparency & Accountability team; praised for owning mistakes and responsive leadership
Accenture
Choudhury was a leader of AI Ethics; mentioned as her previous employer before Twitter
OpenAI
Criticized for defining AGI as 'automation of all tasks of economic value' to hide profit motives; Sam Altman funding Worldcoin
Anthropic
Called out for anthropomorphizing AI language and creating teams to study AI 'welfare,' described as performative theater
Google
Mentioned for Maps data collection strategy of waiting out public concern; historical examples of AI ethics issues
Meta
Zuckerberg coined 'privacy paradox' to justify data collection despite public desire for privacy
Niantic
Pokemon Go developer; sold location data used for surveillance mapping, weaponizing innocent gameplay data
23andMe
Genetic data company that went bankrupt and was bought by private equity; created Spotify playlists based on DNA
Worldcoin
Sam Altman's biometric identification project designed to own the privacy/verification layer of AI-driven future
Klarna
CEO claimed AI agents would replace workforce; quietly rolled back expectations after initial hype
Salesforce
CEO made similar workforce replacement claims about AI agents that were not met in practice
Perplexity
AI search tool Choudhury uses for agentic capabilities; built on REST APIs, not proprietary MCP servers
Reddit
CEO proactively hiring young people despite AI automation, framing it as solving a real business problem
Microsoft
Mentioned as having responsible AI teams, though less prominent than Google's efforts
TikTok
Discussed as platform where AI skepticism spreads via memes; algorithmic manipulation concerns raised
X (formerly Twitter)
Choudhury refuses to use platform after Musk acquisition; represents loss of independent discourse space
People
Dr. Ramon Choudhury
Expert on AI ethics, fired for advocating accountability; founded Human Intelligence nonprofit for independent AI evaluation
Geoff Nielson
Podcast host conducting interview with Choudhury on AI ethics and corporate accountability
Parag Agrawal
Responsive to algorithmic bias criticism on image cropping; example of accountable tech leadership
Dantley Davis
Engaged openly with algorithmic bias concerns and committed to fixing image cropping algorithm
Elon Musk
Example of CEO using obfuscation and belittlement; lost lawsuits against Twitter and Tesla employees
Sam Altman
Funding Worldcoin to own biometric verification layer; redefining AGI for profit motives
Mark Zuckerberg
Coined 'privacy paradox' to justify data collection despite public privacy concerns
Shoshana Zuboff
Wrote 'The Age of Surveillance Capitalism'; Choudhury recommends her work for understanding Silicon Valley strategy
Karen Hao
Author of 'Empire of AI'; example of awareness-raising that empowers rather than alienates readers
Rebecca Solnit
Wrote 'When the Hero is the Problem'; argues collective movements, not individual heroes, drive change
Eric Van Nielsen
Published empirical evidence on how AI integrates into work and the continued need for expertise
Quotes
"I play this innocent game in like 2016. That data lives forever and ever. And over a decade later, it's being used with a technology that didn't exist at the time for an incomprehensible evil."
Dr. Ramon Choudhury•Early in episode
"A consolidation of power, lack of agency, which technically are two things that are really one thing, right? So fewer and fewer people hold more and more power and we have less and less say about what's getting built and how it's being built."
Dr. Ramon Choudhury•Opening discussion
"I am not willing to give up my personal liberty and the rights of the people around me so that I can have a calendar agent."
Dr. Ramon Choudhury•Mid-episode
"The intent of using language that is humanizing of AI, like AI models are built as a design decision for it to speak to you and say things like I feel, I think, I'm sorry, I understand. One does not do any of those things, right? That is a specific design decision to anthropomorphize the technology."
Dr. Ramon Choudhury•On AI anthropomorphization
"It's literally like how do you want to use this versus how am I expected to use this? That's agency, right?"
Dr. Ramon Choudhury•Late episode on consumer choice
Full Transcript
I play this innocent game in like 2016. That data lives forever and ever, and over a decade later it's being used with a technology that didn't exist at the time for an incomprehensible evil. Hey everyone. I'm super excited to be talking to Dr. Ramon Choudhury. She's a former leader of AI Ethics at Accenture and Twitter, and recognized by publications like Time and Forbes as an absolute leading voice in how we use AI. Look, we all have our concerns with Big Tech, but she has actually been in charge of trying to make AI companies more accountable and been fired for it, which I think is a badge of honor. I really want to know what her biggest concerns around AI and Big Tech are right now, what stories about AI we need to reject, and if there's a responsible way to use this technology at all. It should be an amazing conversation. Let's jump in. Thanks so much for joining today. Really, really excited to have you. And maybe just to kick things off, I wanted to ask a broad question just around, you know, what concerns you most around the state of AI right now? A consolidation of power, lack of agency, which technically are two things that are really one thing, right? So fewer and fewer people hold more and more power and we have less and less say about what's getting built and how it's being built and what it's being used for. So when you say lack of agency, you mean as kind of consumers or users of AI we are, you know, lacking this ability to direct it? Exactly. And, you know, to be very explicit with it, it is overwhelmingly clear that people do not want AI in many of their consumer goods and products. They do not trust it. They understand what the technology is being used for in other use cases. They understand how their data is being used in ways that they have not approved of. So it's not really a disagreement with the fundamental technology. It's a disagreement with the power structures, right?
So I think recently there's a poll everyone's talking about where I think it was like, what, 74, like some really high percentage of people, you know, ranked the use of AI very, very low, aligned with people's sentiments on ICE. So that's been the running joke in tech, that like, wow, we actually hate AI more than we dislike ICE, just as a population in America, or right around on par. And, you know, there's no love lost between the average American and ICE. And this is just one in a series of many surveys that have been going on for years and years. And just to point to another one, there's a Pew survey that's been ongoing, and every year for the past few years, Americans' trust in AI systems has declined. And more and more people say that it will bring more harm than good, which is the very explicit thing they are responding to: more and more people believe every year that the technology will do more harm than good. So yeah, when I say agency, it is very clear that people don't want it. And yet all we are seeing are new AI launches. Well, and that feels like, you know, especially that Pew survey, it feels to me like more of an indictment of the power structure and of big tech than of the technology itself, right? It's saying we don't trust the institutions that are behind this. Yeah, that is exactly correct. And again, people are very clear as to why, and that is exactly why. Sometimes CEOs, and Silicon Valley certainly, interpret it as, oh, the average person doesn't understand what AI is capable of. I think they do understand what AI is capable of. And they're willing to say, yeah, this is like a cool toy and maybe it can do some impressive things. I am not willing to give up my personal liberty and the rights of the people around me so that I can have a calendar agent, you know.
So with that in mind and the big tech perspective, I mean, one of the things that's interesting here is there's just so much noise. There's so many voices. There's so many conflicting narratives about, you know, what AI can do, what it can't do, what the future looks like, how it's going to impact, you know, people's lives and their livelihoods, and very different incentives from the actors behind some of these voices, whether they're trying to get you to adopt the tool or trying to sell their own services. And you and I both exist, you know, within this ecosystem to some degree. But I'm curious if there are any particular narratives that you're hearing pushed by the creators of AI that you think are dangerous and that you specifically want to call out that we need to reject? Yeah, the big one really is, well, there's two. One is just the general anthropomorphism of the technology. And frankly, I see that coming more from the quote unquote good guys, i.e. Anthropic, than I hear it coming from OpenAI. And, you know, I coined a phrase years ago, in the days of narrow AI, called moral outsourcing. And we're seeing moral outsourcing at play, right? The intent of using language that is humanizing of AI, like AI models are built, as a design decision, for it to speak to you and say things like I feel, I think, I'm sorry, I understand. One does not do any of those things, right? That is a specific design decision to anthropomorphize the technology. Number one, it alienates us, right, from seeing this tool as something that somebody has built, and it makes us fearful of it, because we think that it is this big scary superintelligent thing. But then also, importantly for these companies, when, you know, an AI system goes wrong, they can conveniently have all the headlines say AI model erases company database, versus saying this product failed, which is how it would be stated in just about any other use case.
If you had, you know, a server and it caught fire, you wouldn't say server deletes data by spontaneously combusting. Like it sounds so dumb, but that's how we talk about AI agents that take action, that have done things. Again, we blame this technology that is simply executing a command, and maybe the command was poorly specified or we haven't been able to, like, ring-fence bad decisions. And again, like, Anthropic is particularly guilty of saying things like intent manipulation, you know, they have set up an entire team to look at the welfare of the AI itself, which is mind-boggling to me. All of this is theater. It's theater. Yeah. Well, and it's interesting that Anthropic is doing it, you know, of all people. And as you said, I mean, I think, at least from where we're sitting right now, the good guy moniker probably does apply more to them than everybody else. But why is that? Sorry, not why the good guy moniker. Why are they doing all these performative actions around AI? And I guess, you know, I had a slightly different perspective, because when I see all this anthropomorphization of AI, to me, the motive is extremely clear. It's just to influence people's behavior and make it more engaging and try to just get people using the technology for longer. So why are good guys falling into this trap? Good guys. Yeah. I mean, quote unquote good guys, right? And I think it serves them very well, which goes to kind of the second thing that I think is the most dangerous thing being pushed. And it's actually a book project that I'm working on and something I've just gotten really interested in, which is this idea of intelligence, right? So they want us to believe that this thing exhibits signs of sentience and will, according to their words, also be smarter and better than us at all of the things, and will take over everything that we are doing.
And really what is hidden under that narrative is, you know, the slippery slope definition of AGI. If you ask the average person on the street what they think artificial general intelligence is, they'll probably point to a movie like Terminator or Her and be like, oh my God, artificial general intelligence is this AI system that's able to interact just like a person. But for those of us in the field, and I think you've seen this too, there's been a slippery slope of what that's been defined as. And now it is only defined in economic terms. Like OpenAI was calling it the automation of all tasks of economic value, right? And why? Why redefine it? Because they want to make money. So it hides this profit, you know, this like profiteering perspective. And it makes them seem like they are pursuing this noble mission for humanity and humanity's growth rather than saying, oh no, we're just trying to automate the work people are doing so we can further consolidate wealth and power amongst ourselves. If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy, covered. Disaster recovery, covered. Vendor negotiation, covered. InfoTech supports you with the best practice research and a team of analysts standing by ready to help you tackle your toughest challenges. Check it out at the link below and don't forget to like and subscribe. Maybe I'm just closer to it. But to me, it's extremely obvious that the motive is exclusively economic, right? It's just how can you own this platform under what everybody is doing and say, hey, you know, why would you hire employees when you could hire this employee replacement that we own, and it's the same? Right. And there's no, to them, it's like there's no messiness, right? There's no human messiness involved. Yeah, like Silicon Valley still can be such a weird place. But like tech people hate humanity. They hate dealing with people.
And like some of it's, you know, kind of funny. I used to joke that, like, especially in the 2000s and the 2010s, a lot of the startups that were built were kind of built to, like, avoid dealing with messy human things. Like, I don't want to cook food, I'm going to get delivery. I don't want to drive myself somewhere. I don't want to do my laundry. So like that kind of startup of, like, automating the mundane human things. It's even in the language, right? Currently biohacking is sort of a big thing. I mean, it's a big thing kind of everywhere, but transhumanism, it all originates out of Silicon Valley. And like, what is it specifically saying? Right. It's specifically saying that, like, your human functions of, like, sleep as an example, aging, or just being tired, these are inefficiencies. Like, it's bad to be a human. And like, can you get rid of that? So a lot of this, and you're 100% correct, is trying to create this AI workforce. It's like, God, I don't want to deal with, like, pregnant women, and someone's got to pick up their kids, and, like, somebody who has a cold. Like, it's annoying. Right. People's feelings are annoying. You know, rather than seeing a lot of human messiness as a way of bringing value, which, by the way, it is, right? Institutional knowledge is a thing. There's a reason why you can't just, like, fire someone who's been in a company for 20 years or replace them with a kid fresh out of a PhD program and assume it's going to be the same. And we're seeing the same for agents as well. So I want to push on all of that a little bit. And by the way, I agree with everything you just said. The piece I want to push on is there's a lot of emphasis there on the supplier side or the provider side of all these tools versus the consumer side. And one of the things that's, you know, troubling, troubling is the word I'll choose, is that there does seem to be a lot of widespread adoption, you know, both of AI.
But, you know, when you talk about all these, like, friction-reducing apps that take away, you know, the humanity and things, they've seen widespread adoption, right? Like, that demand is there. And so I'm curious what, if any, onus you put on, you know, the consumer of these, and, you know, I don't know, like, what does that ecosystem look like for you in a better world? Yeah. Yeah. I love that you're asking that question. Because I think a failure state is saying, like, the company has to do all of the things, and because they push back and say, well, the market doesn't say that, right? Like Zuckerberg used to call it the privacy paradox. I think it was coined by him, or by Meta at least, where, like, you know, they're like, okay, well, all of you advocates say people want privacy, but I overwhelmingly see that when I try to get people to use privacy settings, they don't want them. They just want to be like, yep, permissions, moving on, right? So there's two things. One is, I actually think younger generations, like, are building in friction because they kind of want it. I think we are starting to kind of come full circle, where I can just anecdotally tell you about a lot of, like, TikTok influencers, etc. I've seen their methods shifting over time from volume and easy-to-access to be more curated. So think of it as the difference between, like, how a lot of people are moving off of major social media platforms and moving into lots of, like, Signal groups or Telegram groups or WhatsApp groups. And that is more friction, right? I can sit there and passively just scroll and look at stuff that's, you know, feeding me junk food, or I can be in, like, 30 different Signal and WhatsApp groups, which is way messier, but more rich. And I think we are kind of seeing people are understanding the value of that. But to your point, like, traditionally people just want things to make their lives easier, in many cases. And I do think there's an onus on the consumer.
I think many of us have been saying for years and years and years, you know, like, protect your privacy. You never know what your data is going to be used for. And the pushback was always, I have nothing to hide. I think, you know, we're in the finding-out phase of all of that, where it's like, oh, you thought you had nothing to hide when you were playing Pokemon Go. But guess what? When you played Pokemon Go, that is now being used to generate the surveillance state. And it is a clear line. It's not an abstraction. So I think, you know, people who have been hearing this story for years, for whom, again, saying you should protect your data was an abstraction, are now seeing how their data is being used, because the lines are being drawn. I'm curious to see, you know, what sort of consumer protections pop up. Which, by the way, I'll also add one more thing. That's also why they're all trying to consolidate their power, right? They are trying to own the horizontal. You know, like Sam Altman funding, you know, Worldcoin was not born out of some, like, you know, good-for-humanity motive. It's because he realized that the technology that he has helped usher in will completely erode trust, because the ability to create realistic deepfakes will get to the point where you will need biometric identification. So that's the idea of Worldcoin and making people scan their biometrics, with him collecting a database, so that when he needs to own the privacy part of things, he gets to own that too. It's a horizontal that he's trying to build. Right. He gets to profit off solving the problem that he himself... Correct. That he created, right? Like, coming and going. Also the investment in data centers, investment in minerals.
One of the theories, I don't know how valid this theory is, about, like, you know, trying to get Greenland was that there were specific rare minerals to be mined there that a bunch of the tech CEOs had invested in, the startup that was specifically going to be based there. So again, like, I don't want to go into, like, red strings, but it's really hard not to in this space, because many of our red strings have proven to be true. Yeah. Well, and, you know, I've said in a lot of these conversations, we're not a political podcast, we're a technology podcast, but the line seems to get blurrier and blurrier these days. You can't. Right. Like, you truly cannot, you know. Like, one of our most infamous CEOs, you know, Elon Musk, held a pretty prominent position in the current administration. We know, like, they don't divorce technology and politics. Why should we? Yeah. I think that's well said. So I want to come back to the consumer side and people maybe not appreciating that, as you said, it's not abstractions. A lot of the decisions that we as consumers are making are creating the society that we have and that we're going to have. And so if I can frame it up this way for you: if you were going to have a message to consumers saying, basically, you know, wake up, this is what you need to know, what would be kind of your, you know, pithy message around what you want people to be focusing on right now? Gosh, that's a good question. You know, maybe it's just something very basic about just app hygiene and data privacy. Like, just go into your phone, go into all of your settings, and just maximize all of your protections. You know, if you can afford it, get a VPN service. You know, like, in our house, we have something called a Pi-hole, which is, like, a Raspberry Pi tool that blocks all the ads coming in. Like, if you have the capability, just do it.
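For readers curious what the Pi-hole mentioned here actually does: it acts as a network-wide DNS sinkhole, answering DNS queries for known ad and tracker domains with an unroutable address so those requests never leave the house. Here is a minimal Python sketch of just that idea; the blocklist entries are made up for illustration, and a real Pi-hole deployment loads community-maintained lists and runs as the DNS server for the whole network.

```python
# Minimal sketch of the DNS-sinkhole idea behind a tool like Pi-hole.
# The blocklist entries below are hypothetical examples.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain: str) -> str:
    """Sinkhole blocked domains; pretend-forward everything else."""
    parts = domain.lower().rstrip(".").split(".")
    # A domain is blocked if it, or any parent domain, is on the list.
    for i in range(len(parts)):
        if ".".join(parts[i:]) in BLOCKLIST:
            return "0.0.0.0"  # unroutable: the ad request goes nowhere
    return "upstream"  # stand-in for forwarding to a real resolver

print(resolve("ads.example.com"))          # sinkholed directly
print(resolve("sub.tracker.example.net"))  # sinkholed via parent domain
print(resolve("news.example.org"))         # allowed through
```

The point of doing this at the DNS layer, rather than per-app, is that every device on the network gets the filtering for free, with no software installed on the phones or laptops themselves.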
Maybe, I guess, like, my pithy way of saying it is: explore the world of products, tools, and services that are now exploding around protecting your privacy, security, and data, because there actually are a surprising number of things that have come up, especially in the last few years. Some of it requires a bit of tech skill. A lot of it does not. You just have to look for it. It's not going to be number one in the app store, because these are not massive, big-funded companies. It's literally, like, two friends who felt passionate about data protection, literally, you know, or it comes out of the Raspberry Pi community, which is largely open source and, you know, built on subreddits, you know. So I guess I'd say that, like, explore the world of things that have popped up to help you protect your privacy. It is on you, but you don't have to do it alone. So just along those lines, I want to go back to kind of painting the picture of the importance of this. And, you know, you made an offhand comment about, you know, building the surveillance state, but I want to, you know, just kind of, as I said, paint the picture of why this is so important. Why is it so important to protect your privacy and your data? What are some of the, you know, malicious things that these organizations can be doing with it? And how do you see it kind of playing out for people in a negative way? Yeah. And again, I think people have already seen so many examples play out. I was reminded of one of the very first talks I ever gave back in, like, 2017 about, like, AI and ethics. And it was at SurveyMonkey. And this was in Atlanta. And I don't know how we got here in the Q&A, but somebody mentioned something like 23andMe. And I'm like, oh God, never use those. Like, you know, and this is, again, 2017. Fast forward some years later, they're bankrupt and they're bought by private equity.
And who knows how private equity is now going to use people's genetic data, right? And even during their existence, they did, you know, questionable things, like make a Spotify playlist based on your DNA. Truly, they did. I'm like, what? And so they were trying to market it, right? So we have seen this. And the latest is, you know, people pointing out, there's this, like, TikTok trend. There are two TikTok trends, like, sorry to be, like, so chronically online, but it's just interesting. I will take a step back and say, one of the most interesting things to me is that, you know, a lot of this narrative is being pushed by regular people to regular people. What I love is that it is not coming from me, the quote unquote expert, lecturing people from, you know, my space of, like, I do this all day every day. What I love is seeing just regular people saying, ah, you know, this seems suspicious. I'm so glad to see that, because they're stating it in a way where it's like, hey, I'm a normal guy, and I'm telling you, a normal person, to do this, right? There are so many memes that are like, you know, you 10 years ago, or something like that. And people have pointed out that that can be used to train databases, like, whether it is or it isn't. But, you know, the one I alluded to earlier, very specifically: many of us, myself included, in literally a time so long ago in our heads that it seems like a different millennium, played Pokemon Go. And it was actually a very beautiful thing. I loved Pokemon Go. What I loved was seeing how many families were out, and kids were, you know, engaging with people in this very friendly way. And, you know, even I would have said, oh, it's just a game. Like, I used it. Well, hey, cool. Now we find that Niantic has sold all of that data, and that data is being used to literally, you know, map out surveillance in the United States. So you played an innocent game. I play this innocent game in like 2016.
That data lives forever and ever. And over a decade later, it's being used with a technology that didn't exist at the time for an incomprehensible evil. And those are not, like, exaggerated words. It's such a sad development, because I have the same kind of, you know, fond memories as you do. Like, I was out there years ago catching Pokemon. And in some ways, it was like a golden age. Like, it just felt so carefree. It was amazing. Yeah. And to then... It truly, yes. Yeah. Yeah. I was just going to say, like, the fact that it's been weaponized is really, really depressing. But I want to come back to these TikTok trends you're talking about. And specifically, I want to ask you about TikTok, because TikTok is not a neutral player in this, right? It is not a neutral player. They have an algorithm. They push content. There is, you know, not even to get into the geopolitics of TikTok. But I'm curious about your posture around a tool like TikTok. Whether it's TikTok, whether it's, you know, some of its competitors, do you recommend people use it? Should they not use it? Can they use it ethically or in an informed way? What should our relationship be with some of these tools? Yeah. And I think what you're pointing at is, like, the exact manifestation of just lacking agency. So I struggle in general with my relationship with social media, like, in general. So, you know, back in the beautiful days of Twitter, I was just hyper online. The entire field of responsible AI was built by people snarking on Twitter. That's how we all met each other. And not just snarking. That's how we read each other's papers. That's how we interacted, right? Like, we were all... There were not a lot of us that did this work. You know, when I first got started in 2017, there still weren't a lot of us. Everyone's everywhere. It was beautiful. And I moved from being hyper online, or hyper on Twitter, to just not having a social media presence at all.
I don't even really post on LinkedIn. But the reality is, so much of the public discourse does happen on social media platforms. You know, at the same time, I do see how, again, like, now I've been, I would say, pretty much offline, other than, like, very short stints on LinkedIn or, like, playing around with TikTok, since, you know, my team and I all got fired. Like, for me, it was a principled stance that I'm not going to be on X. Why would I be there and, you know, have my data and my attention support this platform? You know, so, like, I am of two minds. Like, it's hard for me to answer your question, because, like, you're hearing my struggle in real time, right? Where I'm like, okay, there's this need as a professional, especially a public professional, to be in these environments, be on these platforms. On the other hand, like, we also know, just to bring in another narrative, that a lot of the news media is now captured by billionaires, and we actually consider a lot of media to be untrustworthy. And social media has traditionally proven to be a place where you can try to find more objective sources, right? So it's, like, all of these moving parts together. One is, like, lack of trust in centralized media institutions. Not that social media isn't that, but it's another version of that, right? So there's that. There is, like, what we know to be this necessity of being online to just be aware of what's happening in the world, or maybe even engage with your community of practice. And then there's this, like, evil that we know exists, right? And how do you reconcile the three? I wish I had an easy answer. I wish I could just say it is unethical to use these platforms. Well, I can say that. But then I cannot in good faith say, therefore, don't use any of them ever. Because I do think it's fair to say that, you know, if you're a professional, you know, for example, my community now exists on LinkedIn, and I don't really go on LinkedIn very much.
Is that detrimental to me? Maybe. I don't know. Because I can't measure the opportunity cost, right? I have no idea what the opportunity cost is. If I was chronically posting, you know, would I be interacting with more people? Would I have more knowledge? Would more consulting or speaking opportunities come my way? Maybe. Probably, right? I don't know. But yeah, it's kind of a convoluted answer to your question. Because like I said, you're hearing in real time what goes on in my head constantly whenever someone sends me a LinkedIn post or whatever. Well, what I did hear, and it made me reflect, is just how much of a bummer it is that a lot of these platforms used to be a place for independent discourse, not owned by, you know, megacorps, and they've been absorbed. And I'd like to think a few years from now we'll have some next wave somewhere else where we can have these discussions that's independent again, because it just feels like we're in between and it's been usurped from us, right? I hope so. And actually, I think there is a way that, you know, one can use AI tools in that way. So, like, I've been playing with Perplexity's Comet lately. And I would say the joy that it brings to me is very similar to when I was in high school, and the internet was kind of this thing we were all learning. If you remember who you were at the time the internet became a thing, and if you were interested in it, you were like, wow, I can learn anything. I can meet anyone. As suspicious as all that is, right? But there was a genuine purity to it, like you said, and also a joy to knowing that information access is at your fingertips. And, you know, I have felt like that playing with some of the agentic AI apps. I spent a couple of days just setting up dumb things, right?
Like, send me a daily email with all of the ebooks on sale for Kindle in the genres that I like. And that has brought me so much joy. I probably spend more on ebooks now because that thing is actually particularly good. But there is this way of using these tools where it's not about them telling you what to think or feel or what you need, but how we are then building, which is, again, kind of the mindset of the early internet. I think the difference here is the internet was free for us. Now we have to pay for tokens, and that's the part where it's like, oh, but you have actually robbed us of a public good. That's how it's different. But the mindset and the feeling I have is similar. But again, another very difficult thing for me to reconcile. No, no. And, you know, I appreciate that it's not clean, and it's not as simple as just don't use these tools, given your perspective and your experience. And I'm just kind of reflecting on what you were saying about that. And to me, one of the other differences in my mind, and I'm curious on your thoughts, is that the internet was, how do I want to frame this? The internet was more neutral in the sense that you could go on it and you really had full agency, or close to it, over what you were looking for. And these platforms and these algorithms in some way take that agency away from you, because they are pushing you towards specific experiences. They want you to consume in a certain pattern. They want you to consume certain stuff. And so it feels like you have to be a lot more intentional about how you use these tools if you're going to maintain that agency. Yeah. And to some extent, it's exhausting, because you, to your point, constantly have to pay attention to, am I being manipulated in some way? Right? Is this information real?
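The daily Kindle-deals email described above is a good example of a small, self-chosen automation. A minimal sketch of the idea in Python (the deals data, genre tags, and address here are all hypothetical placeholders; a real version would pull from a store feed and send the message via SMTP on a daily schedule):

```python
from email.message import EmailMessage

# Hypothetical deals feed: in practice this would come from a store API or RSS feed.
DEALS = [
    {"title": "The Left Hand of Darkness", "genre": "sci-fi", "price": 1.99},
    {"title": "Gone Girl", "genre": "thriller", "price": 2.99},
    {"title": "Salt Fat Acid Heat", "genre": "cooking", "price": 3.99},
]

def pick_deals(deals, genres):
    """Keep only the deals in the genres the reader cares about."""
    return [d for d in deals if d["genre"] in genres]

def build_digest(deals, to_addr):
    """Assemble the daily digest as an email message (sending is left out)."""
    msg = EmailMessage()
    msg["Subject"] = "Today's Kindle deals"
    msg["To"] = to_addr
    lines = [f"- {d['title']} ({d['genre']}): ${d['price']:.2f}" for d in deals]
    msg.set_content("\n".join(lines) or "No deals in your genres today.")
    return msg

picks = pick_deals(DEALS, {"sci-fi", "thriller"})
digest = build_digest(picks, "reader@example.com")
# smtplib.SMTP(...).send_message(digest) would send it; a cron job runs this daily.
```

The point of the sketch is the shape of the tool, not the specifics: you decide the filter, the schedule, and the output, rather than an algorithm deciding what to push at you.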
And I think an analog way of thinking about it, or a semi-analog way, is how we don't just look at the New York Times and Washington Post and say, oh, that's the news. We're like, oh, that's the thing that Bezos owns, so of course they feel like this about this; let me go online and find if three other sources are talking about something. We now have a lot more work we have to do. We cannot approach it as innocently. And to your point, you know, one of the things that came up in discussion amongst Twitter leadership, and I can't really name names, was the sentiment that it was unfair that social media companies were under all of this scrutiny to do content moderation, because we never content moderated the internet. Just to give an example: Nazis are allowed to have websites, but Nazis are supposed to be banned on social media. Not to say that anybody wants Nazis, but the rules seem to have been applied differently. And again, I think it's because the internet was born as this tool of free access to information that nobody owned and nobody paid for. Or at least, people do pay for it, but not in the way of directly putting dollars into specifically accessing information. I wanted to go back to this experience you had at Twitter. And for listeners who don't know, you were a leader on the Machine Learning Ethics, Transparency, and Accountability team at Twitter. And I'll, you know, ask you this in a deliberately broad way. Can you tell me a little bit about your reflections on that time, what you were trying to achieve, and what it taught you? Yeah. So it's worth also thinking through, structurally, where I was in the company. There have been many teams that do this kind of work, you know, sort of infamously or famously at places like Google; there's still some at Microsoft, etc.
My team at Twitter, at least for its time, was very unique. I was an engineering director, and I sat on this team called Cortex, which, if you know the structure of Twitter, is where all of the machine learning and AI services were offered. So there are teams that own products; a product could be, like, who to follow, right? And the core tools that they used to build who-to-follow came from the team that I was on. Why is that important? That's actually, in some ways, why I took the job: because I was in the room with people building it. So I didn't have to ask permission. My colleagues and my peers were not, you know, policy. Not that there's something wrong with policy, etc., but policy is not in product. And to me, if you're in product, if you're in that room, then you have access and privileges that people on other kinds of teams have to fight for, right? If your responsible AI team is a pure research team, you don't own product and you don't influence product. Or if you want to, you have to fight for it to happen. Whereas when there was an engineering meeting, I'm just in that room, because these are my peers. That's really important to shaping how the tool is being built from design. So I loved it. The other thing I'll say about Twitter is somehow it kept that weird, very millennial, 2000s startup-y vibe. It's so corny, but in the best way possible; it was a very corny company. But I loved it. And one thing I will say about a lot of Twitter employees, most Twitter employees: we knew we had a really difficult task, and we knew we weren't going to get it right. And one of the things I loved about Twitter is they owned their mistakes, right? I think that's kind of what Twitter was famous for: when Twitter would go down, they're like, sorry, we done messed up. And I loved that.
And around when I was interviewing was when there was, you know, citizen data science around potential algorithmic bias in the image cropping algorithm. I don't know if you remember this, but basically people realized that it seemed like the Twitter image cropping algorithm was cropping out darker-skinned people. And it had already by then been demonstrably shown that these models underperform for people who are higher on the Fitzpatrick scale, so darker skin tones. So instead of doing what a lot of other companies do, which is to sort of hide behind PR narratives, you had Dantley and Parag. Parag was then the CTO, and Dantley was head of product, I believe, hopping in and being like, hey guys, what are you seeing? Can you tell us about it? I want to learn, you know. And they were very open, they were very responsive to criticism. And they promised to do something about it. And then they did, which is, again, so rare in general, and so rare these days. And more and more, as there's now this elite, ruling, godlike class of AI CEOs, I can't imagine these people having the kinds of interactions that Parag and Dantley had, like, four years ago. I just don't see them doing that in an honest and open way. I could see them snarking. I could see them belittling people on social media. I can't imagine them coming in and earnestly asking about a product, trying to learn how to fix it. Yeah. Yeah, sorry. The reason I'm pausing and reflecting on that is I feel like you can draw an arrow from that statement right back to the start of this conversation about what's going on in big tech, and also about, you know, the market, sorry, the market in terms of consumers, not the market in terms of investors; that's a whole other story that we can or cannot talk about. But this distrust in the products and in the tech comes from the leadership style.
Like it feels like leaders are deliberately putting up those walls in a way that maybe they didn't 10 years ago. And I don't know, is that going to hurt them in the longer term? Is it going to pay off? It's kind of fascinating. Yeah, I think that sort of dodging and obfuscation does not, in the long term, benefit any CEO. I think maybe in the short term it helps them, because they get to avoid problems. You know, one of the best books, I would argue probably the best book in the space to understand how, like you said, it all comes full circle, is Shoshana Zuboff's The Age of Surveillance Capitalism. It's a very big book, like literally three inches thick, but you could just read the first chapter. And what she does beautifully is outline the strategy and the economic model of Silicon Valley, right? And part of that strategy is playing this waiting game, right? Waiting until we as a public get exhausted with a topic. And the example she gives, something that I personally had forgotten, is that when Google Maps first came out, people were up in arms about it. They were really upset and people were protesting; they were stopping the cars, they were building higher fences. And Google did not say or do anything about it. They just kept their mouth shut. And what they waited for was for the momentum to die down. And I don't think any of us question, you know, the Google recording cars that we see driving around for Maps anymore. And that really made me pause and reflect on actually how good that strategy can be. But I think, again, AI is just so different. It's so much less abstract. Especially with generative AI, it's in our hands, it's in our faces. We're seeing it play out in real time. We're seeing it play out in a genocide. We're seeing it play out in the field of war. We're seeing it play out, you know, with protests, right?
And again, the technological and the political are the same thing now. I don't think that strategy works. I think people are too smart now. I think there was a lot of naivety that we are now past. And I'm glad people are not naive about it anymore. But what they have now removed is our ability to make decisions, right? So one of the many studies that point out how people don't want AI products: I think one consumer study showed that products whose labels said they had AI were purchased 74% less. And so now they just drop the label; they don't drop the AI, right? They take the wrong thing away. So there's a few different avenues I want to take here. But maybe let's start with us as consumers. If we want to be more ethical about it, if we want to, you know, be building a better future here, what do we do? What's our kind of imperative? And how should we be thinking about how we interact with these products? And I mean, you said it yourself, it's not clean. It's not necessarily just, you know, throw out your phone. It's not. And I think those are very trite things to say. And they're often born of an immense amount of privilege. I realize that I have an immense amount of privilege in being able to just not be online. Actually, I was talking to my friend about this just this morning; my friend is a doctor. And she said to me, you know, Ramon, I wish I had the energy to go build a brand. Why does a medical professional need to build a brand? They do now, right? Because she is one of a dying breed of, you know, small business owners; she has her own practice.
And she now has to think not just about taking care of patients, but about building an online brand, because that is what drives patients to your door, which, by the way, I think is a ridiculous state of affairs. So I was reflecting on how I am privileged that I don't have to be chronically online, thinking about building a brand in order to feel like I can get ahead, right? So what can people do? Which is always a great question. Number one comes from not being chronically online, even though I'm somewhat online; I use social media as a way of understanding social movements and how people are thinking about things. So it is an observational tool. One of the things I've realized from not being on Twitter all the time is how much we get caught up in local maxima and minima. And what I mean specifically is that there are attention cycles that are very short and seem incredibly consequential at the time, that you then realize really didn't matter. I know there's a lot of stories that I miss and a lot of main characters online that I miss, but actually it doesn't impact my life. So one is just not being fooled by the moment, the local minima, and seeing the big picture. I will also add, by the way, that a lot of these very aggressive do-it-now-or-else narratives about AI are actually meant to make you not plan long term, not think deeply. They're meant to make you run around scared and not be strategic. So, you know, my advice in general, but also as it relates to technology, is: think strategically about what will serve you, and don't make fear-based decisions. What do I mean by that? My use of Comet, for example, which is an agentic tool built by Perplexity, was very intentional.
And I have been experimenting with it to think about what I want to use it for and not use it for, versus the hype around OpenClaw a few weeks ago, which was to me insane: people were like, I'm going to give it access to my bank accounts, and it's going to bet on Polymarket for me. I'm like, why don't you build something dumb, like, as I said, a daily email with Kindle recommendations, before you go giving your bank account information to it, right? You need to figure out what your relationship with this technology will be. So yeah, that's my big advice: think strategically, think about how it serves you, versus it being based on FOMO or fear, or I'm-going-to-lose-my-job, or whatever other story is being pushed to make us too scared to ask questions. I really like that. And it ties into something that you said earlier, which, you know, I was a little bit surprised by, but made me happy, which is you said that when you use some of these tools, there's joy in it for you, right? You're actually able to use these in a deliberate way and find joy. And so is that basically your advice around this? Is the secret to finding joy in this stuff being strategic, being intentional, and starting with, how do I want to use this, versus, how am I expected to use this, maybe? Yes, I think you framed it absolutely perfectly. And this just goes back to agency, right? So much of what I've done over the past few years, and I would even argue maybe the arc of my career (it was the topic of my TED Talk, for sure), is how do we give people agency? Because with agency, we make choices. When we make choices, we're actually happier with the outcomes. I think part of the dissatisfaction people feel maybe even isn't about how the tool is performing, but the fact that nobody bothered to ask us, nobody bothered to say, do you want this?
You know, I don't have kids, but I imagine parents trying to mediate technology with their children are irritated by the technology, not necessarily because they don't like technology, but because they were given no option. And now they have all this responsibility for something that they didn't choose. And that's a good way to think about it, right? It's not that people don't want responsibility. It's that we want to make a choice to have a responsibility, right? Think of a hobby you have. People who are marathon runners will wake up at absolutely wild times and go run for 13 miles. But if I were to say, hey, you have to wake up for work at 4:30 AM tomorrow, they would be irritated to do the same. Why? Because they'd say, I did not make that choice. So to your point, part of me finding joy in using more hands-on agentic tools is that I get to decide what they're being used for. I get to decide what problem they're solving. And by the way, I'm starting to see that arc with more of the tech tools. I think before, they were trying to tell us what we wanted to use them for. And a great example, by the way, are calendaring apps, or any sort of AI-assistant-type apps, that really don't do very well because they are so prescriptive. They make these broad assumptions about what you want. So I personally have never found one that works for me, because I travel a lot. What happens is I end up with calls at two in the morning, because unless I'm constantly going in and updating my time zones, which is a lot of work for me, it's just not going to work. They have not thought through things like that, or at least not given someone like myself the tools to do that easily. So again, it's this generic, prescribed use case being shoved on us. And that's what people are rejecting.
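The calendaring complaint above is a concrete failure mode: an event pinned to one time zone silently drifts when the attendee travels. A minimal sketch of zone-aware handling using Python's standard zoneinfo module (the cities and times are just illustrative, not from the conversation):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def local_view(meeting_utc: datetime, traveler_zone: str) -> datetime:
    """Show a UTC-stored meeting in the traveler's current time zone."""
    return meeting_utc.astimezone(ZoneInfo(traveler_zone))

# A 17:00 UTC call stored in the calendar...
meeting = datetime(2025, 3, 10, 17, 0, tzinfo=ZoneInfo("UTC"))

# ...reads as a reasonable 1 p.m. from New York (EDT, UTC-4)...
ny = local_view(meeting, "America/New_York")

# ...but lands at 2 a.m. the next day for someone who has flown to Tokyo (UTC+9).
tokyo = local_view(meeting, "Asia/Tokyo")
```

A scheduling assistant that re-renders every event against the user's *current* zone, instead of the zone it was created in, avoids exactly the 2 a.m. call she describes.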
And what does it look like to just let go of some of the power and maybe let us do stuff? Like, let us drive the car for a bit. It's really interesting. I'm absorbing so much here and thinking about it. And I'm trying to zoom out and think about what this cycle looks like, and the fact that we've got the backlash, we've got this big push by big tech. And the phrase that came to mind, and I haven't thought about this before, but it's good branding, is that it feels like there's a war on agency, a war on human agency. Like, no, you don't think about it, you let us think about it for you, which is in direct opposition to us feeling a sense of, you know, joy or pride or accomplishment. Absolutely. Absolutely. So if we agree that that's true, how optimistic are you that this is getting better versus worse? Where is this going? Do you think we're going to hit an inflection point, and it's going to be similar to what you said about groups on Twitter, and people are going to say, no, this is bullshit, I'm taking back my own agency, and kind of force organizations to come with them? Or is it going to be the opposite, and we as people go more toward that, like, WALL-E future of just, you know, turn-off-my-brain options? Yeah, I mean, I think my answer to that would change on an hourly basis, depending on what fresh nonsense I may have seen online or what's going on. Here's what I want to believe. And actually, I do believe it. Again, back to these constant battles I have in my own head: I constantly wonder if me existing in this space, people like myself, is actually a bad thing, right? Because, you know, people like me are not tech abolitionists, even if they use the language of tech abolitionism, right? Because to truly be a tech abolitionist means you're just literally disengaged from all of it.
And often I do wonder, what would I do if I just literally, completely, actually did throw my phone into the ocean? That is a lifestyle change I've seriously considered, right? And I wonder if the existence of people like myself is futile, right? Because we're just giving window dressing when there's a fundamental problem, right? So the reason I still do this every day is that I actually do believe in the human condition, and that human beings want and need things like agency and ownership. And at some point we'll fight for it and we'll make it happen, whether people are fighting for it with their dollars, or fighting for it, you know, in Congress by trying to pass bills. You see more and more young people running for office, and running for office specifically on tech platforms. It is very fascinating to me, I live in Texas, that we can have candidates in Texas who can run on issues like data privacy, and the population, their constituency, understands what it means. I think that's a great thing. So I do believe it. And, you know, the thing is, the wheels of democracy move more slowly than the wheels of autocracy, right? So one example I use constantly, actually, is Elon Musk. And it's unfortunate, but you know, that man has actually failed at many, many things; many of the things he has tried to do, much of the harm he has tried to inflict, he has actually failed at. And the problem is, when he does it, he does it quickly. He fires half of Twitter illegally; he, you know, hops into the US government, hires a bunch of babies, and they wreak havoc, and it happens, like, overnight, very fast, right? But the wheels of democracy do turn, because guess what, he lost the lawsuit against Twitter employees, and he also lost the lawsuit against Tesla employees when he did something very, very similar.
You know, DOGE: a judge just ruled that many of the actions that DOGE took were not actually permissible, because they were never congressionally approved. The problem is that took two years to happen, right? The Twitter lawsuit took three years to happen. And the wheels of democracy move slowly. So maybe it's not visible in this social-media-driven, short-attention-span world, but back to the meta picture and the big-picture strategy: if you're paying attention to the big picture, you'll see that a lot of these things are not particularly successful over time. In the immediate, they seem very successful. So yeah, I don't even just want to believe, I need to believe, right? I literally need to believe, because that is what keeps me going at this job. I need to believe that people are going to do the right thing. Well, you know, your case for optimism is pretty compelling. And I like your point that it requires zooming out sometimes, and it requires getting beyond, you know, the media cycle. And I mean, the media and their complicity in this is a whole other conversation. And, you know, I should acknowledge again that even us, by having this conversation, we are in some way part of that landscape. But it's a compelling case for optimism, that there are just the stories you don't hear. And what that means in terms of safeguards for what we want in our society. Yeah. And again, what I have seen in the almost 10 years I've been at this job is people become smarter and demand more and demand better. You know, just again, anecdotally, this is before I even worked in responsible AI; the field didn't even exist.
I remember a while ago, Google had done this thing where they took these mosquitoes and synthesized something to help prevent, I want to say, West Nile virus. And they just sort of released these mosquitoes after injecting them with this thing. And at the time, this was, like, peak tech optimism; everyone's like, wow, Google, amazing. And I'm like, did this go through FDA approval? You know, just thinking it through. But again, the predominant narrative was just so optimistic: oh my God, Google, they're going to cure West Nile virus by stopping mosquitoes. I don't think that narrative would fly today. Right. I think today people would be like, excuse me, why is Google doing biological experiments on people? You could not ask those questions 10 years ago (sorry, I'm dating myself, more than 10 years ago) that you can ask today. And I love that. I am always happy to see citizen movements. So there's this article by Rebecca Solnit that I absolutely love, and it's probably one of her least-known articles. And I love reading Rebecca Solnit because she straddles that kind of critique and optimism that I think we need. The article is called "When the Hero Is the Problem." And the purpose of the article is to talk about how we really want this individual hero; all of Silicon Valley is built on this, the child genius who dropped out of Harvard and single-handedly builds whatever, right? And what she points out is that the reality is that most progressive movements, most positive-for-humanity movements, were built by collectives. And what she reflects on is how hard it is for her to pitch a story or write a book, because we as a society are so enamored of the hero. But she's like, you know, there is no one person that solves a problem; collectives solve problems.
And I like to think about that when I think about what it looks like to push back against the centralization of power: it is a collective movement. So we're not going to have a hero, we're not going to have a single person. What we'll have is a lot of people just getting really fed up and just stopping. And maybe they'll stop in their own little way, but that will mean something when it's all summed up and put together. So I'm going to ask you a question that might be unfair, so feel free to answer it as you see fit. But in this landscape of AI and consolidation of power and big tech and, you know, collective movements and responses to this stuff, what is the role of the AI ethicist, or of responsible AI? Where do you see yourself and your mission fitting into this broader picture, and how much of it is with consumers or organizations? And how do you roll that boulder uphill, or, you know, something slightly more positive, to make sure that we're contributing for good here? Yeah, I mean, I don't think that's an unfair question. I think it's a great question. I think there's sort of two questions in your question. One is, what is the AI ethicist's role? And the second is, what do I see as my role? I think about the second one constantly. What is the AI ethicist's role? What I like is that people have fallen into different categories, right? A lot of people fall in the space where it's about informing people, right? I do think there is still a role of constantly informing. But informing can be a double-edged sword. Critique without any path forward is actually alienating and disempowering. And I think there are people in the responsible AI community that I wish would learn that.
You know, I'm trying to be very careful with my words here. But yes, we should be raising awareness, and yet people cannot walk away feeling hopeless, right? And there are people who do that amazingly; Karen Hao comes to mind immediately. She as a journalist comes in as somebody who is good at telling a story, at explaining things; that's why Empire of AI is so powerful. But what's great is you don't leave that book feeling disempowered, right? She focuses on positive movement. So that's a good example of awareness raising. Second, there are the builders, and I put myself in the builders category. And builders, by the way, are not just tech people. There are a lot of lawyers who are builders. One thing I love is seeing this legal community popping up. If I could go back in time, maybe the one degree I would think of getting would be a law degree, because I think tech law is one of the most fascinating places to be. There's so much ground to cover with rights and protections, and being informed and capable in that space is one of the most powerful tools you can have. So I love seeing privacy professionals and legal tech people popping up to say, these are your rights, or to advocate for rights. The third group of people within the builders are, you know, auditors and people who are making tools, right? And that is also a very, very powerful space to be in. So now we're on the, what do you, Ramon, see yourself doing? One of the purposes of Human Intelligence, the nonprofit, was to build a community of practice of independent evaluators. And the problem that I want to tackle, that I've been tackling for the past two years, is, again, back to agency.
How do we get people who are lived-experience experts, or experts who are not influenced by tech companies, i.e. literally paid on a tech company's dollar, to do this work? And not just interested in this work, but legally protected, you know, certified. It needs to be a viable profession, and professions don't fall out of the sky; they happen because there are certain things that enable them. So what I'm working on now with the public benefit corporation of Human Intelligence is the infrastructure to do that work. Let's say you're interested in being an evaluator; maybe you even have some consulting work. How do you do this work efficiently? How do you do it well? So I see it as problems to tackle. The other part I'll add is, there's so much symbiosis between all of these people, right? The tech legal people know that they're not technologists, and they'll go to people like me to say, hey, if we're trying to write a law that says your model should be audited, what can we and can we not ask companies to give and do? So we're all calibrating. It's just such a deep space, and I have not even touched on cybersecurity, right, which is also intersecting with this space immensely, and which I would argue is probably one of the most lucrative and future-proof fields to go into. One more thing I'll add, by the way, with all of the hysteria and concern at the moment about the future of work: I see all of these jobs, ranging from cybersecurity all the way to informing people, as AI-proof jobs. AI cannot come do this work, because it fundamentally needs human judgment. It needs collaboration, it needs synthesis, it needs historical understanding; it needs so much that AI literally cannot do. We've sort of backed into it.
There are so many threads I want to pull on there. So thank you, Ramon, for that comprehensive answer. But we kind of backed into the future of work there. You know, cybersecurity is interesting in some ways. I work with a lot of IT professionals, and some aspects of cybersecurity are actually among the first to be automated — automated threat detection, for instance — which is very different from the judgment piece of designing cybersecurity work. But maybe more broadly: can you give me your view on the future of work and what it's going to look like over the next handful of years as some of these AI-powered tools — I was going to say infiltrate more workplaces; maybe that's too judgmental a word — but as we start to rewire organizations with AI? Yeah. So I actually just did this show called Open to Debate on this topic specifically, so it's something I've been thinking a lot about, and it sparked me to think more about even just the idea of human intelligence and what it means to be an intelligent person. I find it fascinating from both a philosophical and a practical perspective. To go into the practical one: so much of this is the hype cycle at play. And the hype cycle is meant to be disempowering, right? Hopping in and saying all jobs will be automated in the next 18 months is such a ridiculous and unfounded thing to say. But then why say it? Because you cannot adjust your life to something that is going to completely decimate it. It's as if somebody pointed out that there's a meteor about to hit planet Earth in the next day and we're all going to die. There's nothing for you to do, right? You cannot plan and execute on a timeframe that short.
But that's not the reality. The reality we are seeing — and again, this is why getting caught up in the local maxima or minima is dangerous — is that if you're caught up in that story, you're missing the real story, which is that we are experiencing a lost generation of young people who cannot get jobs. Entry-level jobs are increasingly harder to get, especially in fields like programming. But that, annoyingly, is a problem you can actually tackle, right? If you have narrowed your scope to people graduating with certain kinds of degrees who are trying to enter the job market, that is something we can maybe build something around. But if we are all going bananas because we think none of us will have a job, people go into self-preservation mode. Why would a senior engineer try to build opportunities for young people entering the job market if you, as a senior engineer, are being told that you're going to be out of a job? So this is a perfect example of how it's disempowering. So: one, I do not think jobs will all disappear. Two, I think there are certain fields that are being automated away right now — it is happening slowly but surely. And three, because this is not happening tomorrow, we can actually plan for it. If we take the long-term picture, we can plan for the next three to five years: how are the kids who are sophomores in college today going to be successful in the job market? That is a different question from asking how a kid who's graduating this year can be successful in the job market. And again, these are problems we can actually tackle, right?
So one thing I love is that the CEO of Reddit has said, actually, we're leaning into hiring young people, because we think we can figure out roles for them. Amazing. The fact that he even pointed it out, and is framing the problem to be solved as young people needing jobs — that's a framing not everybody understands right now. The other thing I'll add is that there's also increasing empirical evidence, which I'm glad to see, on how this work is being integrated and where it's successful, because there are so many papers coming out. Eric Van Nielsen has had quite a few, and there are others that have come out of Harvard and MIT. Generally, the trend we're seeing is that you still need expertise. And if there's a theme other than agency in this whole conversation that I'd like to bring up, it's discernment. Having the ability to discern good from bad output, and appropriate from inappropriate uses of the technology, actually requires expertise. So it's almost counterintuitive: the existence of this technology actually means you need more experts, but it's automating away the junior-level roles that would allow someone to become an expert. Which, by the way, a lot of senior engineering people are pointing out. They're saying: you need to hire junior people even if an AI can do their job, because I'm not going to be here forever, and this AI tool is not as great as you think it is, and you need my level of discernment to understand what is good and bad and what is right and wrong. And I would apply that to social media; I would apply that to our consumption of these tools. Other than agency, my other big word of the day is discernment. I really like that word too.
And I like the framing of why it's useful, and why being an expert in some ways has more value than what's obvious or what's being communicated in some of these headlines. I want to come back to one of the themes here: that by framing these problems better, we can actually tackle them — we can come up with a plan, we can do something about it. I want to push on the word "we." Who do you see as the key actors in the "we"? Is it business leaders? Is it politicians? Is it just us as everyday consumers? Who holds the power here? And do you have any specific advice for the people who you think have the most outsized roles in correcting some of this? Yeah, that's a great question. We use "we" very broadly, but you're absolutely correct to ask who "we" is in different situations. It depends on what we are talking about, right? There is a "we" that is the average consumer, when it comes to picking and choosing whether or not you want to download an app, or buy the new Alexa, or enable Siri on your iPhone. And I'm speaking from all the things that I don't do: I've never bought an Alexa and don't plan on buying one. Siri is turned off on my iPhone — I even did it on my parents' phones. These are capabilities you have as a person. Then there's a "we" that is the technologist, the person in the room building the product or the tool. That is a very different "we," and as that individual, you have a lot more agency and ownership over what's being built, how it's being used, and again, how the problem is being defined. And then the third is: there is a role for policymakers.
Actually, I think a lot of this question framing, frankly, does lie in the hands of policymakers — not just Congress, but also state and local policymakers. It could be, for example, the state lead for the Department of Education deciding on how AI should be used; it could be a school board. There are these political decision makers who are framing the questions. This could even be what I said earlier: we want to write a bill on auditing algorithms — what should that bill be? Just as an example, the current New York law on requirements for hiring algorithms is so weak that a kid with an Excel tool could pass it, right? Back when that law first came out, everybody was excited about the idea of it, but it got so watered down that it just became legal theater. And actually worse: it created more responsibility for an individual to have to push back and say, this algorithm did discriminate against me, even though it technically, quote unquote, passed the rules of this law. When I was asked afterwards to help companies audit these algorithms, I would literally turn it down and say, you don't need me — you need an Excel sheet and a bunch of data from your database, and you can pass this. So framing the question, I think, is one of the most important jobs for policymakers. They need to frame the right question the right way, and then be able to frame the answer, because bad policy — like the example I gave — is in many cases worse than having no policy at all, because bad policy dumps the responsibility on us as consumers. Whereas before, let's say you're in New York and you thought you were being discriminated against by an algorithm, there's a particular kind of onus on you: having a court case, hiring a lawyer.
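Her "Excel sheet" point is concrete: an audit of this kind often reduces to computing per-group selection rates and comparing each group's rate to the best-off group's. A minimal sketch of that arithmetic — the groups, numbers, and function names here are hypothetical illustrations, not the statutory formula:

```python
# Hedged sketch: the impact-ratio arithmetic a hiring-algorithm "bias
# audit" can reduce to. Groups and outcomes below are made up.

def selection_rates(records):
    """Selection rate (selected / screened) per demographic group."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (group, selected?)
records = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(records)   # A: 0.40, B: 0.20
ratios = impact_ratios(rates)      # A: 1.0,  B: 0.5
```

Note how little is demanded: a low ratio for group B is merely a number to report, which is exactly why she calls a law built around this "legal theater."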
Well, now your lawyer has another job, which is to argue that the law itself did not adequately protect you — and that is much, much harder to fight than simply arguing that a discriminatory algorithm is being used by a company. Right. And as I think about that, it thematically seems to tie back to that piece about discernment — being able to frame these things correctly and understand them. So I'm curious: you talked about the importance of framing, but are there any specific guiding principles you can share about how to best do that when we're confronting this technology and its usage? And the lens we haven't gotten to yet, by the way, is business leaders — how they should be adopting it, how they should be thinking about it. What kind of guiding principles do you have for them, or for any of the other key actors in this space? Yeah, for business leaders it's actually always been the same boring advice: does this technology actually solve a problem for you — a business need — versus just being cool? Demonstrably so. It's not, "yes, I would love to get rid of all of my employees because benefits are expensive, so let's use AI agents." Is an agent actually better than a person? By the way, every CEO that was out there — the CEOs of Klarna, of Salesforce — crowing about how many people they were replacing: they have all rolled it back. Not a single one has met the expectations they were yelling about when agents first came out. My engineering lead just sent me an article today that MCP servers are being quietly walked back. It's a little bit in the weeds, but MCP servers are supposed to be the next big thing in agentic infrastructure, and everybody was told: you've got to learn MCP, you've got to understand it, otherwise you're going to be left behind.
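For contrast with the MCP orchestration being discussed, a plain, versioned REST tool call is about this simple. A hedged sketch using Python's standard library — the endpoint, payload, and helper name are hypothetical, and pinning the version in the URL is one conventional way to avoid the silent-upgrade problem she raises:

```python
# Hedged sketch: calling one "tool" as an ordinary versioned REST
# endpoint instead of routing it through an MCP orchestration layer.
# The endpoint and payload shape are made up for illustration.

import json
from urllib.request import Request

def build_tool_call(base_url, tool, version, payload):
    """Build an HTTP request for one tool, with its API version pinned
    in the URL so an upgrade cannot silently change behavior; a failure
    is visible at exactly one call site."""
    url = f"{base_url}/{tool}/v{version}"
    body = json.dumps(payload).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"})

req = build_tool_call("https://tools.example.com", "summarize", 2,
                      {"text": "quarterly report"})
# req.full_url is "https://tools.example.com/summarize/v2"
```

The design point is observability: each dependency and version is explicit in the request, whereas a server orchestrating many tools hides which link in the chain changed.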
Funny how that went — it's actually just built on REST APIs. It's very simply built; you did not need this. So, just to point out a couple of things at the business leader level: "agentic AI is going to take care of my workforce" — it is not demonstrably proven that it can do that. So before you're jumping in: is it solving an actual problem? Is it capable of solving the problem? Does it solve that problem better than something analog? The MCP example is one where it didn't solve a problem better than something analog, and it probably introduced a whole raft of new problems. Because as I was talking to people about it, they were asking: how do you handle versioning with an MCP server? If an MCP server is meant to orchestrate a whole bunch of tools, what happens when one tool in that chain changes something, and then everything becomes a cascading failure? It's less observable than very traditional REST APIs talking to your product and building a thing. Anyway, it is a very boring answer, but it is the thing people actually don't do. No, I appreciate that. And the fact that people aren't doing it makes it that much more important, right? Yeah. Yeah, I think that, again, for the general public — people now increasingly understand how much of this stuff is smoke and mirrors. But there are so many words thrown around that are actually Silicon Valley business jargon, which to the average person sounds like something real and tangible but is just made-up math. Valuation is a great example. "Oh my God, that company is worth a billion dollars."
The average person thinks of this in a money-in, money-out sense: wow, if my local bakery were worth a billion dollars, that means it's selling a billion dollars of cookies. That is not what it means in Silicon Valley. It's like the price of art: this painting is "worth" a million dollars only because someone's willing to pay a million dollars for it, not because it's intrinsically worth that much. I think people are starting to realize that when someone says a company has this valuation, or this technology is capable of something, it doesn't actually mean it can do the thing or is worth that much. It's speculative — there's a lot of speculation being sold. Which, morbidly, connects to one of my latest morbid interests: following the rise of Polymarket and this legalized social gambling that's happening. It's a sick symptom of this very speculative, hype-driven world — it's like an ouroboros, the hype eating itself. And I see Polymarket that way, or Kalshi — I'm not going to discriminate, they're both awful. They're both a manifestation of the snake eating itself. Yeah, it's like meta-hype in some way. It's the hype of the hype — can we bet on the hype? Correct. Yeah, it's like when we all looked at NFTs and walked away — except this is NFTs as an entire gazillion-dollar industry being integrated into every aspect of our lives. NFTs stayed stuck in a corner, and the true believers got to go do what they wanted to do. Which, by the way — the metaverse recently shut down. A billion dollars, a billion with a B. I don't think all of AI governance has spent a billion dollars in its existence. Yeah. Well, and who saw that coming? Only everybody, you know? Correct. Yeah.
So I want to be conscious of the time here. Ramon, we've covered an awful lot of ground. Any final thoughts you want to leave listeners with before we wrap up? Yeah, I just want to go back to the two themes that have clearly come up in this conversation: one is agency, and two is discernment. And I think that applies to every aspect of our lives, because every aspect of our lives is now digitally mediated. It is our responsibility as consumers to exercise those things, right? Figure out what I do and don't have agency over, execute that agency, and learn how you can execute your agency. And two, learn discernment: what is good, what is bad, what is positive, what is negative. What I have seen very positively over the last 10 years in this field is that people have become more discerning about certain things. And the agency part is the ability to execute on that discernment. Yeah. Well, it's exciting that there are some green shoots of more discernment, and of people fighting back against the powers that be here. Yeah, I think so. Like I said, as a cultural observer of social media, some of my favorite accounts are just regular people. There's one that's now been copied by multiple different accounts: this guy — and he looks a particular way — quoting one crazy thing a tech CEO said, every day in 2026. There are actually three or four similar accounts now, and every day it's some fresh nonsense. But I love that it's just an influencer meme talking to regular people. It's not me, it's not an expert — it's literally an influencer meme. And I love that that exists, because that's what changes the collective psyche. Well, and can I — I just had a bit of a brainwave while you were talking.
And I feel like it ties back into something you said earlier: it feels like snark is actually a very powerful weapon against the manipulators of power and the eroders of our agency. Oh, I mean — I was born in the 1980s; I'm a millennial/Gen X cusp person. We live and die by snark. One hundred percent: there's no better way to disarm somebody with a very fragile ego than to be snarky. I love that. In defense of snark — that's awesome. Ramon, a big thank you for coming on. This has been such an interesting conversation, and you've given me a ton to think about, so I really appreciate all your insights. Thank you so much. It's been an absolute pleasure to chat with you. If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.