Stare Down the Bull with Susan Hunt

Building AI That Actually Works: A Conversation with Joseph Hackman EP 5

28 min
Dec 8, 2025
Summary

Joseph Hackman, founder and CEO of Permanence, discusses his 20+ year AI journey from coding at age six through roles at Intel, ASAP, and Attentive, to building an AI coding agent that solves complete software engineering tasks to human quality. He explains Permanence's approach to AI adoption, maintenance automation, and his realistic five-year outlook on AI advancement.

Insights
  • Enterprise AI adoption fails when tools require human coaching; successful AI must deliver complete, verified solutions autonomously without human intervention in the workflow
  • The maintenance burden in enterprise software is a massive cost center that engineers avoid, making it ideal for AI automation and freeing resources for feature development
  • AI advancement may be slowing in core science, but real-world impact will accelerate as companies build practical experiences and domain-specific layers on top of foundational models
  • Quality is the primary differentiator between AI companies that truly understand their domain versus those capitalizing on hype; demos are easy, controlling AI systems at scale is hard
  • Talent attraction in AI startups depends on direct customer impact loops—engineers prefer solving real problems with immediate feedback over publishing papers or building research labs
Trends
  • Enterprise AI tools shifting from human-in-the-loop assistance to autonomous agents that complete full tasks without supervision
  • Maintenance and technical debt becoming a strategic competitive advantage area for AI automation rather than feature development
  • AI hallucination and reliability issues persisting despite scale improvements; fine-tuning and domain expertise becoming critical differentiators
  • Consolidation of AI tooling around existing developer workflows (GitHub, IDEs) rather than standalone applications
  • Domain-specific AI expertise becoming more valuable than general-purpose AI knowledge for enterprise adoption
  • Security scanning integration with AI code remediation as a near-term, high-ROI enterprise use case
  • Shift from AI as primary driver to AI as primitive/infrastructure layer in broader software experiences
  • Venture capital accessibility lowering barriers to entry but not execution ability in AI startups
Topics
  • AI Code Generation and Autonomous Software Engineering
  • Enterprise AI Adoption and User Experience Design
  • Technical Debt and Maintenance Automation
  • AI Hallucination and Reliability in Production Systems
  • Domain-Specific AI vs. General-Purpose AI Models
  • Security Scanning Integration with AI Remediation
  • Transformer Models and Language Model Scaling
  • AI Talent Attraction and Team Building
  • Customer-Centric AI Product Development
  • AI Safety and Quality Assurance
  • Five-Year AI Technology Roadmap
  • Evaluating AI Vendors and Cutting Through Hype
  • Neural Networks in Natural Language Processing
  • Legacy System Modernization with AI
  • AI Agent Architecture and Autonomous Task Completion
Companies
Permanence
Joseph Hackman's current company; AI coding agent that autonomously solves software engineering tasks to human quality
Intel
Hackman's first major role as technical lead for machine learning; semiconductor company that sponsored his graduate studies
ASAP
Customer service AI company where Hackman built the ML engineering team; focused on chat data and neural language models
Attentive
Company where Hackman served as head of AI before founding Permanence
Nuance
Company Hackman left due to perceived missed R&D opportunities in neural networks and language processing
Salesforce
Company Susan Hunt worked at before joining ASAP around 2016
Columbia University
Institution where Hackman taught data center systems as faculty
OpenAI
Referenced for GPT-2, GPT-3, and GPT-3.5 models that advanced language model capabilities beyond expectations
People
Joseph Hackman
Founder and CEO of Permanence; former ML technical lead at Intel, ASAP, and Attentive; 20+ year AI expert and episode guest
Susan Hunt
Host of Stare Down the Bull podcast; former Salesforce executive who joined ASAP and knows Hackman professionally
Quotes
"We are not a research lab at Permanence. We are laser focused on taking all of this technology that exists and making lives better with it."
Joseph Hackman
"If you have a human that's like overseeing it, you haven't achieved the complete solution yet."
Joseph Hackman
"It's easier than it has ever been before to build a demo. But it is harder than ever before to actually control an AI system."
Joseph Hackman
"Quality is kind of the way that you tease out this actual difficulty."
Joseph Hackman
"The emotional feedback is pretty instantaneous. And I think that real application is what set us apart."
Joseph Hackman
Full Transcript
This is Stare Down the Bull. I'm your host, Susan Hunt. Around here, we tackle the hard stuff: how to lead with clarity, leverage AI, and turn strategy into real results. My guest today brings a powerful perspective on how to do exactly that. Let's dive in. Welcome to Stare Down the Bull. I'm Susan Hunt and I'm your host. Let's dig in. Today, I'm thrilled to introduce Joseph Hackman, founder and CEO of Permanence and a true software engineer extraordinaire. Joseph was the technical lead for machine learning at Intel, later built the AI team at ASAP, was the head of AI at Attentive, and has since gone on to found Permanence. He also shared his experience as faculty at Columbia University teaching data center systems. Welcome, Joseph. Thank you. So nice to be here. Good to be here, too, with you. You've been on this AI journey longer than most. I'd love for you to give our audience a little bit of insight into what you've accomplished and what your journey was like. I know you have told me a story that you started coding when you were six years old, which I love, and I'd love to hear more about it. It goes directly to the idea that having access to different things ends up creating great tech leaders. Yeah, access, I think, does matter. I was really lucky to have access to a computer. And my brother, who's three years older than me, was able to teach me how to code way back when I was six. Impressive enough that he learned without any additional coaching. But I started when I was six. I was writing in BASIC, which came with every DOS or Windows computer. It had a pretty good IDE, actually a great way to learn programming, I think, but very self-contained. And I kind of picked up new languages. The internet was kind of just getting started, but the people who were on the internet were largely software engineers. So a lot of the content that was available really early on was about programming. And that's kind of what got me into AI. 
I got really interested in chatbots and this idea of these competitions to see who could build software that would pass the Turing test for the longest amount of time, or something like that. And I started doing what you would now call AI, playing with language models that were open source and tinkering with them for a junior high school science fair project back in 2001. And I've just kind of maintained an interest in it since then. I kept coding all the way through elementary school, junior high school, and high school, went to college for computer hardware engineering, and also Chinese, and thought that a lot of the problems that I had in studying Chinese could be solved with a computer. In particular, I was really interested in this case of trying to determine whether a word in Chinese text was a native Chinese word or a loan word. Basically, should you try to sound it out phonetically? Or should you look it up in a dictionary? And in college, I had a very, very rudimentary solution for this. It was pretty bad. But it got me really thinking about how the state of AI was evolving. There was the so-called AI winter. So actually, most of the time that I was a kid, the field of AI, particularly neural AI, wasn't really moving, especially in natural language processing and computational linguistics. And I was really excited out of undergrad to get my dream job at Intel. I was a hardware guy by training, but I wrote a lot of software. And so Intel was the place to be. They were the number one semiconductor manufacturer in the world at the time. And they very generously sent me to grad school. And I got to make a much better version of my Chinese is-it-a-transliteration program using statistical methods. But it also brought me up to speed on what had happened in the world of neural networks over the past 20 years. And it was right at the time I was going to grad school. 
I started in 2013, which was right after the Word2Vec paper had come out, a big turning point. And I was leaving grad school in 2015, when another major paper in neural language processing, on neural machine translation, was coming out. So I happened to be in grad school at the time when neural networks were making their return in language processing, right after convolutional neural networks had made a big splash in image processing. That's right. That was at the same time that I was leaving Nuance, actually, because I thought that they were missing an opportunity in R&D. And I'd seen a lot of guys like you coming into the market with brand new systems, not having to deal with legacy garbage, building new applications that were really interesting and fast moving. And a company that should have really been able to be on the cutting edge of things kind of let that go. So you were exactly the kind of person I was afraid of at that time. And it's why. And I'm glad I was right about it. It also happens to be right around the same time that we met, right? Yeah. Yes. So I moved to New York, left Intel, and joined ASAP. I thought they were in a position to generate a lot of really interesting data. They were in customer service. Yep. And the thesis was that the volume in the customer service business would move from phone to chat. And that was my personal interest: this chat data. If you had a bunch of data that was in the right format, this chat format, and also, hey, suddenly there's this seemingly nascent revolution in neural language models, that's where you want to be, as the place that has the data. So you know, that's why I joined ASAP. And when you joined, it was early 2016, maybe? Somewhere around there, yeah. I came out of Salesforce at that time because I had already left Nuance a couple of years earlier. Yes. So yeah, it seemed like the place to be. And I think it was a pretty good bet. 
It was a pretty good bet. It was fun. And you built an incredible team, which is now your founding Permanence engineering team, which in my opinion is second to none in the industry. You want to tell us a little bit? I'm very flattered. It's true. It's very true. I really would like to understand how, A, you attract the talent, and then also why they follow you. That says something about your leadership and their desire to be with a startup that they believe in, because honestly, all of you could go to any company that you wanted to in AI right now. So talk to me a little bit about that little team of yours. So I was leading machine learning engineering. My task was to take the latest out of research, both inside and outside the company, and turn it into products that deliver value for customers. And I think that is the seed of how I built the organization and the team. I really like making customers successful. That's the joy of the job. You sit down with somebody, you hear about their problems, and you're like, I can fix that. And then you come back a week later and you've fixed it. And they're very happy. And obviously you're very directly changing at least a very tiny portion of the world. And that emotional feedback is pretty instantaneous. And I think that real application is what set us apart. So there were a lot of AI labs that were springing up at this time, or had sprung up, or were even older. And they were starting to recruit reasonably aggressively, but there was often a huge disconnect. Even at really big companies, like FAANG organizations, there was a huge disconnect between research and production. And for us, we were a startup, we were moving very quickly. And we would say, hey, there are hundreds of millions of end customers. You can make their lives better directly. We'll put you directly with the leadership of these companies. You can talk to them. You can hear the problems. You can hear what needs to be fixed. You can fix it. 
You can have them directly tell you, yep, that's better. That is a much better loop than submitting papers for publication, right? You don't know if you're actually making real lives better or not. That was kind of our key advantage: saying, if you work here, you will get to be on the cutting edge and also have a real impact in the world. And that's how I continue to think about this thing. We are not a research lab at Permanence. We are laser focused on taking all of this technology that exists and making lives better with it. So I can't say we're going to be the most famous for having super high impact publications. That's not the thing. The thing is you will get to sit down with engineers and they will tell you that their life is miserable and why their life is miserable. And then you get to go back two weeks later and say, hey, look at this thing I built using the technology at Permanence. Does this improve your life? And they tell you it improves their life. That's the addictive loop that we provide. And that's not for everybody. What I would say is the people who are on this team, who decided to be at ASAP and then follow me to Permanence, are the people who really love that work. They love getting into the nitty gritty and producing that change and that value. The thing that's really interesting about the way that you go after a space is that you solve a specific problem. There's so much hype around AI. There have been a lot of AI failures in enterprise. You actually came in and looked at a problem that could be solved that's really going to be impactful, not just to enterprise, but across the board with all companies. And you went after that. Can you just tell us a little bit about the product that you've developed and how some of your customers are using it? I think it's really key when you're developing any technology to think about where you're going. 
And in my mind, the way that you get where you're going is you build one complete aspect of your vision. Yeah. You don't want to tell people your vision in narration. What you want to do is say, here, you can actually touch it. You can feel it. It's a different mode of interacting with the software. And I think that sets us apart from the other tools in our space. So we do AI for software engineering. There are a lot of tools that do AI for software engineering. And what you're seeing is that in the average tool, the way that the human interacts with the tool is changing very quickly. People started out doing type-ahead. And now they're going broader: you have an agent kind of running alongside you in your IDE. I think the people who are the best are actually plugging directly into the workflows where the engineers already are, so probably GitHub. So we started from the very beginning saying, if you can do type-ahead in an IDE, that gets you 0% of the way there, basically. Our vision for the product was that you don't need to tell it to do anything. You have an AI agent that finds work and solves entire tasks to human quality, so that you as a human are not coaching the AI. You're not driving the AI. You really shouldn't even be having to check the work. We've never shipped a bug. You should only be getting completely correct work. And so rather than saying we'll do everything a little bit, we'll say, hey, what is a subsection of the world where we can achieve this standard? And then we'll say, this is how the world should operate. Every year, we're going to add more and more things that we can do to this standard. That's how we expand. What do you think the top three value props are for an enterprise for using Permanence? How would you describe that? So I think the top one is quality and what that unlocks for you. 
So right now enterprises are caught in this squeeze. Costs are going up, and maintenance is a very, very high portion of your cost basis. And so if you're in this space, you have a couple of options. One, you can still do all of the maintenance work that you have to do, but this means that you don't have as much money left over to build delightful products and features. Or you let quality slip and you keep building products and features. The second one actually mortgages your business, because you keep building more products and features, which means you have even more maintenance work to do, and that maintenance work is not getting done. And so... And that becomes a security risk a lot of times, correct? That can explode into a security risk. Yeah. People like when the software that they use acts in the way that they expect it to act and they can trust it. And so you really can't put off maintenance forever. And so what we're saying is, engineers don't like doing that work, right? It's very easy as an engineering leader to say, yeah, let's just defer maintenance, right? Just keep deferring maintenance. But eventually it'll be very hard for engineers to work in a code base that has a lot of deferred maintenance. And so what we say is, it's boring work. Engineers don't want to do it. And the same reasons that engineers don't want to do it actually make it better for AI. It's kind of samey. It's boring, menial work. That's actually more of a reason to have AI do it. And so by doing that work, we free up all of this resource for you to build new products and features and actually accelerate that. So not only do you have a lot more resource to do products and features, because that maintenance burden is paid down, but those engineers also get to work a lot faster, because they don't keep stubbing their toes on maintenance backlog items. So I think that's problem one. Problem two, which we solve really well, is AI adoption. 
So one of the things that's been really funny to hear a lot from software executives is that these tools are sold as such a great thing for software engineers. And they keep hearing that in startup land, they're having these great results; people are able to build demos faster than ever. Why in enterprise software land are they not? Why aren't engineers clamoring to pick these things up? And when you talk to the engineers, they're having a completely different experience, right? Yeah, you can build a demo faster than you ever could before. That adoption is great. But trying to get an AI to solve a real problem in enterprise is way harder than just doing it yourself. It's like you have a super junior employee who really doesn't understand anything, but also doesn't learn over time, right? With a human, you accept that, yeah, if you hire somebody and it's their first day on the job, it is going to be way harder to teach them to do a thing than to do it yourself. But you're making an investment in the workforce of your company. With AI, that's not happening, right? It doesn't feel good for humans. And so I think it's very obvious, once you actually dig in, why engineers aren't actually that excited to use a lot of AI tools. Some of them are pretty good. Type-ahead is, I think, better than it was before. But we are not achieving what we should be achieving in AI. And so how Permanence thinks about this differently is that we're not asking you to do anything. I don't think humans should be using AI, right? AI should just be doing the work. That's it. If you have a human that's overseeing it, you haven't achieved the complete solution yet. And so for us, a human doesn't interact with the Permanence AI coder until the work is completely done. And the AI coder says, here's what I did. Here's why it's important. Here's how I can prove that I'm right. And then a human just goes, yep. 
And so now humans are doing their same workflow. It's the exact same workflow you would do if you were reviewing a human's work. They don't change anything. And suddenly they've adopted AI and their performance metrics go through the roof, because it took them five minutes to accept work that would have taken a human eight hours, or 10 if they were coaching an AI. Yeah. This seems like a logical segue: to be utilized after you use one of the security scanners. Are you seeing that collaboration happen a lot in the market with your technology? Yeah, it's kind of our bread and butter. What's great about security scanners is that they provide a ready-made list of tasks. And we're pretty battle hardened on improving security scan results, because everybody tends to use the same few security scanners, and the things that the security scanners find tend to be in a kind of closed set. So we've seen a large part of the security universe, and we know that the AI coder performs well on these tasks. That's part of how we've never shipped a bug. I think the team is incredible at Permanence. I think the product set that you've developed is really, really useful, a smart way to use AI that's going to bring a lot of efficiencies and accuracy to enterprise and others. How do you see AI in the next five years? I know your view is a little bit different than others'. So can you just share a little bit about how you see it evolving over the next five years? Is it going to kill us all, or will we just find efficient ways to use it? I think it's pretty unlikely that it kills us all, which is good. It's very, very hard to guess. In the past, some of the guesses I've made about AI have been correct and some of them have not. So for context setting: I don't organize my photos anymore. I haven't for about 15 years, a little bit over 15 years. 
Even at that time, you know, I was reading the literature, and image recognition was getting pretty good. And I realized that in a couple of years, it would be easy to build a tool that could determine, say, every photo that has my wife in it. And that is exactly what happened. Yes. Right. It's come standard on every iPhone. You don't need to do anything as a customer to get this experience. What I did not expect, actually, was so much of the advancement in language models. I mean, it was kind of my area. There was a lot of thought about what you use for training data. So either you use small, very domain-specific training data sets, and I think by 2019, even with more advanced training methodologies like transformers, we were starting to tap out on what you can do there. Or you train on everything you possibly can. And there are a lot of problems with training on potentially dishonest data, or training on low quality data, that needed to get solved in order to make this possible. And that's kind of what underpins GPT. And totally honestly, once that worked, once they got that working, which is about GPT-2, I figured that it was going to slow down, because you were training on all the data that you could have. You wouldn't really get much more from scale. But it turned out they were not using enough parameters to actually capture that. And so GPT-3 and GPT-3.5 were huge advancements over that. That was actually really shocking. It went from being very, very slow over a 20-year period to, in the last two years, it feels like everything has changed in AI. I know that's not how it was, but that's how it felt. It felt really fast all of a sudden. Yeah, I thought we were more training data constrained than compute constrained. And that was a... And so, one, honestly, I don't really know what's going to happen. I don't have a super strong thesis. 
I do think that a lot of the core problems in these models are going to stick around. So AIs have a thing called hallucination: they create something that isn't real. This is showing up a lot; a ton of different tools cite legal cases, and they're making the cases up, right? They sound plausible, but they're not real. Nothing is really changing in the science right now that will prevent that from happening. The aspiration is that with enough fine-tuning data, this will be attenuated, it will be decreased. But currently, there's no real belief, at least in my mind, that there's something on the horizon where suddenly this is going to change. I think we will probably get substantially stronger models that can solve harder tasks. I think this memory problem, which concerns how previous logic from a model gets compressed so that you can have a model reasoning over a larger token horizon or a larger time horizon, is going to get substantially better. And so agents will do fewer really nonsensical things. But I don't know that there is going to be as huge a shift in the actual underlying science of AI in the next five years as there was in the past five years. But the world is behind. I do think the experience that you have as a human being operating in the real world is going to change really dramatically as people start working these models into everything that they do. So that's going to change. And I think what's going to enable that is actually the science slowing down a little bit. Because once the science slows down, it's worth building experiences and building layers on top of it that stick around. And I think that's ultimately what's going to happen. Every company is going to be building experiences on top of these models, where the model functions more as a primitive than as the thing in the driver's seat. AI is already touching a lot of our lives, but it'll soon be touching much more of it. 
And I can't fully predict what that's going to look like. Just one last question. Is there any advice that you can give to enterprises to help them decipher the AI hype from what is actually real? Is there anything that you can say that can help them cut through the noise? Because there's so much noise in the industry. I always tell them: look at who is actually running the company. Who are the resources that they have within their company? Are they people that understand AI the way that the Permanence people do? If the answer to those questions is no, I would take a step back and maybe work with a company that really understands things. What is your perspective on that one? I think that's fair. I would expect, if you're going to pick a partner for this kind of AI transformation, or to buy an AI product from, they should either know AI really well or they should know some specific domain really well, because adapting AI to individual domains is, I think, where the world is going. Realistically, the best are the people who can do both. So what you want to see is that there are human beings that you could reasonably hold to account about both the AI side and the domain-specific side. Because if you go and you have a bad experience, and none of these people are AI experts or domain experts, they're like, what do you expect? We're not AI experts, we're not domain experts. It's kind of on you. And I, unfortunately, am seeing that a lot. It's in a lot of ways easier than ever to buy a domain name and get venture capital. And that is not necessarily tied to execution ability. Agreed. What I would say in general, from a black box perspective, where you're not looking at the company, is to look at the quality. I think that's the primary differentiator between people who really know what they're doing and people who don't. Because it's easier than it has ever been before to build a demo. But it is harder than ever before to actually control an AI system. 
And I think quality is kind of the way that you tease out this actual difficulty. Yeah. And this is also what makes me so proud about our quality record. Yes. Thank you, Joseph. Thank you for being on Stare Down the Bull today. Thank you so much. It was a great conversation. If you need more information, you can go to permanence.ai to contact Joseph or get more details on the company. Thank you. Thank you so much. Thank you for listening to Stare Down the Bull. If today's conversation sparked new ideas, share it with a colleague and keep the momentum going. The future belongs to leaders who act, so subscribe and join us again next week. This podcast is part of the Sound Advice FM network. Sound Advice FM: women's voices amplified.