Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

324 | Elizabeth Mynatt on Universities and the Importance of Basic Research

74 min
Aug 11, 2025
Summary

Elizabeth Mynatt, Dean of Computer Science at Northeastern University, discusses how the partnership between universities, government, and industry has driven technological innovation since the 1950s. The episode explores how basic research in academia—often without immediate commercial applications—provides the foundation for breakthrough technologies, using examples from AI, RFID tags, and dairy farming automation.

Insights
  • Universities serve a unique role in the innovation ecosystem by pursuing long-term, high-risk research that industry won't fund, keeping fields alive during 'winter' periods when commercial viability is unclear
  • The dual business model of universities (research + education) is economically efficient because it attracts talented researchers willing to work for lower salaries in exchange for creative freedom and mission-driven work
  • Federal investment in basic research generates returns measured in the billions-to-trillions ratio, but this success is being undermined by proposed funding cuts that risk losing talent to international competitors
  • Human-centered computing reveals that understanding people's behavior and needs is often harder than solving the technical problem, requiring interdisciplinary collaboration with psychologists, sociologists, and ethicists
  • Technologies often find unexpected applications decades after their invention—RFID tags developed for WWII aircraft identification became dominant in dairy farm management through university research
Trends
  • Shift toward interdisciplinary research and combined majors (50%+ of students at Northeastern pursue dual majors combining CS with psychology, biology, ethics, law)
  • Growing emphasis on ethical frameworks and responsible AI in computer science education, driven by student demand after witnessing real-world harms from tech platforms
  • Universities increasingly facilitating faculty mobility between academia and industry to bridge research-to-product gaps and bring real-world constraints back to campus
  • Regulatory and business model misalignment delaying healthcare technology adoption despite technical readiness: regulatory approval and reimbursement models lag innovation
  • Open science and preprint publishing gaining momentum post-pandemic, reducing journal paywall barriers and accelerating knowledge dissemination across research communities
  • Smaller, domain-specific AI models emerging as the next research wave while industry focuses on large generative models, positioning universities to lead the next innovation cycle
  • Public good technologies (assistive tech, aging-in-place solutions) remaining underfunded by industry despite broad societal benefits, requiring sustained government and university investment
  • Privacy and simplicity emerging as potential competitive advantages in consumer tech, though adoption remains slow due to entrenched data-collection business models
Topics
  • University-Industry-Government Innovation Partnerships
  • Basic Research Funding and ROI Measurement
  • Human-Centered Computing and Design
  • AI Ethics and Responsible Technology Development
  • Aging-in-Place Technologies and Assistive Computing
  • Internet of Things Security and Privacy
  • Interdisciplinary Research and Education
  • Technology Transfer from Academia to Industry
  • Federal Science Funding and NSF Policy
  • AI Winters and Long-Term Research Sustainability
  • RFID Technology Applications
  • Open Science Publishing Models
  • Regulatory Barriers to Healthcare Innovation
  • Data Privacy and Consumer Technology
  • Talent Retention in Academic Research
Companies
Google
Cited as example of university research (Stanford) directly leading to commercial innovation; also mentioned as emplo...
Meta
Mentioned as source of faculty joining Northeastern; represents industry competition for academic talent
Amazon
Referenced as example of e-commerce enabled by university-developed number theory and cryptography research
Xerox PARC
Discussed as premier research lab where Mynatt worked; example of industry research environment vs. academic freedom
Bell Labs
Mentioned as competitor to Xerox PARC as one of world's best research labs in the 1990s
NVIDIA
Referenced as example of companies profiting from AI gold rush by selling hardware infrastructure
WeTransfer
Cited as example of companies unilaterally changing terms to train AI on user data without consent
IBM
Mentioned as company that rebranded ubiquitous computing research as 'pervasive computing' in the 1990s
Stanford University
Referenced for foundational internet research that led to Google's creation; example of university-to-startup pipeline
Cornell University
Cited for developing RFID tag technology for dairy farm applications, demonstrating technology transfer from military...
Northeastern University
Mynatt's current institution; example of R1 research university with strong industry partnerships and co-op programs
Georgia Tech
Mynatt's previous institution for 25 years; example of research university managing dual education and research missions
National Science Foundation
Primary federal funding agency for basic research; discussed as critical to innovation ecosystem and subject of propo...
ACM (Association for Computing Machinery)
Professional organization moving toward open publishing models to reduce journal paywall barriers
People
Elizabeth Mynatt
Guest discussing university research partnerships, human-centered computing, and threats to federal research funding
Sean Carroll
Podcast host conducting interview on research funding and innovation ecosystems
Vannevar Bush
WWII scientific advisor who established the foundational framework for government-funded university research in the 1...
Fred Brooks
Famous software engineer and virtual reality researcher; quoted on user behavior and technology design principles
Sethuraman Panchanathan
Previous NSF director who promoted 'missing millions' initiative to distribute research funding beyond top 10 univers...
John Seely Brown
Researcher cited for concept of innovation occurring in 'white space' between disciplines
Quotes
"People is always the hardest part of the equation. Everyone assumes technology is the hardest part. But understanding what people will do and why they will do it is really challenging."
Elizabeth Mynatt
"Universities are these places where people get exposed to research. They get exposed to concepts in their classes. They work in laboratories that are essentially time machines into the future."
Elizabeth Mynatt
"The billion to the trillion is the point. It is strangely economically effective because of universities working with this dual business model."
Elizabeth Mynatt
"We are really at a dramatic point of failure in the system. We've got this flourishing forest and we're clear-cutting it with the myth that something else is going to grow and it's not going to grow."
Elizabeth Mynatt
"Never underestimate the laziness of the user."
Fred Brooks
Full Transcript
Hello everyone, welcome to the Mindscape podcast. I'm your host Sean Carroll. A couple months ago I did a bonus episode that you might remember on science funding, and I was a little worried when I did the episode, in response of course to proposed cuts in science funding that the government has put forward, a little worried that it might seem a little dry, a little inside baseball. You know, we were talking about indirect costs, overhead, how different ways of applying for grants played out in the academic environment and so forth, but in fact I got a lot of positive feedback about that episode. People explained that they knew nothing about this, you know, they'd never heard of any of this stuff, and it turns out that it's really important. I guess maybe I shouldn't have been surprised, because we academics, as much as I love them, my fellow compatriots in academia, we love doing our research, we love doing our work. Some of us love taking that work that we do and explaining it to broader audiences, but mostly we're trained to sit in the office or the lab and do our thing and talk to our colleagues. We're not very good at explaining ourselves to the broader world, and meanwhile the assault continues on the research infrastructure that has been built up here in the United States and the world. So maybe it is useful to talk about how academic research works and its relationship to the important technological breakthroughs that we all benefit from. There's a very clear story where you see the end result of a certain research tradition. You might see just the last bit of it, and that last bit, the one that leads to some important technological innovation, is often carried out in the context of private industry, corporations, who want to make money off of something. But there's very often a long lead-up to that where important basic research was being done within academia. AI, for example. AI is something that is a big deal right now. We're in the go-go days of AI, but it's an old tradition.
AI has been actively explored since at least the 1960s, and since the 1960s it has gone through at least two different episodes of what are called AI winters. Periods of time, one in the 1960s, another one roughly in the 80s, where people were giving up. We're like, nah, this isn't gonna work. This artificial intelligence thing, it'll never happen. Maybe, arguably, if it were left to the profit motive, people would have stopped studying AI entirely. But academics are not as driven by the profit motive. They can keep a field alive, keep thinking about it, keep trying to make breakthroughs without worrying about the next quarterly report for their company. So not only can academics do more sort of blue-sky research, be more optimistic, look for long shots that may or may not pay off, they can also talk to each other and they can give away what they learn, you know, for free, roughly speaking, in ways that would be much harder to do in the private sector. So today's guest, Elizabeth Mynatt, is the dean of computer science at Northeastern University. And we're going to talk about two things. First, we're going to talk about her own research, which is in human-centered computing, how to make the ubiquity of computers around us more palatable to people, more user friendly, less invasive and more helpful overall. But then the other thing is that she's been very active in explicating the ways, I suppose, in which academia has partnered with industry and with the government to generate a hugely effective technological innovation machine. And government and industry and academia all play a crucially important role in this story. Her own expertise is in computer science, so that'll be most of the examples that we use. But you could tell exactly the same story about physics, about chemistry, about biology, about medicine, about energy, about a whole bunch of things. And Beth is very, very good at coming up with examples that are very vivid.
So even though it's still a little inside baseball, trying to figure out how to justify the system that we have for academic research, it's still also very illuminating about the history and the prospects for the future, about how this research is going to go forward. You know, we're not done yet. We haven't come up with all of the technological innovations; there are more yet to go. We still need a healthy collaboration between these different sectors. It's been super duper successful. We're both a little worried that we've lost sight of that success and are undermining our own successful strategies. But at the end of the day, we choose to be optimistic that we're going to keep it going, or at least we have the opportunities to keep it going and continue discovering wonderful new things, both about our universe and about how to build things that help our everyday lives get a little bit better than they were before. So let's go. Elizabeth Mynatt, welcome to the Mindscape podcast. Thank you. Glad to be here. So we're going to talk about these big-picture questions about how research done in universities trickles down, as it were, to industry and our everyday lives and so forth. But of course, for my podcast, I would like to start talking about your own research. You're an active, I guess, computer scientist is the right word. Computer scientist, yes. Right set of words? Okay, good. And I love some of the jargon that goes along with your research activities. One of them is the idea of human-centered computing. So tell us a little bit more about this, especially since you've been doing this for much longer than the last few years we've had this AI resurgence and people have strong feelings about that. But you've been thinking about this for much longer than that. Much longer, back during the AI winter, when we didn't call it AI.
So human-centered computing is a term, I guess it's about 20 years old, and it coincided with computing technology moving out of workplaces, you know, official areas of business, and more into everyday lives. And as it was getting closer and closer to mere mortals, then the question was, well, instead of contorting the human being to how they would use computers, how do we actually start designing computers and computing systems such that people would actually want to use them? So my PhD studies were in the forerunner of this, which was human-computer interaction. So it really started with the psychology, human factors, of how people interacted with screens. And then the social impacts of these technologies became more obvious. And so human-centered computing has the psychology built into it, but also more the social sciences. So we will do ethnographic work. We will do qualitative research. We will understand, you know, deep ethical and privacy tradeoffs in how computing shows up in everyday life. And that is the motivation for a now pretty robust area called human-centered computing. And are you directly collaborating with psychologists, sociologists, etc.? Every day. This is a completely unfair question: is there any way to summarize what we've learned from thinking about this? Is there anything non-intuitive that we figured out? Well, I think the first non-intuitive thing is I think people is always the hardest part of the equation. Everyone assumes technology is the hardest part. But understanding what people will do and why they will do it is really challenging. And, you know, we get it right sometimes, we get it wrong sometimes. I think one of the things that we got wrong was understanding how deeply personal mobile computing and cell phones turned out to be. Oh, OK. When we were looking at them, I worked at Xerox PARC a long time ago. That was back when we called it ubiquitous computing.
And when we were looking at these little hand-sized devices back around the time the Palm Pilot was coming out, we really saw these as interchangeable. They were like electronic post-it notes, digital post-it notes. And they would be lying around; we really thought of them like paper and pencil. And so we imagined hundreds of them in the environment, but we never really thought of them as so deeply personal as the way they ended up becoming. And because these devices are deeply personal, people will share information and use them in ways that no one really predicted back in the 90s. Did you have a Palm Pilot? I did have a Palm Pilot. I had one. I can't say I ever used it that much. I'm sure that at least half the audience has no idea what we're talking about right now. My favorite story from this was the Xerox PARCTab. So again, about Palm Pilot size, so a little bigger than your typical smartphone now. And it had a great gestural language, pen language, to interact with it. Just awesome. And people immediately started putting little apps on it. And so a calendar was one of the first apps. You could look at it for your calendar. And then the calendar had a little alarm to remind you that you had to go to a meeting. And so socially, everybody knew that sound. So of course, the first app that someone just kind of added into the mix was a quick gesture to make that sound. So if you were in a boring meeting, you could make the gesture. The sound would go off. You would look surprised. I'm sorry, I have to get to this meeting, and you would gracefully exit out of the room. Who says human ingenuity is dead? Yeah, definitely opens up new opportunities. I mean, is there any sense in retrospect that we should have seen the ubiquity of cell phones, pocket little computers? So I always tell people I was setting my status back in 1993. We used a little app. I think it was just called MBone, for Multicast Backbone.
And you could use it to chat over the Internet. It was a big deal back at the time. And I would use it to chat with my lab mates. And we had it just because it was a cheap way to always just talk to each other, as we had all disappeared into summer internships. So we were all working at different companies in the Bay Area. And so we would just chat with each other. But what we found was that we could reset the name, you know, the listing of who we were in the app. And so we just naturally started sharing our status. It's like, Beth, I'm out to lunch. Beth, I'm looking for something to do this evening. Who's interested? So as soon as those little baby affordances were there on the Internet, people started doing that. And so Facebook was, you know, many, many years later after that. But it was just this natural inclination. As soon as you let people connect to each other over these types of technologies, they will find really interesting ways to do so. I've definitely told the story several times before, but in the very early days of web browsers, right, of Mosaic and Netscape, my friends will tell you that I was proselytizing to them. I'm like, this is going to be really, really big, this worldwide web thing. And they had no idea what it was. And I couldn't explain what it was going to be used for. You know, I would say things like, but look, you could order a pizza using this technology. And they would just say, I can order a pizza already. I don't see what the advantage here is. So I could kind of dimly see the importance of it, but articulating how it comes to be is a very special skill. I was right, but nevertheless felt embarrassed. I was in a National Academies meeting looking at the future of broadband. And so this was all about arguing, like, the size of the pipe for downstream content versus the size of the pipe for upstream content. And watching Seinfeld was the killer.
The killer app. That was really the focus of the committee, looking at it from a cable broadcasting point of view. And I was on their panel, and most of them didn't have real internet connections. And I'm on their panel with the internet just as a normal user. And I'm like, people are sharing news. People are sharing baby pictures when babies are born. And they looked at me like I was an alien with three heads. And I said, you know, people are going to start getting their news from their social feeds as opposed to traditional news organizations. And they all but kicked me out of the room for being so heretical and so ridiculous. Because why would anyone listen to, you know, a bunch of random social feeds for their news when they could go to, you know, the major channels or the major newspapers? But I get the feeling nowadays that we've seen obvious successes with the internet, with Facebook, with social media. It's almost like it's the other way around now, like people make these dramatic proclamations about how this is going to change our lives that maybe you have to learn to draw back a little bit. Obviously virtual reality is one that I still believe in going forward. But more than once we've been told any day now that's going to be the next big thing. Any day now, the metaverse, we're all going to walk around with these headsets on. Great science fiction. But, you know, part of what we'll talk about today in the study we did is sometimes it is decades in the making in terms of understanding when these technologies will have such an impact. And oftentimes it will be a different application of that technology that really takes hold. So VR is incredibly powerful now in design. Automotive design relies on it day in and day out. But it's just not the sexy consumer walking-around-the-streets version that we hear about in the media.
Well, does your experience with handheld devices, cell phones and so forth give you any insight into what to expect down the road in terms of augmented reality, headwear, or even brain-computer interfaces that we all are going to have someday? So Fred Brooks, a famous scientist, software engineer who worked in virtual reality, said never underestimate the laziness of the user. Good. And there's a story where he walked into this fancy VR environment and you could walk around the environment and interact with things. And the first thing he requested was a stool to sit on, and that he would sit there and he would revolve the environment around him. So I think a number of these technologies, with BCI and all of this, they're actually just more work than they're worth. OK. And people are, you know, you can get little bits of information via your texting or whatever media app that you're on. And that's enough for people to do what they want to do. And if you're asking them to do more than that, no. So one of the most difficult things to forecast, if that's what we're in the business of doing, is time scales, right? Like maybe something will happen. But to say this can happen next year versus 10 years or 100 years from now is really, really hard. Is that something we're getting any better at? There are projects now that are trying a machine learning approach to predicting when technology innovations, or when research, will have an impact. I think one of the things that we've seen is those predictions tend to have a very narrow funnel. So they look at a roadmap trajectory of, for example, you know, the accuracy of a particular AI model. OK. But what they need to also take into account are all the contextual factors, right? So the business readiness and the availability of hardware and, you know, even the particular regulatory side of things. And it's really difficult to predict when the stars align. Yeah.
And that's the challenge. Healthcare is one of those fields that, in some sense, is frustrating, because there's so many innovations on the technology side. But because the regulatory and the business side hasn't caught up to that, many things that are possible never make it out into our day-to-day experience. So it's complex; predictions are very complicated. Especially about the future. Yes. But healthcare is an important one. And you already mentioned this phrase, ubiquitous computing. My impression is that one of the things that you and your people have been working on is how to help people who might be medical patients or just elderly people, how to design their homes so they can live more independently. So the longest thread in my work, and it's changed names. It was ubiquitous computing in the 90s, and then it became pervasive computing. IBM rebranded it at that point in time, and then it became the Internet of Things. And now I'm not actually sure what it's called anymore. But those are at least three names. And the gist is, if you have technology out in the world, out in your environment, and for me in particular in people's homes, how can technology provide support and services, awareness, in my case so that older adults can age with more independence, sustained quality of life, in contrast to moving to assistive care or other types of more institutional settings? And it's a wonderful mix. We can see the potential, right? So human behavior tends to show up in step functions, right? You seem to have perfectly reasonable control of your life and then things fall apart. And in reality, it's actually a long, slow, little decline. But we compensate so well for that that it masks what's actually happening. So there's a lot of things in the environment that could notice, for example, cognition slowly declining. Okay. And then how do you build supports or services around that?
How do you, for example, flag that this person is going to be more vulnerable to fraud and scams? And maybe turn the knob a little: a little less privacy and a bit more control by family members or other institutions to keep a tighter rein on things. Something you wouldn't do to someone in their 30s. But now, you know, in their 70s, it's like, okay, there's more risk here. So that's that human-centered computing approach, right? The technology can monitor, the technology can provide services and supports. But when and how to deploy them is deeply a human question. Well, and this is where the human side does come in. I can only think of my mom, who is an older human being who lives very successfully and independently but is of a certain age. And the apartment building she lives in, number one, you can't take the elevator without your cell phone and an app on it. And number two, you can't pay your rent without a different app on your cell phone. And at a certain age, like, you don't want to do that. You don't want to live like that. It's making your life much harder. I'm wondering if there's a balance between the enthusiasm of trying to help people with new technology versus adapting to the fact that not everyone always wants it there. Right. And this is one of my longstanding rants: we aren't even age-proofing technology. So online banking, online services, right? Great. But we can measure the psychological impact, the cognitive impact, every time a new update comes out. And so my mom does online banking. I'm doing more and more of it for her, because, I don't know, about every quarter they roll out a new interface and everything is different to her. And, you know, climbing that hill again to learn yet a different way to pay my bills online, it's just too much. Right. And so, you know, technology can make a difference.
But our model of how we view technology innovations out in the world, which is mostly based on 20-year-olds, is not the right model for how we would design successful solutions for older adults. So it's a real challenge to line up those stars. Yeah. And I'm wondering, like, when we get to the nitty gritty, how does an effort like that play out? I mean, are you sort of exhorting corporations or institutions to adopt better practices, or are they coming to you trying to figure out what the better practices are? You know, is it sort of natural or do you have to really nudge them? It is a little bit of both. There is definitely exhorting, and I can't say that that's terribly successful. Every so often a company will pop up and say, health care, older adults, huge market potential, right? How do we do this? And there's, you know, a few out there right now that are constantly advertising themselves to me. So you do get kind of the commercial innovation waves that look at this. But health care is in some sense the most complicated, because it is deeply regulated. And so, for example, if having an extra set of technologies in the home would be better preventative care for an older adult, right, who's paying for that? Where does that come from? And where does the technical support come from to keep things up and running? So, as I said, it's tantalizingly close. You can see, right, we have occupational therapists, we have even independent living environments where these technologies can make a difference. But it's lining up the business model alongside the technology capabilities and that long-term technology support. Because, yeah, having to use a cell phone to access the elevator seems like a bit much. It does seem like a bit much. Well, apart from elderly or medical uses, the Internet of Things, I think, in a lot of people's minds, has grown a little out of control. Right. Like, where you can't buy a toaster without it being Wi-Fi connected.
There are these really separate incentives, and I think we'll get to this later. The corporate world has a different set of incentives than either the consumers or the academic world might have. Right. Like, they want to collect data on us. They're putting AI and other Wi-Fi connectivity into everything. Not everyone wants it. Is this something that is a research program for people like you? Very much so. There's the old saying, right: nothing's free; if you think it's free, you're the product. And it's your data. And I am hopeful, perhaps forever the optimist, that some corporations will start to realize that simplicity and privacy are a selling point. As opposed to, you know, at some point we've gleaned as much data as we can out of these systems. But my mom was having to purchase a new washing machine this week. And my goal was, like, the simplest thing. Like, can I please have a knob? Like, no apps, no Wi-Fi, right? Just put your clothes in and be able to wash them. So this exhortation, showing business models where simplicity and privacy start to rule the day, is something that I'm hopeful about. And in the meantime, folks here at Northeastern University and others are doing quite a bit of work in understanding kind of the challenges around the Internet of Things. And it's in some sense a weakest-link challenge from a security and privacy point of view. So you have a bunch of things in your home, all Internet of Things, all Internet connected, they're all behaving, and you get one bad actor in the home. And it's grabbing information from everything else and then broadcasting out to the world. Like, you know, just one bad actor breaks the security and privacy safety net. So lots of work to be done. Lots of recent work that we've done on all the signals that your car, your automobile, is sending out about you. It's also now an Internet of Things place that has tons of information that is being broadcast.
I get the impression that there's a certain amount of exhaustion that sets in on the part of the user to try to keep up their vigilance about this. I don't know if you saw, literally just this morning I was reading about WeTransfer. Did you read about that? So WeTransfer is a way that you can send big files over the Internet, right? And so they just sneaked a new paragraph into their agreement that says, oh, by the way, we can train our A.I.s on any file that you transfer using WeTransfer. So it's not even like we can train A.I.s on books or public things; your private conversations that you send, or a video, or whatever is now there. It's all now fodder. Yeah. And so people are outraged by this. But it's some kind of collective action problem. I mean, do you work at the level of government and legislation to try to, say, help people? We do. So faculty from my university. So I'm Dean of the Khoury College of Computer Sciences at Northeastern University, and part of what our faculty will do, under I guess you would call it a sabbatical, is we've had folks working in the White House with the previous administration. We've had faculty working in the Senate and faculty working at DOJ, because it is this intersection of business models, regulation, and technology. And it does feel exhausting, because you think you've got everything set and then some new term of use, you know, sneaks in. So you have to put regulation forward. And that's why I'm very distressed over these conversations around no regulation around A.I., because we have to be able to innovate as quickly as possible. Like, are you kidding me? Because there are so many violations of trust and privacy and security that are possible with these technologies. And we have certainly shown, in decades of innovation up until now, that we have been able to manage that with regulation in place. And we can do so again.
And at the level of the researchers and the people building things, I realize that one of your initiatives at Northeastern has been to push the idea of ethical computing, or at least to train people that there is an ethical dimension, not just a puzzle-solving dimension, to this kind of work. Yes. So when we were talking about human-centered computing, it's not just the psychology and the human factors, it's the social sciences, including ethics and philosophy. And that is important to expose not only to, for example, every single student that comes through our program, but also to foreground in our research. So, one example from my work in aging in place. You know, I joke: we can keep older adults safe, right? You just wrap them in bubble wrap and don't let them do anything. You can create a prison of a home that's perfectly safe, but no one would want to live there. And so there are ethical tradeoffs of risk. There are ethical tradeoffs of autonomy. There are ethical tradeoffs of who's in control of this information and how these services work, because it probably is the kids of an older adult who are setting things up and paying for things. Okay, well, who's the user? Is it the older adult who has this in their home, or is it the person who's paying the bills? And then what about, you know, the insurance companies or the medical industry, right? So there are huge tradeoffs that have to be made. And so having a background of ethical frameworks, and foregrounding that in technology innovation, I feel, should be a requirement of every student studying in this field. Do you find that the students agree with you about that? Are they happy to have this kind of training, or do they think it's a distraction? It has really shifted. When I started teaching ethics and technology, probably in the late 90s, around the turn of the century, the students hated it. They were complete technological determinists. It was: technology is neutral.
It's just what people do with the technology. I just do the coding; that's someone else's problem. And I went and taught other classes, and then came back to this class a decade later. And at this point in time, Facebook has happened and so many things have happened, and the students are like, yeah, no, technology is a hot mess. Give us tools and guidance. And I think that's part of the challenge: what are the tools? Because you can have a heady understanding of this, but then you still sit down to write code, right? And there's a big gap there. And so how do the tools, for example, help you make sense of the data that you're collecting and relying on? And how do they foreground those conversations in the software engineering and design processes? So that gap between avoiding, quote, unintended consequences and actually creating technologies is pretty big. And that's something we work on every day, closing that gap. It's good. I always like to hear that the students are a little bit more aware these days than they were in the past, which makes perfect sense. You know, you're exposed to more things and you discover them. But it's a good segue into the other thing I want to talk about with you, which is: you're talking about research being done at universities, you're talking about training students, and there's a partnership involved between universities and industry and government to do all the different aspects of technological development. And you're, I guess, the lead author on this report from a couple of years ago from the National Academies on how that happens. So before we get into the details, what's your big-picture view of this? Is it a healthy set of relationships between universities, industry, and government? Is it working? Up until a few months ago, yes. Yes. You're going to say... Yeah. So it's a 2020 report.
Long title: Information Technology Innovation: Resurgence, Confluence, and Continuing Impact. And I'm the lead author; I was the chair of the National Academies study. We finished it during the pandemic. Great pandemic project. And it's part of a series of reports that have tackled certain myths out there about how technology has an economic impact in the US. And it really points to this amazing partnership, this ecosystem that has evolved since the 1950s, of universities, the federal government, and industry working together for the really significant technical impact that we've seen over the decades in the United States. And so what are your favorite examples of universities doing research without any obvious technological application or application-centered aims that nevertheless turned into something crucially important? So there are some things that are home runs, right? The report looks at, right, these folks who had this research at Stanford around, you know, kind of the edge of the Internet. And then, you know, a few years later, Google pops out the other end, literally just down the road. Very few things are that clean-cut, but sometimes there is something quite direct. One of the ones that I have enjoyed talking to folks about: most folks would not think of dairy farms as a major area of technical innovation, but they are. The success of dairy farms in the US is strongly tied to this. So back in the 50s to the 70s, there was this new technology called radio frequency identification. It had been used in World War Two: there was a transponder, and the other aircraft would chirp back, and it was friend or foe. Okay, right. So, a wartime use. And somehow, some researchers at Cornell said, this could be useful for cows. And they developed and miniaturized the technology to the point that RFID tags, for a long time, were just known as cow tags, because that turned into the dominant commercial use.
Each cow has its own little RFID tag, and it gets tailored feed based on monitoring of milk production. A very straightforward data science problem at that point. And no one, I'm sure, in the creation of friend-or-foe technology during World War Two thought that these tags would show up, now pervasive, across dairy farms. And that was enough of a push to keep the technology going, because now it's in every single inventory; it's pervasive, ubiquitous. When I used these tags at Xerox PARC, we called them cow tags, because that was the dominant use, and we were coming up with other things we could do with cow tags. And so now you have cows wandering around with these tags, and the milking machines are all... they're just robots, right? That's what they are now. And so again, you have decades of people creating robots that can, you know, open doors or manipulate objects. None of those researchers was thinking about a robot latching onto a cow to milk the cow. But you have all of these investments in the robotic milking machines, and what has been incredibly helpful for the dairy industry, which has had huge labor shortages, is that it can continue to be economically self-sufficient with this very healthy business ecosystem of robotic milking machines. And funny enough, the robotic milking machines rely on the tags: a cow wanders up, the machine knows which cow it is, essentially knows how the cow likes to be milked, and can monitor it, and of course the robots clean and do everything with that. I didn't know this about cows: cows like to be milked about every 12 hours. It's kind of hard to have a labor force... Yeah, one that wants to be there for the 7am milking and the 7pm milking, if you're lucky. At this point, the cows actually just wander in to be milked on their own schedule. Robots take care of it. They do all the monitoring and cleaning. You get better production, safer, faster, all of these things, through cow tags and robots.
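The "straightforward data science problem" she describes, matching RFID-identified cows to tailored feed based on their milk production, might look something like this minimal sketch. All of the tag IDs, yields, and the feed rule below are invented for illustration; real herd-management software is of course far more sophisticated.

```python
# Illustrative sketch (not actual farm software): tailoring feed rations
# per cow using RFID-keyed milk-production records, as a milking robot
# might log them. Tag IDs, yields, and the feed rule are all invented.

def feed_ration_kg(avg_daily_milk_kg: float) -> float:
    """Simple rule: a base ration plus extra feed per kg of milk produced."""
    base, per_kg_milk = 8.0, 0.4
    return base + per_kg_milk * avg_daily_milk_kg

# Milk-yield history keyed by RFID tag (kg per milking day).
milk_log = {
    "tag-0421": [28.1, 27.5, 29.0],   # a high-yield cow
    "tag-0587": [18.3, 17.9, 18.8],   # a lower-yield cow
}

for tag, yields in milk_log.items():
    avg = sum(yields) / len(yields)
    print(f"{tag}: avg {avg:.1f} kg/day -> ration {feed_ration_kg(avg):.1f} kg")
```

The point is not the particular rule but the pipeline: the tag identifies the individual animal, the robot attaches measurements to that identity, and a simple model turns the history into a per-cow decision.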
And who knew that all of those decades of investment in these areas would be a huge boon to the American dairy farm. And it's a good example of two stages of innovation. You had to invent it for whatever purpose, and then someone else had to say, oh, we can use this for a completely different purpose. And that's the magic of universities. Universities sit at these intellectual cross streets, right? Someone is doing something, military funding, I'm sure, and someone else is learning about it in a class. And that university, Cornell, happens to have an extension program that works with local industry, in this case dairies. And so it wanders through the university long enough for someone to say, hey, I could do something with that. And then that becomes cow tags. I was being trained at another university, starting to learn about this idea of ubiquitous computing, but I was really interested in homes and families. So I started taking cow tags and putting them on different objects in the home to see what we could do with them there. So universities are these places where people get exposed to research. They get exposed to concepts in their classes. They work in laboratories that are essentially time machines into the future. And then they graduate and go into all sorts of industries, saying, hey, did you ever think about using cow tags? I almost have difficulty asking the right questions on this kind of topic, because it's so obvious to me that this is important and helpful and useful. But maybe I look out at the world and it's not obvious to everybody else. One of the ways, and I'm partly paraphrasing what you just said, but maybe you can expand on it, one of the ways in which universities are different from industry is that the goal at the university is to create new knowledge and then get credit for it, whereas the goal in industry is to make money.
And so sharing your ideas back and forth is a virtue in one of those contexts and maybe dangerous in the other one. Exactly. And that's why I refer to this report: it really is combating these myths. And so one of the myths is the Silicon Valley garage, right? That innovations pop up when some industry folks get like-minded about something, and then out, poof, comes a new idea. And the report shows that that garage was just the last 10 minutes of years of work. The people in the garage got trained in the universities and got exposed to a whole bunch of crazy ideas. And then they have the know-how and the insight to come together and make a business proposition. And then they have a very narrow, appropriately so, path to follow of bringing something to market, getting a return on that investment, and making a profit. Universities are the opposite of that, right? They are rewarded for, you know, thinking big, thinking long horizons, doing something that is foundational, doing something that is creative. We get rewarded for creating something people hadn't done in the past, right? And we get rewarded on how much other people read about it. It only counts if you get it published and other people are reading your publications and citing your publications. So we have an entirely different economic model of why we do what we do. And universities are strangely highly efficient, even though you don't think of a university as an efficiency-oriented place. Universities are highly efficient because we have these dual business models sitting right next to each other. We have the business model of creative research, which convinces people to work long hours at much lower salaries than they could get in industry, because they're committed to the mission. And then the second business model is education, where people pay us to come hang out and learn things from us and get exposed to new ideas.
So you put these two business models next to each other, and that, since the fifties, has made the modern American university the foundation for massive innovation and economic growth in the country. It's an interesting subject, this dual business model idea. Universities provide education. They also provide research. Those aren't obviously compatible with each other in some sense, right? I mean, it seems to have worked, but sometimes I think, like, I wrote a blog post years ago: the purpose of Harvard is not to educate people. I mean, the students are great. We love having students, but kind of, we're here to get our research done, is the attitude of many faculty members. Do you think that this gluing together of two different missions is long-term sustainable and compatible? I know that increasingly a lot of universities have their tenured faculty doing research, and they teach using adjuncts or lecturers and things like that. And I worry that that's a fissure that is going to grow over time. It has certainly been a set of tensions that universities have had to manage. And I think my part of this comes from my 25 years at Georgia Tech, the Georgia Institute of Technology. And now I've been here three and a half years at Northeastern University. We're very much an R1 university, right? Doing top-tier research with a lot of students, thousands of students coming through. So, you know, Harvard is at the end of one very long tail, right? And other universities that may not be R1, or may be small teaching colleges and such, are at the other end of the spectrum. But the heart of the curve, right, is the research university that is public-facing in its educational mission. And what we've especially seen with the National Science Foundation over the past decade or more has been to lean into that.
So Panch, the previous NSF director, would talk about the missing millions, and really pushed that research investments went out across the United States, not just to the top ten universities. But it was getting out everywhere, because of that magic of educating students in research, bringing those as close together as possible. As he would say, that's where the magic happens. And do we do a good enough job in universities at letting different parts of the faculty talk to each other, letting faculty and students talk to each other? I mean, there's a siloing effect in academia, right, where we're in our own departments and judged by our own criteria sometimes. I think universities have, especially in the past decade or more, really leaned into interdisciplinary research and interdisciplinary education. It is, as John Seely Brown would say, where the white space is, right? Where the interest is from the students. So, for example, at Northeastern University, over 50% of my students are in a combined major. They're majoring in computer science plus psychology or biology or game design or law or ethics. So that's part of what we're seeing in the market: students are more and more interested in how these combine, and universities may be slow, but they do respond to that. It is where the disciplines cross that things get really exciting. So I think most universities have lowered the barriers, or increased the reward function, for interdisciplinary research. I remember at Georgia Tech, my previous institution, we saw the inflection point. Historically, most grants were single-investigator grants, and they were kind of heads-down, domain-focused. And we saw it in the data in the 90s, when it had flipped to multi-investigator, multi-discipline focused.
And so there has been a back and forth between the funding agencies and the research community to move us in that direction. And the other thing I guess universities are really good at is taking time, doing things that don't have an obvious payoff right away, right? The blue-sky, basic research stuff. I mean, I'm spoiled, or at least I'm biased, I suppose I should say, because I'm an early-universe, fundamental-physics physicist and also a philosopher. Nothing I do has any technological application whatsoever. So of course I think this is great. From a more objective point of view, how important is that basic, pie-in-the-sky, blue-sky research aspect of things? It is amazing how incredibly important that is. Part of what we look at in the report, we call it the pattern of resurgence, right? There will be a flurry of activity in an area, and then there's no commercial application, or it falls out of favor. But there's always someone who just continues to work on it, whether it's different vaccine techniques, which we saw during the pandemic, or neural networks, right? This is the heyday of generative AI right now, built on longstanding ideas. There's a reason we refer to AI winters and summers, because sometimes the winters were long and there was very little money from industry, because there was no immediate business application. But those stubborn university researchers just dug in and said, no, no, no. The brain is a network, right? The world is based on networks. These networks will work. And it turned out that we needed enough data and enough computational horsepower to put these things together, to now see what feels almost instantaneous, like it just appeared. What just appeared was decades and decades in the making. The modern exchange of information on the Internet, the foundations of e-commerce, right? Number theory, right?
Just again, people working on the mathematical foundations of information science. They weren't thinking about Amazon, right? Amazon didn't exist. They weren't thinking about any of that, but they were creating the theoretical foundations, which now lead to the secure e-commerce that people use day in and day out. I love that point you made in the report about areas of research that were exciting and then lay fallow for a while and then turned... They were just ahead of their time, right? Mm-hmm. And that sounds great, but then part of me worries: how do you distinguish that from a failed research program whose advocates are just holding on because that's what they've been doing their whole careers? So the good news is that there are natural limiters in the system, right? If you're working in an area and it's just not gaining traction, at some point it's like, all right, I have to keep publishing and I have to be able to attract students, because that's my currency. And people will shift into areas where they can continue to participate in the economic system of the university: publishing and training students. So there are natural forcing functions. An area may go very quiet, and sometimes you just look at it in a different way. Virtualization was a technology that was initially about supporting multiple jobs running on the same machine, back in the era of punch cards, right? And virtualization went fallow for a while, because punch cards disappeared. And at some point a new group at Stanford looked up and said, hey, we would like to be able to share compute for a completely different reason: to run different operating systems. And that very quickly turned into the basis of cloud computing, right?
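The number-theoretic foundations mentioned here are what make modern e-commerce encryption work. A textbook RSA toy shows the idea: modular arithmetic lets anyone encrypt with a public key, while only the holder of the private key can decrypt. The tiny primes below are a classic classroom example and are wildly insecure; real systems use keys thousands of bits long plus padding schemes.

```python
# Toy RSA sketch: number theory underpinning secure e-commerce.
# Deliberately tiny, insecure primes, for illustration only.

p, q = 61, 53                 # two small primes (demo only)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 42                      # a message encoded as a number < n
ciphertext = pow(message, e, n)   # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n) # only the key holder can decrypt
print(recovered == message)       # True: d undoes e modulo n
```

Security rests on the fact that recovering d from (e, n) requires factoring n, which is believed intractable for large n: pure number theory, worked out long before anyone was buying books online.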
But you had to have a university looking at, all right, what's a new way to use this other thing that we have on our bench, and to play with it in a new way, because there was no way that industry would. Virtualization? That's a dead technology now, right? It got reinvented, and it got reinvented because people could pursue those creative ideas before they became commercially viable. I guess there's still this problem, though. This is not a rhetorical question; I truly don't know the answer. How do you decide how to allocate grant money when one proposal is right in the middle of an exciting renaissance and the other one is like, okay, you've been doing this for 20 years and it hasn't paid off yet? I'm in favor of both, but deciding who gets how much is really hard. It is a really hard question, and there is no way one can argue that the system would ever be perfectly optimal, right? You know, the Minority Report of research, where you can forecast ahead of time. So what you do is balance across other criteria. So I talked about Panch and the missing millions: balancing the criteria of making sure that different universities had a healthy amount of research, because that would provide benefits to the state and to their local communities. We have programs that emphasize junior researchers. All of my junior faculty are about to submit their NSF CAREER proposals. This is a program that is just for pre-tenure junior faculty. So we're optimizing because we want to make sure young talent gets funding, and not every one of those gets accepted, by far. They get three shots at it, three years in a row, but it nevertheless prioritizes growing junior researchers. Then we have funding programs that prioritize different types of societal impact.
So my work with older adults is an AI Institute funded by the NSF, but it was chosen because we were very explicit about having an impact for older adults and aging in place. And that's why it was funded. And there are other AI Institutes, for example, looking at agriculture or at education. So at this point, you're managing a portfolio, right? Venture capitalists are managing a portfolio, and they're taking risks. And over time, you understand how to lay down different types of bets and how to make sure you have a mature portfolio. I think that's a very good analogy. But this reminds me, I meant to ask you a little bit ago, because you brought up the 1950s. This system has been going on since the 1950s, but it hasn't been going on since the 1700s, right? The idea that government spends a huge amount of money at universities to do research is not obvious or necessary. It's something we invented in the 20th century. Can you tell us a little about the history of that idea? Yep. Vannevar Bush. I think most people just argue about how to pronounce his name. But Vannevar Bush was a scientific advisor during the war and famously wrote a number of articles coming at the close of World War II. And he was fascinated by how technology had turned out to be so incredibly important to the success of the war: cryptography, targeting, data analytics around weaponry, and such. But he was a renaissance kind of guy, and was interested in how technology could be not just about wars. Its capabilities seemed so powerful; what could it do for the rest of American society? And so he laid down the rules of how this would work. It's really interesting that we're following a rule book from the 50s that said, right, we're going to push this out to universities. Universities are conveniently located around the country. We're very interested in the education of our workforce. And so we're going to put those double missions next to each other.
And then we're going to have basic rules for mission-funded agencies as well as basic science: how we would set national priorities, while staying hands-off enough to let the system work. And especially with NSF, it was baked in from the very beginning that this peer review function, that a bunch of scientists would get together in a room and argue with each other about what was worth investing in, was actually part of the secret sauce. And those discussions, the act of writing proposals and reviewing proposals and arguing about proposals, that is part of the research process. Because you may not get funded the first time with your proposal, but you get really good feedback that helps you think about it differently, and then the proposal gets better. So that's all baked in, much more so than someone coming in and saying, all right, I have this much money, I'm going to give some to Caltech and some to MIT and some to Chicago or CMU or Johns Hopkins, and just go do good things. The peer review process is part of how research gets better. I'm glad you said that out loud, and now we have it on the podcast, because I have this online discussion with people all the time: why don't we switch away from a model where the government pays for everything to private funding? And private funding is great when it's there, but it's so much less reliable. And I think you accurately made the point that it's so much less objective, right? Private funders have their favorite things to fund. It's not like the scientific community is taking a bunch of money and deciding what to do with it, which I think is ultimately a much better model. Now, I hear this especially today, because people point to, well, look how much money industry is investing, right? Surely they can take it from here, in the AI gold rush that we're living through right now.
And I always joke: remember, the people who made money in the gold rush were the people who sold the shovels? So this explains NVIDIA. When there's a gold rush, sell the hardware. Industry is very narrow and very short-term in terms of what it can invest in. And so the money isn't there in terms of the scale. But more importantly, the breadth and the community ownership of what to invest in just need to exist. And one of the things that I always point to is that universities will poke at and embrace questions that industry never will. And that is an important part of the discussion. So, two examples. One is in the AI space. We have folks doing research on all the ways that AI fails, right? How is it leaking information from a privacy perspective? How is it not secure against data-poisoning attacks? How does censorship work in the different models? These are things that even if industry was working on them, they're not publishing them. And if you're not publishing, then the rest of the world can't learn and improve. So there is just an entire space around the public good of these technologies that universities will tackle, which you're never going to convince industry is part of its ROI. And in that public-good space, there are also things that we do that are just not... We live in a capitalist society, right? But we can't view everything through economic ROI and profit. So, for example, universities have a long track record, and I've worked in this space, on technologies for people with disabilities, assistive technologies. These are technologies that enable people who are blind, or who have hearing deficits or mobility challenges, to continue to work and to continue to engage with society in a meaningful way.
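As a minimal illustration of the data-poisoning failure mode mentioned above, here is a toy nearest-centroid classifier whose prediction for a borderline point flips when an attacker injects a couple of mislabeled training points. All data here is invented one-dimensional toy data; real poisoning attacks and defenses are an active research area.

```python
# Toy data-poisoning demo: mislabeled training points shift a simple
# nearest-centroid classifier's decision boundary. All data invented.

def centroid(points):
    """Mean of a list of 1-D training points."""
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    """Assign x to whichever class centroid is nearer."""
    return "A" if abs(x - centroid(class_a)) < abs(x - centroid(class_b)) else "B"

clean_a = [1.0, 2.0, 3.0]   # clean training points, class A (centroid 2.0)
clean_b = [7.0, 8.0, 9.0]   # clean training points, class B (centroid 8.0)
x = 4.8                     # borderline test point, slightly nearer A

print(classify(x, clean_a, clean_b))   # "A" on clean data

# Poisoning: attacker injects A-looking points labeled as B,
# dragging B's centroid toward the test point.
poisoned_b = clean_b + [1.0, 2.0]      # centroid drops from 8.0 to 5.4
print(classify(x, clean_a, poisoned_b))  # flips to "B"
```

The same basic mechanism, corrupting training data to move a model's decisions, scales up to the attacks on large models that the passage describes, which is exactly why publishing such failure analyses matters.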
That research has come out of universities, and with a little bit of regulation, the ADA, Section 508, and others, it has made a profound difference for the lives of millions of people in the United States. That market is never big enough to exist all by itself. But this notion of public-good technologies, and the kinds of research people do, that's also a natural part of what attracts university researchers and fuels their creativity. And there is a huge avalanche of evidence that says when you end up doing something really interesting for someone with special needs, you end up creating benefits for everyone. It works out that way; it takes time, but it's never something you would see coming solely out of industry. So this creativity, this public good, this long-range perspective, and this messy ecosystem of peer review and open discourse, is the magic that they figured out in the 50s, and it has been the American playbook since then. Well, let me ask, in that theme of open discourse: it does seem that there is a place for improvement in the university research system, which is publishing. We have a model where a lot of scientific research is done and then published and immediately put behind a pretty severe paywall that people can't get past. What are your feelings about trying to incentivize researchers to make their own research products a little bit more publicly available? I think researchers very much want to do that. It's the economics of the system that they're in, the journal publishing and such. What we have seen, though, is that those walls are coming down. The pandemic did quite a bit for this, because there was an understanding that this was an all-hands-on-deck global emergency, and we needed to get results out there as quickly as we could.
This pre-publishing, in my field it's arXiv, gets results out there, and people learn from those results on an open platform. You're not going to get the genie back in the bottle after that. What we've seen is more and more researchers moving this way, and so the ACM, the Association for Computing Machinery, the major professional organization for my faculty, is moving to these open models. They're having to change their business model to say, okay, we're going to get revenue in other ways that serve the community. Open and transparent publishing is becoming a value that's more greatly appreciated. I mean, I've certainly heard claims, which are music to my ears, so I believe them, that money spent on university research comes back as economic impact many times over. Am I just hearing what I want to hear, or is that something we can actually accurately measure? No, we can measure it. It gets messy, because, for example, let's go back to my dairy farms. If I try to assign an economic value to affordable, safe milk in the US, then I have to ask, okay, how much does computing have to do with it versus all of the other aspects of dairy farming? That's a complicated equation. The good news is that the scale in almost all of these settings is so dramatic that I have joked I've written the equation: a small number of dollars here, add time, becomes a large number of dollars on the other side. Every time we've looked at this, annual investments are in the low billions, and annual revenue is in the low trillions. The billions-to-trillions ratio is the point. It is strangely economically effective, because of universities working with this dual business model. If you had to take my faculty and just put them in industry to do research, well, most of them wouldn't be willing, but you would have to pay them twice as much, and they would be segmented out into very specific industries, so those insights would not cross-pollinate in the ways that they do.
You do notice that sometimes very clever researchers in industry are willing to make less money and move into academia, just because they have more freedom, both to work on pie-in-the-sky kinds of things and to talk to various people about their research products. Oh, very much. I was that way. I was at Xerox PARC for three years, and Xerox PARC in the 90s competed only with Bell Labs for the title of best research lab in the world. But at some point, I wanted to work on how people interacted with these everyday technologies, and I ran out of ways to twist that into the pretzel of the document company. It just wasn't possible. I love my Xerox friends. I see this with my faculty. We have an amazing faculty member joining us in the fall who's coming from Meta, and she's like, no, I want to go back to the foundations. I want to go back to creative scholarship. At Northeastern, we actually make it easy for our faculty to go back and forth. I have faculty at Google right now. We joke, because our students all do co-ops, that the faculty go do a co-op too. Our faculty will go on leave and work in a company or a startup, do that for a few years, and then come back. The best faculty understand the magic of doing that. They can take research results, and then the rubber hits the road in terms of making them into an actual product. Then they learn a whole bunch of new hard questions, and they come back and work on them with their students. But having the interactions with the students is really key. That dual purpose and that educational mission is just where they thrive. And as much as I love the universities, there's absolutely a place for industry in this ecosystem. I'm a little worried that a lot of universities are very excited about AI right now, but AI is at the shipping-product stage, and it's hard for universities to keep up with the financial resources of the biggest companies doing this work.
Yeah, and universities are very quickly learning; it's like surfing, right? AI is its own tidal wave right now in the current versions of generative AI. And so universities and researchers are already figuring out how to pick up the next wave. The next wave is around smaller models, more efficient models, models that are going to be applied in very specific domains where universities can do these cross-disciplinary partnerships. And my bet is that a large part of the impact of that tidal wave we're watching right now is actually going to be the next generation of these systems. But universities can go ahead and start working on the next wave while industry is riding that really, really tall tidal wave right now. Well, we would hope that is true. I guess for my wind-up last question, you alluded to the fact that all of this is a different conversation today than it was six months ago. What are the prospects that you see? I mean, I know that there's a lot of back and forth in government, where there's a bill proposed and the committees change it, and I have no way of keeping up with what the budgets are right now. But the uncertainty certainly isn't helping anybody. No, it's not. And thank you for probably the most important question, even though it maybe has the most depressing answer. We are really at a dramatic point of failure in the system. One of the versions out there is a 60% cut to the National Science Foundation, including the future of AI and the future of these human-centered technologies. And we're already seeing the immediate impact. People are leaving the country. Is it in droves yet? I'm not sure, but it's significant. And they're not coming back. They're going to set up their research operations somewhere else, and inertia is difficult. They're not going to suddenly flock back to the US. It's not frictionless. Yes. And so we're already seeing the immediate impact.
And to me, it feels like we've got this flourishing forest and we're clear-cutting it with the myth that something else is going to grow, and it's not going to grow. And you don't get a forest back just because an administration changes. So we have been working with this really magical, powerful combination since the 50s, and we're just kneecapping it at a time when, if we look at AI and all of the opportunities and dangers that it poses, this is not the time to kneecap the research ecosystem. We kind of need it more than ever. But we're doing dramatic damage to it unless the American public stands up and says, no, we have to keep investing in the future. And that's what the federal investments in research are all about. When you talk to people in legislatures and bureaucracies and institutions, are they sympathetic to the message? Are they getting it? Have we done an insufficiently good job of making these points as strongly as we could? I think we're getting better, but it's been insufficient, because we've been running on our own success for a little while. That's part of why I talk about losing talent, because that is a message that rings true in Washington. It's also part of why I talk about cows, because most folks, when they hear computer science research, they think Meta, Google, Amazon, and they're fine; why do we need to put money into them? And I'm like, no, it's actually about farms and healthcare and safe automobiles and preventing cancer deaths. That's where we're working, and that's where those federal investment dollars are going. And I think we haven't done a good enough job of pointing to that ubiquitous impact that we have. Well, I feel the need after that to at least give you a chance to end on an optimistic note, which I always like to do. You've been optimistic throughout the whole podcast, so it feels a little bit unfair. But what are you most excited about going forward in these partnerships?
So I love, I've been dean here at Northeastern for three and a half years, and I just adore working with my faculty and our students. University researchers are just so courageous. And I may lose a couple to international competition, I hope not. But what I'm really hearing is they come to me and say, Beth, you hired me because this is the difference I want to make in the world, right? Help me be able to make that difference despite everything that is going on. So their commitment and passion is there. The insights and the work that they're doing are amazing. And this is, you know, not just my faculty; this is replicated all over the US, right? So the talent and the passion and the willpower are there. We just have to not get in our own way and kneecap ourselves. It's such a critical time. You know, I gave you the opportunity to say something optimistic, and I think you did a great job at it. Elizabeth, thank you so much for being on the Mindscape podcast. Really enjoyed it. Thank you.