
How to Fund Your Business Without Debt Part 2 | Interview with Robert Murphy

Part 2 of the interview with Robert Murphy on ways to fund your startup or business


Thumbnail: Nicolae Cretu, Felicia Moraru, and Robert Murphy

Dr. Robert P. Murphy is a leading American economist in the tradition of the Austrian School, who holds his PhD in economics from New York University. He has taught at Hillsdale College and Texas Tech. Dr. Murphy has authored several economics books for the layperson, while his technical papers concentrate on money, banking, capital, and interest theory.

He’s currently part of the Infineo team as Chief Economist. In this interview, we discussed the economics of whole life insurance and how it could apply to founders, business people, and professionals.


Dr. Murphy has also done interviews with notable public figures, such as Jordan Peterson, and now with us, Felicia and Nick from Founders Project, about implications and insights from economics. He is also the host of the Bob Murphy Show.

Enjoy!

We first met Dr. Murphy at the Mises Institute in Auburn, Alabama, in the summer of 2022. We were deeply engaged in studying Austrian Economics, and since then, we have specifically explored how it applies to finance and entrepreneurship. Needless to say, we were very excited to have this interview.

Part 2:

Actually, most people don't appreciate GPT-4 enough. We've interviewed founders who are trying to harness the power of AI in legacy, boring, unsexy industries such as naval shipping, because wherever you have massive unstructured data that needs to be structured, AI can already do that. It's basically a massive productivity boost. If we're talking about selling software, for example, there's essentially a human employee whose productivity a piece of software improves by 1,000 to 3,000 percent. A lot of startups are, at this point, trying to sell not just software but the workforce itself, and we've seen real progress in industries like sales and SEO, where people are trying to create a digital workforce. What we're afraid of is that, as usually happens, politicians will misinterpret the short-term results (say, a loss of jobs). And, at least in the EU, they're already trying to restrict AI and make it unusable for the average citizen, available only to large enterprise companies.
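To make the "structuring unstructured data" point concrete, here is a minimal sketch of the kind of extraction step such founders are building. The `call_llm` helper is a hypothetical stand-in for whatever model API you use; the function name, prompt, and fields are illustrative, not from any specific startup:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (hosted model, local model, etc.)."""
    raise NotImplementedError("wire this up to your model provider")

def extract_shipping_record(free_text: str) -> dict:
    """Turn an unstructured shipping note into a structured record."""
    prompt = (
        "Extract the vessel name, origin port, destination port, and cargo "
        "description from the text below. Respond with JSON only, using the "
        'keys "vessel", "origin", "destination", "cargo".\n\n' + free_text
    )
    return json.loads(call_llm(prompt))  # structured row, ready for a database

# Usage: extract_shipping_record("MV Aurora departed Rotterdam for Singapore "
#                                "on May 3 carrying 4,000 TEU of containers...")
```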

Do you think there's any hint of that happening in the US? Let's say, at some point, unemployment has drastically increased, and, obviously, people don't know about Austrian Business Cycle Theory (ABCT), or the difference between the long run and the short run, or they haven't read "Economics in One Lesson." Do you think there's any chance they could just say, ‘Oh, guys, we're banning AI from this point on’?

Yes, and you raise a lot of good points there. So, to be clear, I'm not worried about the "oh, the AI is going to steal our jobs" kind of thing. I have a whole episode on the Bob Murphy Show podcast where I went through some of the arguments for that. In general, technological progress, yes, quote, "saves labor" or "frees up labor" in one sector, but that just allows those workers to go do something else, and it allows for increasing output. Yes, we can grow more food per worker in the agricultural sector now than we could in the year 1900, but that's a good thing. That just means those workers could go into manufacturing, and later into computer design, software engineering, and whatnot. So likewise here. Also, just to flip it around: are you guys familiar with the cartoon The Jetsons, with George Jetson flying around? Does that mean anything to you? Okay, so how are we going to get to that kind of world? In general, when people show depictions of flying cars and futuristic stuff, like 'oh, if Ron Paul had been elected,' AI is going to be a necessary component to get to that realm, to any of those futures where things look really good and prosperous. You know, on Star Trek, people are talking to the computer, "Make me a cup of tea," and the computer is able to have a conversation with you.

So, I'm saying that's what is going to get us there. But you're right. What could happen is, suppose I'm right in my views about the United States, for example, and there is a bad recession that kicks in in 2024 because of the past monetary and fiscal decisions the authorities made, while AI is innovating and causing unemployment in certain sectors. You're right, people might look at that and think, "Oh, the economy is awful, unemployment is 12 percent." I would say that's because of the bad monetary policy and whatnot, but they might blame it on "Oh, it's because the AI is stealing everyone's jobs," and then there could be legislation to try to get rid of it. Like Tucker Carlson had this famous exchange, I think it was with Ben Shapiro, where they brought up self-driving cars and trucks, and Tucker Carlson said he would ban them in the U.S. And unfortunately, you can understand why; people recoil at "Hey, let these computer programs just start taking over everything."

It's like saying, 'Hey, didn't you watch Terminator? Don't you know that's a bad idea?' So I get why that would be a natural inclination. Even a lot of people who understand the Luddite fallacy in other realms, who realize that coming up with better tools so that one person can do the job five people used to do is progress, think there's something qualitatively different here, especially if it gets to the point where there's a robot body they can put these things in. So, what if it's really smarter and better at everything and stronger than us physically? But again, going back to that: if Einstein is born, does that help humanity or hurt us? I think we'd say it helps, right? And if Superman showed up, is that bad for humanity, the fact that there's someone among us who is stronger than a thousand people put together? No, it helps us. So again, why would this thing growing up among us hurt us? Now, if it's put to nefarious purposes, then, you know, I can imagine a dystopian future where different billionaires have their own armies of robots and they break the law and do all sorts of crimes. That's not good. But the issue there isn't 'economic competition'; it's the fact that they're stealing from you. That would be bad.

I would argue that the average successful startup founder, the Silicon Valley mogul, has the same understanding of economics as the average truck driver. And not many people care about the seen and the unseen… I don’t know if you've read Sam Altman's article about labor; basically, he makes the case that AI is going to replace most of the human workforce, that the cost of things will therefore go toward zero, and that we'll have to implement universal basic income to mitigate that. So they're anticipating the problem and trying to come up with this solution. And I've seen that many, let's say, free market-oriented economists also support universal basic income. What's your view on UBI?

So, yeah, a few things here. I know what you mean, and that's kind of what I alluded to a minute ago. I'm not saying this is necessarily a nefarious plan, that these billionaires are behind closed doors, stroking their black cats and smoking cigars. Although some of them might be; I'm not saying that isn't what's going on either. I'm just saying I technically don't know, and my statement doesn't require you to believe one theory or the other about their motivations.

Yet consider the idea of heading toward a future where a few humans are multi-multi-trillionaires or even quadrillionaires, and they control all the factories, own all the farmland, and command armies of robots that do all the physical work, while the mass of humanity just sits in very small rooms in apartment buildings with their virtual reality on, and they don't work.

They're just completely supported by the state, and they just live in their VR world. And that's why they don't revolt because they're being occupied in there, and, "Oh wow, I'm going to go explore another galaxy, and we're going to have these romps with movie stars and whatnot, and do all kinds of debauchery," and that's how we'll be pacified. To me, that's horrifying. Like, that's Mark of the Beast kind of stuff. And it does seem like some trends are trying to pull us in that direction. 

So, yeah, the underlying economics is wrong. It's not true that humans will have nothing to contribute in a world where AI becomes more powerful than humans, even if AGI becomes a thing. It just goes back to standard economic results.

It's a standard result in the free trade literature. What if your country is better at everything than some other country? You might think it doesn't make sense to trade with them. But the standard result shows that, no, you still have comparative advantage in the things you're relatively best at, and the other country should specialize in the things where, yes, it's not as good as you, but your margin of superiority is smallest, and both countries still gain from trade. Likewise, even if you look at humanity and AI systems as different countries, AI would still benefit from our existence. Humanity would still specialize in the areas where our inferiority was smallest, and that would make sense not just from our point of view, but from the AI's as well. And that's also why AI wouldn't wipe us out. It would still see us as the most productive entities besides itself. Why would it eliminate us when it's more efficient to keep us around doing things that free the AI to focus on what it's even better at, right?

And so, it saves them time. They don't have to resort to these primitive techniques or whatever. So, I'm not saying that's definitively what will happen, but I'm just saying even in a worst-case scenario, where you're worried that they're just going to look at us as useless parasites and wipe us out, that's not what would happen. That's not what the economics indicates. 

It's like if you're a brilliant software engineer and also the best in the world at ironing your clothes, it makes sense to hire somebody to iron them for you, even though they're worse at it than you.

Yeah, you're right. Another example: Michael Jordan, when he was in his prime, could probably cut his lawn faster than the kid down the street. But obviously, it made more sense for him to hire the kid to cut his lawn so he could go practice shooting three-pointers. Or the lawyer who's a very fast typist still benefits by hiring somebody to type things up while the lawyer focuses on legal strategy.
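Dr. Murphy's comparative-advantage point can be checked with a few lines of arithmetic. A minimal Ricardo-style sketch with made-up numbers (the goods, hour costs, and time budgets are our illustrative assumptions): an AI that is better at both tasks still gains by leaving paperwork to the human.

```python
# Illustrative Ricardo-style arithmetic; all numbers are assumptions.
# Hours of work needed to produce one unit of each good:
hours_per_unit = {
    "AI":    {"design": 1, "paperwork": 1},  # better at everything...
    "human": {"design": 4, "paperwork": 2},  # ...but least inferior at paperwork
}
BUDGET = 40  # hours available to each producer

# No trade: each producer splits its hours evenly between the two goods.
autarky = {
    p: {good: (BUDGET / 2) / cost for good, cost in costs.items()}
    for p, costs in hours_per_unit.items()
}
total_paperwork = sum(out["paperwork"] for out in autarky.values())  # 30.0 units
total_design = sum(out["design"] for out in autarky.values())        # 25.0 units

# With trade: the human specializes fully in paperwork (40 h / 2 = 20 units);
# the AI tops up the remaining 10 units (10 h) and designs with the other 30 h.
human_paperwork = BUDGET / hours_per_unit["human"]["paperwork"]
ai_hours_on_paperwork = (total_paperwork - human_paperwork) * hours_per_unit["AI"]["paperwork"]
design_with_trade = (BUDGET - ai_hours_on_paperwork) / hours_per_unit["AI"]["design"]

print(total_design, design_with_trade)  # 25.0 vs 30.0
```

Both sides end up with the same paperwork output and five extra units of design; that surplus is the gain from trade he describes.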

And still, we're wary of the argument that UBI (Universal Basic Income) is “good” because it would replace traditional welfare and be less costly, and so on and so forth. It's a very compelling argument that resonates with the median voter, the average voter.

Right. Yeah. I know even self-described libertarian economists who favor UBI precisely on those grounds, because they say, “Yeah, it's not an add-on; I want this to replace all the existing systems.” And if that were the only switch, yeah, the incentive effects would be better, right?

If you're just getting a flat, whatever, $30,000 a year from the government regardless of how much you work, as opposed to getting all kinds of means-tested things, “oh, if you have a bunch of little kids and your income is really low, then the government will send you this support.” The problem with the means-tested approach is that it takes away your incentive to go earn more money, because then the government scales down your support. So, on the margin, it's like you face a really high effective tax rate.
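That effective-tax-rate point is easy to see with stylized numbers. A minimal sketch, using the $30,000 figure from the conversation and an invented 50 percent phase-out rate (the rate is our assumption, not any real program's):

```python
# Stylized comparison of a means-tested benefit vs a flat UBI.

def net_income_means_tested(earnings: float) -> float:
    """A $30,000 benefit that phases out 50 cents per dollar earned."""
    benefit = max(0.0, 30_000 - 0.5 * earnings)
    return earnings + benefit

def net_income_ubi(earnings: float) -> float:
    """A flat $30,000 paid regardless of earnings."""
    return earnings + 30_000

for earn in (0, 10_000, 20_000):
    mt = net_income_means_tested(earn + 1_000) - net_income_means_tested(earn)
    ubi = net_income_ubi(earn + 1_000) - net_income_ubi(earn)
    print(f"at ${earn}: an extra $1,000 earned keeps ${mt:.0f} (means-tested) vs ${ubi:.0f} (UBI)")

# Under the phase-out, each extra $1,000 earned raises net income by only $500,
# an effective 50% marginal tax rate; under the flat UBI you keep the full $1,000.
```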

And so, UBI gets rid of that. But the problem is, I don't think they would actually do that. Like, let's say they got rid of all the means-tested support, food stamps, and all that stuff, and they just said, “everybody gets a flat X number of dollars a year that a standard person can live on”. And then what if that guy goes and gambles it away in Las Vegas? And so now he's going to starve to death. I don't think society is going to let him starve to death, especially if the person has kids. I think they're going to say, “oh, well, here, we'll give you some more money”. 

Also, there are all these bureaucracies that have been created that are directly dependent on these particular programs, with the money flowing through them. I don't think you're going to be able to just wash them all away, lay all those people off, and say, “No, we don't need you anymore because now we have UBI; there's no administrative overhead, everyone just gets a check deposited in their checking account.” All these thousands and thousands of people who have worked for the government their whole careers, now they're laid off? I don't think that's realistic; I don't think it's going to happen.

So, I think you're going to get the worst of both worlds, where we'll have this huge extra expenditure and you'll still have the other programs in place. And with all of this, again, particular people might favor it, but to me, it's like the analogy I used recently: suppose you knew some couple who were pretty well off, say they had $50 million because they started a company that did really well, and they had a bunch of young kids. If they always told those kids growing up, "Hey, you never need to work a day in your life. Don't worry, once we retire, we'll just give you a bunch of money. You can just live, and you don't even need to go to school if you don't want to," I would think that was poor parenting. And so, to me, that's one of the insidious effects of this. I think it would be very destructive and harmful to tell the population, "You never need to work. You can just put your virtual reality goggles on and consume your whole life, and we'll take care of you. That's fine." To me, that's not fine.

Still, we're seeing attempts, at least in the European Union. And it resonates quite well with the average voter, especially in the context of CBDCs (Central Bank Digital Currencies). Do you think it's realistic that there will be such a policy proposal (UBI) in the US political sphere from some political candidate in the future? Let's say all the truckers are replaced within a month because there's now distinctive technology that allows for self-driving trucks. Do you think a new political candidate will express these views and try to target both the left and the right?

Oh, yeah. I mean, there are already small-scale experiments in the US where that has been implemented at the local level. Off the top of my head, I don't remember where they are, but I think there's a place in California where, in some cases, it's a private venture running the experiment, which, as far as my libertarian views go, they're allowed to do; it's not illegal, I just don't want them imposing it through coercion on everybody else. So, yes, more and more people are talking about it. And it would not surprise me at all if, within the next few years, especially, like I say, if there is a bad crash and unemployment goes up at the same time that AI is making inroads into more and more sectors, a natural conclusion, especially from populist candidates, is going to be, "Hey, it's us versus them."

In other words, if you understand how, in times of crisis, you can whip people up to be against foreigners, how much easier is it going to be to get them against AI? So that's certainly restrictions on what AI is allowed to do, to go back to our earlier thing.

But then, also to say, "We need to have this UBI because, at this point, humans are superfluous. As these machines get better and better, what can humans even do? There's going to be a revolt if we don't do something. So, let's pass it.” But that's also why I think it's insidious, because I do think it's ultimately an opiate-of-the-masses kind of thing, just defusing the situation: "Oh, let's just give them money so they're not mad."

And then people will just sit in their little cubicles in their virtual reality worlds, and they won't want to overturn the system. That would be how the few people in control maintain their power: as long as the bulk of the population gets enough calories and enough entertainment, they're going to be content while the machines actually build everything.

"And we own all the farmland now, so we control all the actual food," and blah, blah, blah. To me, that's a horrifying future that some people are trying to push us towards, whether or not they realize it's horrifying.

I think people also don't understand that, whatever happens, there's no scenario in which there will be no need for human input at all.

That's what I was trying to get at. To me, the better movie depiction of what AI is going to look like 20 years from now isn't the Terminator movies. It's the Marvel movies: Tony Stark in his residence designing cool stuff with, is it called Jarvis, I think? His AI system that helps him do stuff. You know what I mean? Tony Stark is incredible, but coupled with the AI that he built, he's way more productive. It's not that the system goes rogue and decides to eliminate him; no, it's still a tool that he uses. And I'm saying that's what I think it's going to end up looking like.

And again, you can come up with a scenario. I've read a lot of the literature, not all of it, saying, "No, no, no, once these things become self-aware, they're going to take off and turn on us." You know, maybe. But what's funny is that even in their own warnings, it's not that the machines disobey their programming; rather, the scenarios are about what we tell them to do turning out "oops," where the way they implement it involves deciding that we're superfluous or something. I think that's even what happened in the Terminator movies: they were programmed to maintain security around the world or something, and they decided the one variable they couldn't control was human beings, so if they just got rid of them, everything would be tranquil and they could achieve peace. Well, that is, unless AI version 7.0 invents a time machine.

Do you think without AI, we would have witnessed a crash this year? 

So my hunch is no. Certainly if you're talking about large language models, I don't think ChatGPT and the other large language models by themselves have had a big enough impact yet to have affected the macroeconomic forces.

Although, AI is not merely large language models; there are all kinds of innovations in other areas. That's something we didn't even touch on, but when people say, "What's the big deal? What's it going to do?": right now, I believe they're recording all kinds of complicated surgeries, just getting the video footage, and then having AI systems train on thousands of hours of the world's best surgeons, heart surgeons and brain surgeons, doing their procedures. Once it gets to the point where they can design robotic arms that can hold a scalpel and be as dexterous as a human hand, and couple that with the knowledge of what to do when you cut somebody open and see what you see, that's going to be a game-changer. Even if it's regulated in Europe and the United States, there's going to be some country somewhere that lets those things be legal, and then everybody who's got a condition is going to fly there and get pretty good surgery done at one-tenth the cost. So anyway, I think things like that are going to be happening in the near future.
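What Dr. Murphy is describing is, in machine-learning terms, imitation learning: training a policy to reproduce expert actions from recorded demonstrations. Here is a minimal behavior-cloning sketch in PyTorch, with invented shapes and random stand-in data; it makes no claim about how any real surgical-robotics system is built.

```python
import torch
import torch.nn as nn

class SurgeonPolicy(nn.Module):
    """Map a video frame to an instrument command (behavior cloning)."""
    def __init__(self, action_dim: int = 7):  # 7-DoF arm command is an assumption
        super().__init__()
        self.encoder = nn.Sequential(          # tiny stand-in for a video encoder
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

policy = SurgeonPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Random tensors standing in for (frame, expert_action) pairs that would come
# from thousands of hours of recorded procedures.
frames = torch.randn(8, 3, 128, 128)
expert_actions = torch.randn(8, 7)

loss = nn.functional.mse_loss(policy(frames), expert_actions)  # imitate the expert
loss.backward()
optimizer.step()
```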

But to answer your question: no, AI is not the reason, for those of us who thought a crash was coming in 2023, and I was one of those people, looking at the yield curve and whatever. I think it's going to hit in 2024. But yes, right now the economy technically looks better than what I would have predicted two years ago.

I didn't think it would still look like this, but AI is not my excuse. I think there's some other reason why that's the case; it wouldn't be that AI has already bailed us out.

And the system is still very fragile.

When you say, do you mean like the economic system?

Yeah, the economic system.

Oh, right, yeah, I think it's, you know, whatever metaphor you want to use, a house of cards or whatnot, that, yeah, I think we're sitting on a lot of big imbalances and that a crisis is brewing. 

And among other reasons: a lot of the metrics people use to say, "We've achieved a soft landing. Look, we brought price inflation down, and unemployment is low, so we're out of the woods, good job," a lot of those same metrics looked fine back in late 2006 and early 2007. And there were people at the time in the US, including Federal Reserve officials, saying we had achieved a soft landing. Obviously, they were wrong; we were just getting ready for the global financial crisis. Likewise here, I think it's going to hit in 2024. Maybe I'm wrong on the timing, and I'm not just trying to be a permabear. I'm not saying, "Oh, at some point between now and the end of time, there's going to be another recession, and then I'll say, see, I told you." If it doesn't happen in 2024, then I'm going to have to reevaluate and ask, "Did I just miss something?" But yeah, I think another crisis is coming, and it's going to be bad. And at that point, a lot of people are going to say, "What the heck were we thinking? Look at what central banks did during COVID. How could we think that wasn't going to have a huge impact at some point?"

Yeah, a good analogy in this case is Mark Spitznagel's analogy of the dry forest. Have you heard of it? 

Oh, yeah, I actually helped edit that book. He mentioned it, so it's not like I'm revealing some secret; I asked him, "Can I say that?" and he said, "You're in the acknowledgments, go ahead." But yeah, for your listeners who don't know, this is true. It's not merely a metaphor; it's actual reality that, for a while, the US Forest Service had a policy of stamping out any fires. And that would seem to make sense, right? You'd think, "Oh, we don't want fires burning down the forest, so anytime there's a little fire, let's go put it out right away."

The problem is, they didn't realize until later, after things happened and they rethought it, that those little fires performed a valuable service in maintaining the integrity and stability of the whole ecosystem. I don't know enough about the specifics to use a forester's terminology, but they would burn up the brush and the other little things so that there couldn't be a huge inferno. Whereas when they kept putting out those little fires, that allowed kindling to build up, such that later, when something broke out that they couldn't zap right away and it started spreading, it became unstoppable, because for years they had been neutering the system's own internal means of preventing a massive inferno.

So that was his analogy for the financial sector: the Fed would come in any time, "Oh, there are some bank failures, let's bail everybody out and reassure the public: your money's fine, don't pull your money out of the banks, we don't want a crisis." Doing that year after year allows the whole system to become more levered and more fragile, such that if and when something breaks out that the Fed can't directly contain, it's going to be a huge, humongous crisis. Whereas if they had taken a more laissez-faire approach early on and said, "No, if some bank gets into trouble, let it go down," that would teach people a lesson to be careful about what their bank invests in.

How do you envision the future for your kids in this economy? 

So, I think what we're going to see, and I don't want to put a timeline on it, because I'm sure some things will be quicker than I think and some will take longer, but let's say by the year 2050, I believe the financial sector of the globe will consist of large organizations that are effectively combinations of humans and AI engines collectively making decisions for the organization. They're going to be battling it out on blockchains and things, you know what I mean? Bitcoin is one obvious example: as all these different AI systems get more complex, their mining capability is going to go up, and then, of course, the Bitcoin protocol will adjust its difficulty and make the problems harder to solve for subsequent blocks. So you're just going to see these battles. I don't think nation-states are going to be as big a deal in the year 2050. There's going to be the rise of this new kind of entity that we might still call the corporation, if they even keep using that term, and the global financial landscape is going to be more about the coalitions and strategic interactions of these entities. In a sense, it's just like right now when you say, "Oh, Google made this decision" or "Xerox made this decision": it's kind of shorthand, and you ask, "What do you mean by that?"
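For readers curious about the mining mechanics he alludes to: Bitcoin's difficulty retargeting rescales the proof-of-work target every 2016 blocks so that blocks keep arriving roughly every ten minutes no matter how much hash power shows up. A simplified sketch of that rule (real node software operates on the compact "bits" target encoding and has more edge cases):

```python
# Simplified sketch of Bitcoin's difficulty retargeting rule.

TARGET_SPACING = 10 * 60            # ten minutes per block, in seconds
RETARGET_INTERVAL = 2016            # blocks between difficulty adjustments
EXPECTED_TIMESPAN = TARGET_SPACING * RETARGET_INTERVAL  # about two weeks

def retarget(old_target: int, actual_timespan: int) -> int:
    """Rescale the proof-of-work target after each 2016-block window.

    A lower target means a harder puzzle. If miners found the last 2016
    blocks too fast (more hash power), the target shrinks; too slow, it grows.
    The adjustment is clamped to a factor of four per window, as in Bitcoin.
    """
    actual_timespan = max(EXPECTED_TIMESPAN // 4,
                          min(actual_timespan, EXPECTED_TIMESPAN * 4))
    return old_target * actual_timespan // EXPECTED_TIMESPAN

# Example: hash power doubled, so the window took one week instead of two;
# the new target halves, making the next blocks twice as hard to find.
new = retarget(old_target=2**220, actual_timespan=EXPECTED_TIMESPAN // 2)
assert new == 2**219
```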

You say, "Oh well, you know, they have a CEO." Oh, so the CEO just does whatever he wants? No, he answers to the board of directors and, ultimately, the shareholders. So, it's a complicated thing when you say a company did something, you know. But, and I'm saying, so now as it goes on, if you know, GPT-12, when that thing is out, certainly any major business decision they're going to run through and have simulations and all kinds of scenarios and forecasting that the GPT system will help them with. And so, whatever decision pops out of that process, you could say, "Oh, did the humans all decide?" And I think, at some point, you're going to say, "Well, the GPT played a role in that decision too, especially when the thing's talking to you and it seems like it's alive." We don't need to get metaphysical about whether it is really alive or not; it simulates life, okay?

So, at that point, that's kind of what I'm saying: they're going to merge, in that sense. I don't mean people are going to plug stuff into their brains; they might, I don't know, that's kind of scary, but some probably will, because of the temptation of power or whatever.

But anyway, when I'm saying they're merging, I just mean when we on the outside try to say, "What is guiding the decisions of these organizations?" It's going to be this complex interaction of human and artificial intelligence engines collaborating and making the decision.

And again, I think more and more things are going to be digital by then. There's going to be this whole new plane of existence where, on the consumer side, people are going to be "living in their fantasy worlds" and doing stuff in VR, but even financial stuff is mostly going to be immaterial.

Whether it's on a blockchain or just digital assets that aren't on a blockchain per se, I think a lot of things are going to be digital. And in that realm, computing power is going to be the measure of strength. It's not going to be who has the most nuclear weapons; it's going to be who's got the fastest processing speed and things like that.

Yeah, that's actually true. People are starting to invest again in cloud computing machines and servers. If you want intelligent software, you also need to have a lot of hardware. We've realized that.

Thank you for the interview, for your insights, and for accepting our invitation.

Oh, sure thing. And if I could just put in a plug: if people are interested in this stuff, go to www.infineo.ai; we've written lots of material there on the things we've talked about.

And also, you have the book you co-authored, on the infinite banking concept…

Oh, yeah. That was when I was with my previous organization, the Nelson Nash Institute. So, yes, I have a book called "The Case for IBC" that I co-authored with Carlos Lara and Nelson Nash.

Thank you. We had been looking forward to this interview for the past year, ever since the Mises Institute. So, thank you so much.

Okay, thank you, and thanks for your perseverance. I know that it was tough to pin me down, Nick, but you got me.

Click here for Part 1 of the interview.

Full Video Interview on YouTube:

Learn More About Dr. Murphy’s work:

  • Chaos Theory (2002)

  • The Politically Incorrect Guide to Capitalism (2007)

  • The Politically Incorrect Guide to the Great Depression and the New Deal (2009)

  • How Privatized Banking Really Works – Integrating Austrian Economics with the Infinite Banking Concept (2010), co-written with L. Carlos Lara

  • Lessons for the Young Economist (2010)

  • Economic Principles for Prosperity (2014), co-authored with Jason Clemens, Milagros Palacios, and Niels Veldhuis

  • The Primal Prescription (2015), co-authored with Doug McGuff, MD

  • Choice: Cooperation, Enterprise, and Human Action (2015)

  • Contra Krugman: Smashing the Errors of America's Most Famous Keynesian (2018), with a foreword by Ron Paul and a preface by Thomas E. Woods

  • Understanding Money Mechanics (2021)

If you enjoyed the interview, consider subscribing to the Founders Project weekly newsletter:
