The Government Shutdown & Healthcare's Increasing Costs

With the government shut down over the Democrats' attempt to extend the COVID-era Affordable Care Act (ACA) health insurance subsidies, a lot of people have been questioning the health insurance system in this country. For me, this debate has surfaced a couple of important points that aren't often talked about.

For context, we're not talking about Medicaid patients, Medicare patients, or patients who receive healthcare through their employer. These are patients who don't have access to those programs and decide to buy insurance on the public ACA exchanges. Once a year, these people (around 24 million of them) go to the exchanges and choose a plan that works for them. Then, up to a certain income threshold, the government subsidizes some of this expense. The Democrats expanded the size of those subsidies as part of emergency legislation during COVID that expires at the end of this year. The challenge is that the cost of this healthcare has gone up significantly since COVID, making these plans even more difficult to afford. The Democrats are refusing to fund the government unless the Republicans agree to extend the COVID-era subsidies.

How insurance works

To better understand what's happening, let's level-set on what health insurance is for a moment. Health insurance, like any other insurance, works pretty simply. Health insurers pool money from a group of people to cover the costs incurred by those receiving healthcare services. Everyone pays into the pool, and when someone has a healthcare need, the insurer pays all or part of the cost. The key driver of any insurance pool is the actuary, who analyzes the risk associated with the pool of people and sets an annual premium that each person must pay into the pool. Setting this premium can be tricky because you're optimizing for three things: 1/ ensuring the premium is high enough that the insurer can cover the pool's healthcare expenses, 2/ ensuring the premium is affordable enough that lots of healthy people participate in the pool, and 3/ ensuring there's enough left over for the insurer to cover the costs of running the health plan (this last amount is capped at 15%-20%). While it's not easy, an actuary can fairly reliably set premiums while managing those tradeoffs.
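
To make the actuary's balancing act concrete, here's a minimal sketch in Python. Every number in it is made up for illustration except the 15%-20% overhead cap mentioned above:

```python
# A toy premium calculation for a single pool, with made-up numbers.
# Real actuarial pricing is far more sophisticated; this only shows the
# core constraint: premiums must cover expected claims plus a capped
# overhead margin (constraints 1/ and 3/ above).
expected_claims_per_member = 6_000  # projected average annual cost of care
admin_load = 0.15                   # insurer overhead/profit, capped at ~15-20%

# The break-even premium covers claims plus the insurer's allowed overhead.
annual_premium = expected_claims_per_member / (1 - admin_load)
print(f"Annual premium per member: ${annual_premium:,.0f}")  # ~$7,059
```

Constraints 1/ and 3/ are mechanical; constraint 2/ is the hard one, because if this premium scares off healthy people, expected claims per member rise and the calculation has to start over. That feedback loop is exactly where the next section picks up.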

Preexisting conditions without a mandate 

But there's a big wrinkle with ACA plans: they must cover preexisting conditions*, and patients can sign up during the ~3-month open enrollment period each year, no questions asked. This radically changes the incentives for the typical patient. Many young people aren't getting regular healthcare services, so they really only need insurance in the event of some kind of catastrophic illness or event. So, rationally, many patients who are healthy and don't need healthcare services aren't joining the pool (why bother paying a premium when you can just wait until you get sick and start paying then?). Meanwhile, the patients who do require lots of healthcare services are happy to sign up and pay the premium to make sure they don't have to pay for services out of pocket. The result, of course, is that the insurance pool is disproportionately made up of sicker patients. So the actuary has to set the premium higher to cover all of the costs. That makes the plans even less attractive to a healthy patient, and the situation gets worse. The initial spirit of the ACA was that it would cover preexisting conditions only if there was a mandate for everyone in this group to buy insurance (this would have forced healthier patients into the pool, lowering the cost for everyone). That was the deal. The insurers would cover preexisting conditions as long as there was a mandate. That mandate was in place until 2017, when the Republican Congress effectively abolished it by zeroing out its penalty, spiking the cost of the plans (though some states still enforce mandates of their own).
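
Here's a toy simulation of that spiral, a minimal sketch with entirely made-up numbers and a crude "is coverage worth it?" rule of my own invention:

```python
# A toy adverse-selection spiral: members drop coverage when the premium
# stops being "worth it" relative to their expected claims.
import random

random.seed(0)

# Expected annual claims per member, skewed: most people are cheap, a few are costly.
members = [random.expovariate(1 / 4_000) for _ in range(100_000)]

admin_load = 0.15
premium = (sum(members) / len(members)) / (1 - admin_load)  # break-even for the full pool

for year in range(1, 7):
    # Healthy members leave first: stay only if expected claims are at
    # least 30% of the premium (a crude "worth it" threshold).
    members = [m for m in members if m >= 0.3 * premium]
    if not members:
        print(f"Year {year}: the pool has collapsed")
        break
    premium = (sum(members) / len(members)) / (1 - admin_load)  # reprice to break even
    print(f"Year {year}: {len(members):>7,} members remain, premium ${premium:,.0f}")
```

Each repricing pushes out the next-healthiest tier, so membership shrinks and the premium ratchets up year after year before settling at a smaller, sicker, pricier pool. That's exactly the dynamic the mandate was meant to prevent.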

Requiring people to have health insurance is obviously a thorny topic. Critics will say it infringes on personal freedoms, and I suppose it does. The problem with this argument is that we're already infringing on personal freedoms. The Emergency Medical Treatment and Labor Act (EMTALA) requires US hospitals to provide stabilizing treatment to any patient regardless of ability to pay. That cost gets passed on to every other patient in the form of higher hospital bills, and then on to the insurance pool when that patient, who now has a preexisting condition, finally decides it's worth signing up for insurance. That's an infringement of its own.

So the result of all of this is that the ACA plans are fundamentally broken (sick patients are incentivized to join the pool while healthy patients are not). Congress has put a Band-Aid on all of this by subsidizing the premiums using tax dollars, which, of course, only becomes more and more controversial as the cost of healthcare continues to rise. Neither side seems particularly interested in fixing such a fundamentally problematic structure.

The changing definition of "healthcare"

Beyond the fact that the structure of the plans is broken, the cost of insuring patients in general continues to rise. It's worth thinking about this from a first principles perspective for a moment. 

It's often said that "healthcare is a human right." Whenever I hear this, I always think to myself, "What do you mean by healthcare?"

If I asked you this in 1925, you would've meant basic exams from a physician, childbirth, and limited antibiotics. If I asked you in 1975, you would've added specialty care, inpatient surgeries, ER treatment, lab tests, and generic drugs. If I asked you in 2025, you'd add advanced diagnostics, integrated care teams, robotic-assisted procedures, MRIs, CT scans, genetic & molecular testing, wide-ranging specialty drugs, substance abuse programs, and wellness services.

The point is that healthcare changes significantly over time, both in terms of the services available and the expectations of the availability of those services. Said differently, over time, healthcare gets a lot more expensive. Of all the complex things surrounding the cost of healthcare, this one is perhaps the easiest to understand. I read the other day that premiums are up 27% this year on average, largely due to massive demand for Ozempic and other GLP-1s that many plans have decided to cover. How we define "healthcare" changes over time, and it only goes in one direction.

Back to the actuary working for the insurance company. This person is watching all of this change happen as more expensive healthcare services become available and patients' expectations rise. But the actuary is still limited by the same constraints. They need to set a rate that's low enough to pull in the healthy people but high enough to cover all of these great services. This is where the actuary's job gets difficult, and you start to realize that, in a way, the insurance company really gets paid to be the bad guy. They manage this tradeoff by deciding what they should and shouldn't cover (which is obviously enormously controversial) while trying to maintain a reasonable premium that pulls people into the pool. 

And then, on top of that, the government subsidizes these premiums, which exacerbates the problem: because patients don't see the full price, the insurer can raise the premium above the equilibrium level that would otherwise bring the ideal number of patients into the pool, increasing the overall cost of the plan.

Avoiding the tradeoffs 

There's always lots of talk about the increase in the cost of insurance, but we don't couple it with an acknowledgement of the increase in stuff we're getting access to and the tradeoff associated with that. We seem to be stuck in a rut of assuming we should get instant access to any new service that emerges. We then get access to those services. We then see our premiums go up. And then we complain about it. We ignore the obvious tradeoff being made and blame the insurance companies when they, as a last resort, have to manage it.

This comes back to my point around what we mean by healthcare being a human right. Are we talking about healthcare in 1925, 1975, or 2025? Because those are very, very different things with very different price points. Obviously, no politician is going to advocate only covering the services that were offered in 1925, so the tradeoffs of instant access to amazing healthcare at a reasonable cost are completely ignored.

The cost of healthcare debate is a lot more complicated than simple greed or inefficiency. It's about very basic and fundamental structural tradeoffs associated with ever-expanding services and patient expectations, and the avoidance of hard decisions at the policy level. Hopefully, we'll look back at the shutdown as the start of an enormously important and long-overdue set of conversations.

*Note: Employer-sponsored plans must also cover preexisting conditions, though they don’t face the same adverse selection outcomes that the ACA plans do: these employees are likely under less financial pressure, and they’re more likely to enroll when they join the company rather than when they get sick.

The AI Bubble

It seems like the talk around the AI bubble is heating up, and the experts seem to be more and more confident that we're heading for a correction. A few thoughts:

1/ If you believe AI is way overhyped, then we're in for a correction. If you believe it's properly hyped, we're also in for a correction. In every meaningful technological revolution, the money comes into the market before that technology has reached its potential. Investors understand the opportunity before users understand the use cases. Too much money too fast inevitably leads to a bubble bursting (this was the case with railroads, electricity, the telephone, etc.). In all of these cases, the technology eventually grew into the hype, but it took a lot of time, and the pace of that evolution was impossible to predict. Boom >> Bust >> Recovery.

2/ A lot of people are comparing the AI bubble to the dot-com bubble of 2000. On one hand, this is a ridiculous comparison. Many of the public companies in the dot-com boom were pre-revenue, and PE ratios for those that had earnings reached around 200, whereas the leading public AI companies have enormous revenues and PEs in the 20s and 30s. On the other hand, this doesn't reflect the strength of the bubble in private markets: for a variety of reasons, there is proportionally much more money in the private markets than there was back then. And it's easier to mislabel yourself as an AI company in the private market, which only inflates valuations further. As a reference point, between 1996 and 1999, 2,290 companies went public. Between 2020 and 2024, only 640 companies went public. So the average retail investor, who can't access the private markets, might be protected from much of the pain of a correction.

3/ A reason to be more optimistic about the pace of AI adoption versus the dot-com era comes from comparing each era's constraints. When Amazon went public, there were only 17 million adults with internet access, compared to around 5 billion people today. So AI's TAM is, in theory, almost the entire world's population. AI has its own constraints, such as energy, data, compute, and regulatory hurdles, though those feel somewhat less restrictive.

4/ Finally, I do wonder whether the mislabeling point above is even more rampant than what we saw in the dot-com boom. Finding a software company these days that isn't labeling itself as an AI company in some form is like finding a needle in a haystack. But much of what's sold as AI is simply old-fashioned, rule-based, deterministic software that doesn't think or reason on its own. Worse, several companies have been caught using human labor in the background and disguising it as AI. The broader difference between AI and regular software isn't well understood by most people, and private investors have weaker incentives than public market investors to highlight the distinction, as they're often more focused on achieving their next valuation markup than on pursuing outsized operating margins.

In short, it’s hard to believe we’re not in some kind of bubble, but a correction feels like it’ll be more moderate and proportionally more impactful in the private markets. But even that should be taken with a grain of salt, as bubbles are intrinsically more about human psychology than any fundamental logical reasoning, and, well, as we know, humans are weird.

Mean Reversion In Decision Making

One misunderstanding of leadership I’ve observed is what I call “mean reversion in decision making.”

When faced with a difficult decision, executives often gather information and call a meeting with impacted stakeholders. Good leaders ask thoughtful questions and listen carefully. But too often, they then take everyone’s opinions and settle on the average — the compromise that makes the most people moderately happy.

That’s management malpractice. A leader’s job isn’t to satisfy the majority; it’s to do what’s right for the business. And the right decision, especially a tough one, is often far from the mean.

Doing unpopular things is hard — but that’s the job. If you find your hardest decisions are generally fairly popular with your team, you might be doing it wrong.

AI Thoughts

A few notes on thoughts and discussions I've had around AI over the last few weeks:

Jerry Neumann wrote an excellent piece that’s absolutely worth reading titled AI Will Not Make You Rich, where he points out that technological revolutions often create much more downstream value by opening up new opportunities than they do for the innovators breaking new ground. I've written before about how technologies like the automobile led to highways, which led to suburbs, which led to superstores like Walmart, etc. The question with AI is, which companies will benefit most from this new technology downstream? It is surely companies in industries with a high number of knowledge workers who can quickly become more productive, opening up a lot of new capital to invest in various new projects. The opportunity for healthcare feels enormous. In other words, healthcare companies don't necessarily become more valuable because they're using AI. Instead, they become more valuable due to the new capital generated from their use of AI, which can be invested in things we haven’t thought of yet.

Related, there's been a lot of talk about AI destroying jobs and the need for a universal basic income. There are two reasons I'm quite skeptical of this. First, Americans have become significantly more productive through multiple technological revolutions (agricultural, industrial, and information), and centuries later, we're still roughly at full employment. The burden is on the naysayers to explain why this time it's different. I haven’t heard a convincing explanation. Second, the reason I don't believe this time is different is that people discount human ambition and the unending desire for growth. If AI can replace workers, that translates into profits, and winning companies take those profits and invest them in new ideas that require more people to execute on. Sure, some companies will be able to reduce costs via AI, and instead of investing in new stuff, they'll return that capital to investors. Those investors will say thank you and then invest that capital into a company that is investing in new stuff. Pocketing profits instead of investing in growth is a losing game over the long term, and the money typically winds up in the right place. 

Finally, one of the most interesting questions around AI for me is where the next wave of value in large language models will come from. The infrastructure layer (Nvidia, AMD) and the model layer (ChatGPT, Grok) have already captured enormous upside, not that you shouldn't invest in them, but you're kind of late. The big question is whether there’s a durable layer above the model, or if the model itself is the end product. Put differently: when you think about an LLM “product,” is it just a simple prompt in a ChatGPT-style interface, or does real product value emerge when LLMs are fused with unique data, workflows, and distribution?

The fun (and challenging) part of following and investing in AI at this stage is that it seems the task isn't to have the answers to all the questions; it's much more about figuring out the right questions to ask. 

Software That Knows You

The other day a friend asked me to define AI in as few words as possible.

I blurted out, "AI is a subset of software that can think, reason, and learn."

There are probably a few issues with this definition, but I'm ok with it. 

Another way of describing AI is to call it software that can make its own mistakes. Traditional software only makes a mistake when the person coding it makes a mistake. That's not AI. When the AI runs with something on its own, it becomes much more powerful, and mistakes become more likely. Ironically, there’s great power that comes from the ability to make mistakes.

Microsoft Excel doesn't make mistakes. It's not AI. It doesn't look at your formula and give you its best guess. It either calculates it correctly or it gives you an error message.

With some web services, this distinction isn't so clear. Take recommendation engines as an example. You might think that the videos YouTube recommends that you watch are coming from some magical AI, but they're not. Benedict Evans made a relevant point on this in his newsletter this week:

"YouTube never knew what was in the video, Instagram didn’t know what was in the picture, and Amazon didn’t know what the SKU was: they each had metadata written by people, and they can look at the social graph around them (“people who liked this liked that”), but they can’t look at the thing itself."

Said simply, YouTube doesn't know you, and it hasn't watched the video it’s recommending to you. It's just matching the tags attached to the videos you've watched against the tags of videos watched by people with similar viewing histories, and attaching those matches to your user account. You might not like the video it's recommending, but that's not AI thinking and making a mistake. That's an engineer writing an imperfect algorithm. Evans continues:

"How far do LLMs change this — how far do they mean that YouTube can watch all the videos and know what they are and why people watched, not which upload they watched, and Amazon can know what people bought, not what SKU they bought? And how does that change what we buy, and what gets created?" 

The promise of LLMs is that they actually will start to know what they’re recommending, which is a hard concept to get your head around, but it's particularly exciting for healthcare technology. Traditional clinical decision support (CDS) tools are doing something very similar to what YouTube is doing — matching what they know about you against patterns of patients with similar inputs. CDS tools don't know the patients they're supporting. They haven't “watched the video” of you. The exciting part of AI in decision support is that the LLM can begin to know you really, really well. By incorporating all kinds of factors (your emails, texts, calendar, conversations with doctors, medical history, social/demographic attributes, wearable data, etc.), it can start to actually get to know who you are orders of magnitude better than your doctor could in a 15-minute visit. Or even a dozen 15-minute visits. It can better assess your health, make more accurate recommendations, and perhaps most importantly, tap into the right highly customized levers to maximize positive behavior change.

The real innovation and step forward with LLMs in healthcare won't come from more software with more accurate algorithms and better tagging that recommends a better treatment. It'll come from the fact that the AI knows your whole story. It's like YouTube watching all of its videos before making a recommendation. The potential is hard to fathom.

Management Advice

“The only good generic startup advice is that there is no good generic startup advice.”
-Elad Gil

The same goes for management advice. It’s all situational. It’s dependent on your company’s stage, size, personalities, industry norms, relationships, power dynamics, cultural context, and a dozen other factors that make every situation unique.

I thought of this the other day when I came across an article on Business Insider where Mark Zuckerberg announced that he doesn’t do 1:1s with his direct reports and prefers to engage in spontaneous conversations as needed.

The article was promptly reposted all over social media by worn-out managers, justifying their decision to stop doing 1:1s or explaining why they never started them in the first place. “Kill the standing 1:1,” many of them said.

This reminded me of another profile of Zuckerberg from several years ago in Inc. Magazine titled, “Why Mark Zuckerberg Thinks One-on-One Meetings Are the Best Way to Lead.” Zuckerberg described his standing 1:1s with his direct reports as “a really key way in which we share information and keep stuff moving forward.”

Zuckerberg has modified his approach to 1:1s because the situation he’s in today is different from the situation he was in back in 2017. And it’s safe to say most managers are not in the same situation as Zuckerberg — running a company with possibly the best business model in tech history, still growing over 20% annually on more than $160 billion in revenue. His situation is remarkably unique, so I’d be cautious about copying almost anything he does from a management perspective.

I’m not defending the 1:1 or any other management tool. The point of this post is simply a tweak on Elad Gil’s quote above: “the only good generic management advice is that there’s no such thing as good generic management advice.”

Skilled leadership isn’t about applying management tools; it’s about applying the right management tool to the right situation in the right context at the right time.

Aligning Your Sales Plan With Your Growth Plan

One of the most common mistakes I see in B2B tech companies is a lack of clear alignment between their annual plan for growth, their sales team headcount, and their sales team quotas. I thought I’d lay out an approach I've used in the past that’s worked really well. I certainly welcome any feedback on it.

Let’s assume you have $20M in contracted ARR (annual recurring revenue), and by the end of the year, you want to get up to $32M (60% growth). For simplicity, let’s assume you’ll have no churn. So you have a sales bookings target of $12M for the year. 

Let’s say you’re selling into large enterprises, and you have confidence that a talented, hard-working seller can reasonably generate $1M per year. However, on average, because of attrition, rep ramp-up time, and underperformance by some sellers, the average seller's attainment rate in a year is 80% ($800k). 

That means, to get to a $12M booking target, you’re going to need 15 sellers on staff at the beginning of the fiscal year. 

In order to recruit and retain 15 high-quality sellers, market comparables for your industry, stage, and geography say that you need to pay each of them a base salary of $100k and an on-target commission of $150k, meaning that if they hit their $1M quota, they have total on-target earnings for the year of $250k.

I’ve included these numbers in the table below. The inputs are the numbers you need to fill in; the rest are computed by formulas.

  Annual Bookings Target           $12.0M   (input)
  Achievable Seller Quota          $1.0M    (input)
  Seller Attainment Rate           80%      (input)
  Expected Bookings per Seller     $800k    = quota × attainment
  Sellers Needed                   15       = target ÷ expected bookings per seller
  Seller Base Salary               $100k    (input, market-based)
  Seller On-Target Commission      $150k    (input, market-based)
  On-Target Earnings (OTE)         $250k    = base + commission
  Total On-Target Sales Comp       $3.75M   = sellers × OTE

So you see that to build your bookings plan, you need to know the following for your company:

  1. Annual Bookings Target

  2. Achievable Seller Quota

  3. Seller Attainment Rate

  4. Seller Base Salary (market-based)

  5. Seller On-target Commission (market-based)
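
If it helps, here's the same model as a minimal Python sketch, using the illustrative numbers from above (the variable names are mine):

```python
# A minimal sales-capacity model using the illustrative numbers above.
# The five inputs are the numbers you need to fill in; the rest are formulas.
import math

# --- Inputs ---
annual_bookings_target = 12_000_000  # $12M of new bookings
seller_quota = 1_000_000             # achievable annual quota per seller
attainment_rate = 0.80               # average attainment across the team
base_salary = 100_000                # market-based
on_target_commission = 150_000       # market-based

# --- Formulas ---
expected_bookings_per_seller = seller_quota * attainment_rate  # $800k
sellers_needed = math.ceil(annual_bookings_target / expected_bookings_per_seller)  # 15
on_target_earnings = base_salary + on_target_commission  # $250k OTE
sales_comp_at_plan = sellers_needed * on_target_earnings  # $3.75M

print(f"Sellers needed at the start of the year: {sellers_needed}")
print(f"OTE per seller: ${on_target_earnings:,}")
print(f"Total sales comp if everyone hits quota: ${sales_comp_at_plan:,}")
```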

As you fill in these numbers, you’ll likely find that the math doesn’t work. Your quotas aren’t high enough to hit your target. You don’t have enough reps, etc. 

Ideally, you fix those things. Lower the bookings number, quickly hire more salespeople, etc. 

But in many cases, that won’t be realistic. The board won’t approve a lower number, or you don’t have the budget to hire more salespeople, etc.

That means your bookings plan is broken. That’s the one thing you should take away from this post: you don’t have a bookings plan that lines up with reality. Putting your numbers in a simple model like this makes that clear.

Now comes the hard part. You have to start making tradeoffs. If you can’t change the bookings target and you can’t hire more salespeople to make the model work and hit your target, your only other levers are to increase quotas or increase the seller attainment rate.

But that is akin to taking money out of your sales team’s pocket. The effect of this is that you’re telling them that, in order to make the money they’re worth, they need to pull off something you don’t actually think they can accomplish. That’s like calling someone on your HR team or your finance team and telling them you’ve decided to pay them less than you told them you would pay them. This is a really serious decision. As a leader, you’re taking pressure off of yourself so that you don’t have to have a hard conversation with your board, and you’re moving that pressure to your sales team by reducing their earning potential. Great salespeople don’t think of commission as an extra bonus; they think of it like other employees think of their salaries. It’s what they’re worth; it’s what they’re owed if they do their job, just like any other employee. By shifting the pressure to them, you’re risking serious engagement and retention problems. And that should be treated with the same seriousness as a difficult conversation with the board.

Obviously, every leader wants a clean model that works, but that won’t always be the case. Great leaders manage the hard tradeoffs to get to the best answer and clearly communicate their thinking along the way.

Some caveats:

  • The model assumes you know all of your numbers with certainty. That won’t be the case, especially in the early days. You have to put a stake in the ground and make your best educated guess for each of them based on the data you have.

  • Negotiating your annual bookings target is a highly complex conversation that depends on your specific situation. I’ll likely write a post on that soon. The usefulness of the model is that it gives you a framework to use in that conversation so the board is aware of the tradeoffs you’re dealing with and can provide input. Whatever tradeoffs you make, I’d highly encourage you to disclose them.

  • Consider a base-case bookings plan that the board is happy with and a high-case plan that the internal team rallies around.

  • Don’t stress that your model doesn’t work perfectly; this is true for most early-stage companies. But do focus on how to best manage it. The really big mistake here would be either not knowing that your model doesn’t work or not managing its tradeoffs. Being able to explain the gaps and your plan to fix them is the most important thing. 

Shaping Company Culture

A very common question I’ve received from job candidates over the years is: "What is your company's culture like?"

I've taken two different approaches to answering this question:

The first is to talk about my company's values that our leadership team created, the initiatives we're running that quarter or year around employee engagement or work-life balance, the fun events we do after work, or the employee development initiatives we've invested in.

The second, and much more sincere and accurate, way I've answered this is to take a step back and try to be an impartial observer of my company and talk about what I see every day: trends in the way people behave, how they treat customers, how they treat each other, what the company is good at and what it is bad at, what makes us unique, the sense of mission, and the interesting things that I see inside the company that I don't see in other companies—good and bad.

Often, what I've seen as an objective observer isn't the same as the culture we wanted to create from a top-down perspective.

This is because, at a certain scale, a company's culture stops being what leadership wants it to be and starts becoming the actual, on-the-ground, higher-profile, and consistent behaviors of the broader team. These things very often don't relate at all to work-life balance programs or team outings at the local bowling alley.

These behaviors are directly tied to the high-status people inside your organization—that is, the people you reward and promote. Your teams are watching the behaviors of these people very closely, far more closely than they’re paying attention to any top-down initiative.

If you promote one high-profile salesperson who overpromises and lies about the competition, people will get on board and do the same, or they'll opt out of your company, and that’ll be a part of your culture. If you promote leaders in your company who aren't willing to admit they got something wrong in front of their team, people will emulate that behavior. If leaders don't hold their employees accountable for results, this will spread, and you'll have a culture that isn't accountable.

This is rooted in social learning theory and a concept called “status signaling,” where people learn the optimal way to behave by watching and emulating others with higher internal professional status. In my experience, this is an extremely powerful force that drives culture more than any other aspect.

Quite simply, your culture isn’t driven by what you say it is or what you want it to be; more than anything, it’s driven by the values and behaviors of those that you reward and promote. Do so carefully.

Investing In Pure Health Tech

Define Ventures published an interesting report on venture-backed health tech investment performance titled Health Tech's Defining Decade. They point to a few relevant stats:

  • Since 2020, health tech has made up 10% of all venture funding.

  • In 2024, there was $18.6 billion in health tech venture investments. 

  • 10% of private unicorns (companies with valuations over $1 billion) are in health tech. 

  • Since 2020, 18 health tech unicorns have exited (through IPO or M&A). See full list below.

I like the way they segmented the 18:

1/ SaaS (pure tech, software) - 1
2/ Services (serving patients) - 7
3/ Hybrid (mix of SaaS and service) - 7
4/ Payer - 3 

This data speaks to the challenges of investing in pure software health tech companies or vertical SaaS in general. 

With a couple of exceptions, the services and hybrid companies are mostly telehealth and traditional provider organizations with some tech that improves the patient or provider experience (I probably would've put Health Catalyst and Progyny more towards the SaaS, pure tech group and possibly Fitbit — while they're a hardware company, their profits are primarily software-based). 

The payers are, well, payers with some tech to drive better outcomes.

The only pure SaaS company they list is Doximity, and while they are an incredible company, you shouldn't really think of them as a health tech company; they're an advertising platform for pharma companies. Again, they're a fantastic company and definitely support the healthcare ecosystem, but the vast majority of their revenue comes from the same budget pool as Instagram and Snapchat.

It's now become sort of a common trope: a health tech company builds a software product and grows really fast with nice, high-gross-margin revenue, only to find out it has run out of TAM (total addressable market) and needs to pivot to lower-margin services to continue to grow. And by services I don’t mean managed services or software consulting; I mean doing the actual work that their customers do. That's consistent with this data. It's really hard for pure software companies in healthcare to get big. They can’t just provide software to their customers who do the work; they have to do the work, too. While people will talk about healthcare as a $4.5 trillion industry, healthcare IT represents just 4% of that, or about $180 billion. Most of that TAM has already been eaten up by EHRs, telehealth, analytics, and health information exchange. Further, the software infrastructure that large healthcare organizations use is dominated by several large horizontal software vendors — AWS, Snowflake, Salesforce, Symantec, Okta, etc.

So, the opportunity largely exists in either unseating the large incumbents in some form or in targeted areas, such as clinical/operational efficiency, patient experience/engagement, analytics, etc., but again, you're likely going to run into a TAM problem. 

Of course, all of this brings us back to AI and how it will impact health tech and health tech investing, as that’s probably the best opportunity for more pure tech companies to achieve service-like valuations. And there are probably more questions about this topic that we haven't even thought of yet than the ones we have. My sense is we'll find that these products really impact margins more than TAM, but who knows? Regardless, as AI emerges in health IT, new frameworks will be required for how to think about these opportunities. I'll be thinking about that a lot and posting those thoughts here.

Life Advice

A few weeks ago, I had lunch with my Godson, a sophomore in college. Towards the end, he asked, “Do you have any career advice for me?”

I immediately thought of this blog post on Capital Gains that talks about the dangers of giving life advice based on your own experiences. From the piece:

"…agreeable extroverts will probably tell you that the best thing you can do for your career is to meet as many people as possible, so you have lots of second-degree connections through which you can hear about opportunities. This is probably true for them, but someone who naturally loves nerding out about esoteric programming languages would dismiss this advice out of hand—meeting people is hard, getting them to like you is harder, so you should focus on simple, achievable goals like getting a thousand stars on your Github repository.”

Success comes from a combination of hard work, luck, and skill at the work one is doing. However, a less often discussed component is our inherent temperamental traits. In other words, depending on who you are, some stuff is easy for you, and other stuff is really, really hard. And this matters a lot.

Personally, it’s very easy for me to wake up in the morning and go to the gym. I almost don’t even think about it. I just go. On the other hand, it’s very difficult for me to keep in touch with friends and acquaintances. It takes lots of work.

I have a friend who is the exact opposite. He has struggled his whole life to find a way to build a consistent workout routine. I’ve seen him try enough to know that I don’t have more motivation or willpower than he does. I’m certain of it. It’s just inherently easier for me to do the thing we’re both trying to do. Conversely, he effortlessly keeps in touch with a huge network. Regular phone calls to friends when he’s driving. He stops in on people when he’s in town. Remembers birthdays. Sends gifts. I have to really work at it. It’s hard for me. Of course, it’s important to him, and he does it with purpose, but I can see that it’s also easier for him than it is for me.

Obviously, this applies to our work as well. So if you advise someone to get into the legal profession and, despite your personal success in that field, they have a really hard time consuming large amounts of complex text for long periods of time, all else equal, that person is really going to struggle, and you’ve given them really bad advice. Similarly, if you tell a math genius who is shy to get into a sales job early because that’s where the money is, you’re really setting that person back. Both because they’ll really struggle at the job and because they’ll suffer from a very high opportunity cost of not building multimillion-dollar machine learning models for an AI company or something like that.

People giving career advice are often biased and tend to lean towards their unique, long, and winding path to success. That’s a bad idea. The most extreme example is “follow your passion.” This is great advice if your passion is financial analysis, but not so good if it’s skateboarding.

All of this is to say that giving career advice is really hard, and we have to be cautious about not sending younger people down our paths because what worked for us in so many ways may never work for them.

A while back, I wrote a summary of my best career advice at the time titled 15 Things I Wish I Knew Before I Started. I went back to take a look at it to see if I was guilty of giving biased career advice. It's not perfect, but I was happy to see that it holds up fairly well.

Anyway, if you’re wondering what I told my Godson, I came up with two pieces of (hopefully) unbiased advice:

1/ Pursue opportunities with leverage (adapted from Naval Ravikant). You may not be able to do this in your first job out of college — honestly, your first few years are probably going to suck, so you’ll have to keep your head down and grind for a while — but always try to ultimately pursue work where your outputs are disproportionate to your inputs.

2/ Leave something on the table. Leave jobs, partnerships, and business relationships where the other person feels like they got more from you than what they paid. That results in a reputation with a surplus where people will feel good about working with you and will say good things. That surplus compounds. If someone thinks you’re good and they hear from someone else that you’re good, now they think you’re really good, and they’ll tell people that you’re really good, now those people think you’re really good. And on and on. If you do this well, your career will feel less like you’re climbing a ladder and more like you’re sailing on a boat with a steady wind at your back.

Why It's So Hard To Measure Healthcare Technology's Impact

With the thousands of healthcare technology companies that have raised hundreds of billions of dollars in venture capital and other financing over the last couple of decades to make healthcare more efficient, you'd think we'd be seeing the cost of healthcare delivery come down. Of course, this hasn't happened at all. A lot of people like to point to this chart, which shows that the price of things like TVs is dropping while the price of things like healthcare is going up.

[Chart: price changes over time, with consumer goods like TVs getting cheaper while healthcare gets more expensive]

This chart has always annoyed me a bit because it lacks so much of the nuance of each of these industries, but let's go with it for a second and understand why the cost of healthcare isn’t dropping like the cost of a television.

Clearly, televisions have become much cheaper and much better over the last several years. Many technological advances such as LCD, OLED, automated mass production, and labor cost reduction have brought down the cost of producing a TV. Add to that significant global competition, perfect pricing transparency, and the right incentives among consumers, and it’s no surprise that prices have dropped dramatically.

Let's apply that logic to healthcare. Imagine a health system that employs 500 primary care providers. They go out and buy AI software products that make their primary care providers, say, 100% more efficient. Because of technological advancement, it now costs the health system 50% less to supply a PCP appointment to a patient. That's great news. But how does that cost reduction actually show up and become visible to an end consumer?

Well, first of all, it's highly likely that rather than reducing costs, consumption would go up, and costs would stay the same if not increase. More appointments mean more patients, which means more downstream referrals to higher-cost services. The end consumer doesn't buy their own personal set of healthcare services like their own TV for their living room; they buy into a pooled group of patients under an insurance plan. If that pool increases their consumption of healthcare services, each person's premium increases. More healthcare consumption means higher insurance premiums. So, it's possible that technological advancements actually increase the cost of healthcare, which is good in some cases and not so great in others.
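
Here's a toy illustration of that dynamic, with entirely made-up numbers: a 50% unit-cost reduction that still increases total spend once cheaper visits induce more consumption and downstream referrals.

```python
# Made-up numbers: AI halves the unit cost of a PCP visit, but cheaper
# access induces more visits, and new visits generate downstream referrals.
cost_per_visit_before = 200
visits_before = 1_000_000

cost_per_visit_after = 100       # AI doubles PCP efficiency
visits_after = 1_600_000         # cheaper access -> more consumption
referral_rate = 0.15             # share of new visits producing a referral
avg_referral_cost = 1_500        # specialist visits, imaging, procedures

spend_before = cost_per_visit_before * visits_before
new_referrals = referral_rate * (visits_after - visits_before)
spend_after = cost_per_visit_after * visits_after + new_referrals * avg_referral_cost

print(f"Spend before: ${spend_before:,.0f}")  # $200,000,000
print(f"Spend after:  ${spend_after:,.0f}")   # $295,000,000
```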

But put that aside: let's say demand and consumption stay the same, the health system can deliver on that demand more efficiently because of the AI it has procured, and it has material savings to do something with. If the health system is for-profit, it might be inclined to return that money to shareholders without impacting the price for consumers. If the health system is nonprofit, it might just reinvest it back into upgrading a building or hiring more specialists, which would also keep prices flat.

When a television company makes a technological advancement that allows it to make better TVs at a lower cost, it might also be inclined to invest in growth or return money to shareholders. However, because of intense competition and price transparency, it is also very likely to pass those savings on to consumers, lowering the price of the TV for everyone. But health systems don't have the same incentives as the television company in several important ways:

1/ Demand for their services isn't elastic (people don't shop for PCPs on price). 
2/ Their prices are regulated and negotiated with payers and aren't dynamic the way television prices are.
3/ The price actually being paid isn't clear to the patient, as they only pay a portion that depends on their specific health plan.

I wrote about all of these issues in 15 Reasons Why Healthcare Has a Business Model Problem and Healthcare's Incentive Misalignment, so I won't rehash them here. 

The point of this post is that there are a ton of amazing healthcare tech companies that are reducing costs for providers, payers, and patients, improving the quality of life for clinicians, improving the quality of care across the board, and doing absolutely amazing things for the American healthcare ecosystem. But, for the reasons described above, these savings are unlikely to appear in high-level cost metrics. At least not until the incentives change.

AI Adoption In The Enterprise

When Steve Jobs was a child, he read an article about a study on the efficiency of motion for the various species on the planet. The study ranked different species by how efficiently they moved across long distances. The study showed that the condor was the most efficient species and used the least energy to move across a kilometer. Humans were not very efficient and were about a third of the way down the list. However, a scientist decided to build on this study by testing the efficiency of a human on a bicycle. He found that a human on a bicycle was way more efficient than the condor or any other species. Jobs used to tell people this story as an analogy for the personal computer. With a computer, humans can move much faster and get much more done than we could without one.

As Jobs put it, “Computers are a bicycle for the mind.”

SaaS products have become an extension of this concept inside the modern enterprise. They’re like bicycles for the mind in that they make the average worker more productive by improving their workflows, providing them with new data and insights, providing infrastructure, or integrating different software applications to streamline work.

Getting these products to market at scale was extremely difficult. I have the scars to prove it. CIOs were resistant to software hosted outside of their direct control. Integrations took a lot of time and effort and dramatically slowed sales cycles. There was reluctance to partner with lots of modular software vendors. Users were reluctant to change their workflows. Buyers wanted heavy customization. It took years for it to work.

But amazing go-to-market teams overcame all of this, and now, most large companies use hundreds if not thousands of SaaS applications. 

These SaaS companies and their investors have thrived thanks to the adoption of SaaS, but more importantly, thanks to the SaaS business model, where you aren’t just selling software for a one-time fee, implementing it, and going out to find the next customer. You are selling an annuity contract that will grow as you add features and your customers add employees. SaaS companies generally use ‘per-seat’ pricing, where a customer pays based on the number of employees using the product. This annuity places a large focus on CAC/LTV ratios (the cost of acquiring a customer relative to the lifetime value of having that customer). If you calculate that the average customer pays you $100k in the first year, that the price will increase 10% per year, and that the average customer will stay with you for 7 years, you can justify very large investments in new customer acquisition. This has resulted in innovative sales organizations and structures to drive speed and efficiency. SaaS companies would hire armies of Sales Development Reps, or “SDRs,” right out of college to do prospecting, with clear promotion paths up to salespeople, sales managers, etc. Similar customer success and user success teams were built to manage ongoing renewals and upsells and drive engagement with the product to make sure the customer kept paying the annuity.
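
As a rough sketch of that math (my own illustrative calculation; real LTV models would typically also apply gross margin and a discount rate):

```python
# LTV for the example in the text: $100k in year one, growing 10% per
# year, over a 7-year customer lifetime. Illustrative only.
first_year_acv = 100_000
annual_growth = 0.10
years = 7

ltv = sum(first_year_acv * (1 + annual_growth) ** t for t in range(years))
print(f"Lifetime value of the average customer: ${ltv:,.0f}")  # ~$948,717

# At, say, a hypothetical 5:1 LTV:CAC target, that justifies spending
# roughly $190k to acquire each customer.
print(f"Max CAC at a 5:1 ratio: ${ltv / 5:,.0f}")
```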

With the emergence of AI, there’s now much speculation that these SaaS companies are in trouble: 1/ because AI will allow a SaaS company’s customers to operate with fewer employees, reducing the number of seats they can sell, and 2/ because competing AI products will do the job for the employee, making the SaaS product superfluous (the AI doesn’t need a bicycle).

When rolling out AI across enterprises, you could consider a simple three-step framework from the lightest touch to the heaviest touch: 1/ Making existing employees more productive, 2/ Replacing the work employees are already doing, and 3/ Figuring out what work needs to be done. Let’s consider each:

Phase 1: Making existing employees more productive, i.e., the extension of the ‘bicycle for the mind.’ This is what the LLMs are doing now and how most people are experiencing AI at the moment. They can perform tasks done by humans, but they require oversight. In my mind, they’re really just slick SaaS-like products. There are also other types of AI, like robots, that can mimic human expertise and analyze information, but these are by no means widespread and are really just making teams more productive. It’s all great and seems to be getting better, but I don’t think it’s going to change the industry or the business model in a massive way. It probably actually helps because the value creation gets bigger. In theory, a SaaS company makes an employee, say, 5% to 20% more productive and charges its customers some percent of that productivity gain. If SaaS+AI makes employees 80% or 100% more productive, the SaaS contracts could get really large.

Phase 2: Replacing the work the employees are already doing. The most talked-about version of this that I’ve seen is automating coding, data entry, customer service, or lower-skilled sales with AI agents, where the AI is doing the human’s job rather than augmenting it. This leapfrogs the ‘bicycle for the mind’ concept. If successful, it would very much change the SaaS business model because you’re not selling a license for an employee; you're effectively selling an employee. In theory, if an employee makes $50,000 per year and the AI replaces their work, a vendor could price the product at $49,999 per year, just under the employee’s salary. Or, because the AI doesn’t sleep or take vacations and can work much more quickly than the human, they could charge 2x, 3x, or 10x the employee's salary. Lots of people are very concerned about this phase because it’s the thing that will eliminate jobs at a very large scale.

I’m skeptical of that. For that to happen, a company would have to conclude that the value at Phase 1 is tapped out, meaning that it’s better to stop making the human more productive and pass the baton to the robot. It seems to me there will be a very high bar for that decision. I wrote about the Jevons Paradox in December, which says that when the use of a resource gets more efficient, we consume more of it. Maybe the best example of this is bank tellers. It was widely believed that when the ATM was invented, it would eliminate bank teller jobs. The exact opposite happened. There are more bank tellers today than ever. When workers get more efficient, we want more of them to do more stuff. Companies are meant to grow. If bank tellers get more efficient because they don’t have to distribute cash or take deposits all day and can do higher-value activity at the same price, companies will hire more of them. I suspect this will happen with AI. The day-to-day work may change, but as AI becomes more prominent, your workers get more efficient, and you want more of them, not fewer. Going the other way seems anti-growth to me, and anti-growth is not a stable place for a company to be.

Further, many SaaS companies don’t use a per-seat model. Healthcare SaaS companies might charge based on patient or member lives. Others charge based on the amount of differentiated data they provide or some set of assets the customer needs to support. AI will still need many of these things to do the work of humans, and I’m not sure those pricing models will change just because a robot is using the product.

Finally, today’s companies are built entirely around people. This was the key insight in the founding of the workforce management company Rippling – that a company’s central nervous system is its people – the tools, systems, support, infrastructure, management, and allocation of resources all center around people. We like to think companies are built around their customers, but they’re not. More granularly, the day-to-day tasks that get done are people-first tasks. We know they can be greatly augmented by software, but replacing them at scale is going to be a long, complex road. Ben Thompson had a great piece on this recently, pointing out that the companies that really find they can operate with AI as a replacement for employees will have to be companies that haven’t started yet. Companies that don’t have the hard-coded employee-based structure that companies have today. Companies that are native AI companies. And I don’t mean they start a company with people and have an AI product. I mean, they start a company without people, or at least without any large functions beyond a small group of individuals that manage the robots. That’s an important insight and distinction.

Phase 3: Figuring out what work needs to be done. This is where things get interesting. This, to me, is the pinnacle where the AI climbs the stack of human intelligence and replaces the highest-value activities. It’s where it goes from a front-line worker to a strategist who can zoom out and tell senior leaders how to run their company. There are signs of this with things like supply chain management and dynamic resource allocation, but many of those things still feel like Phase 2. Real Phase 3 is so far out at the moment that it’s kind of hard to comprehend and talk about. 

All of this is just one framework to look at the diffusion of AI into the enterprise. As with any framework, you might have the elements of it correct, but the framework itself might be wrong. Regardless, as it stands today, the evolution of SaaS into AI-driven enterprises is less about replacement and more about enhancement. The replacement concept gets talked about a lot but still feels vague and, historically speaking, feels somewhat irrational. Companies built around people will find ways to amplify their workforce’s potential, not eliminate it.

I’m pretty sure the bicycle will be around for a while.