The market saw DeepSeek as an AI disruptor, but history suggests otherwise. Here’s why NVIDIA, OpenAI, and AI’s biggest players will only grow stronger.
KEY TAKEAWAYS
MY HOT TAKE
How deep is your love? OK, it has been a few short weeks since DeepSeek surfaced right in the middle of the US stock market. The news sent waves through the rapidly growing ranks of AI zealots, causing a bit of a panic. When I say zealots, I mean investors (stock investors), not users or developers of AI, or people who understand how AI models work.
I wrote quite a bit about this in the aftermath of the market’s negative reaction. I viewed the DeepSeek announcement as positive for the overall AI ecosystem. All nationalism aside, competition is healthy for any industry, and to be clear, there were already many notable players, not least of which were darling NVIDIA, Microsoft, Alphabet, Meta… and many more mega-cap household names. Not to mention all the emerging startups that we have never even heard of. Oh, and what about AI giant OpenAI and its ChatGPT? Sorry, it’s still a private company. Keep that in mind, because it is important for my plot this morning.
So, I am on record all over the press stating that the announcement was good for AI. I also posited that the announcement would have no material impact on the future performance of the key players in the industry. Even if DeepSeek were able to accomplish the same thing as OpenAI, Google’s Gemini, or Meta’s Llama with less computing power, actual power ⚡🔌, or, I think, most importantly, financial power (DeepSeek purportedly accomplished this with only $6 million), it still does not spell doom for the entire AI ecosystem. Quite the opposite! I like to think of it in reverse. Given the same large pool of resources, a more efficient model can yield something more powerful than the less efficient models could. Yeah, like better, faster, smarter: all good things when it comes to AI. I likened this to PC processors. Making more efficient chips meant better-performing and more expensive PCs, not cheaper ones with the same level of performance. That is how Intel became the company it is… or was, before NVIDIA moved its cheese.
Now, there are many folks who suspect that DeepSeek is not being completely honest about the resources required to build its model, but I would say “who cares?” DeepSeek delivered what appears to be a very solid model. Full disclosure: I have not tried it for security reasons, but the expert users of the world seem to be impressed with its performance. DeepSeek even got a nod from OpenAI founder Sam Altman. But Altman also said something to the effect of “so what, we can do that too.” And I am pretty sure they can, based on my experience with technology. The arrival of competition was simply a telegram from the world that, if it (OpenAI) wants to survive, it had better tighten its laces and prepare for a race. Remember, this is software, so code can be produced relatively quickly and cheaply. I am fairly confident that engineers at all the major AI players have already built several versions of the same type of model that DeepSeek introduced.
Have you noticed that I haven’t even mentioned NVIDIA yet? Why? Because it makes the chipsets that fit into the servers that fit into the data centers that crunch the data for ALL OF THESE COMPANIES. That’s right, it manufactures the shovels that all the gold miners need to find their prizes. It is OpenAI, Meta, Google, IBM, and xAI that were put on notice that day. But I get it, investors are trying to be cute and figure out all the implications of this so-called “more efficient” model. And who better to punish than NVIDIA? However, as I mentioned, if you know anything about how technology innovation happens, you may look at this a bit differently.
I know, I know, there was that fiber bubble when CLECs (competitive local exchange carriers) laid thousands of miles of fiber cable, financed by hedge funds, VCs, private equity investors… just about everyone. Their thesis was simple: more data transmission needed more fiber. Simple, right? Wrong! My brilliant friends at Bell Labs already had DWDM working in the lab, so THEY knew that the freshly buried fiber might have been a bit of overkill. DWDM stands for Dense Wavelength Division Multiplexing, which allows more data to be pushed through a single strand of optical fiber. This technology, along with overestimated demand, caused the bubble to burst, sending many network builders into bankruptcy. The overbuild can also be traced to all the cheap and dumb money chasing the market. Here is a little side note: demand ultimately DID catch up to supply, but it took about a decade. 😉
So, can we use this bit of history to inform us in today’s AI world? Is DeepSeek the DWDM that will pop this bubble? Nope. First of all, we are just at the beginning of the massive demand for AI. AI will proliferate through just about every bit of technology that we use today. All of that requires more AI models, data centers, power, data center infrastructure, servers, and yes, parallel processing chips. On that note, this does not mean that NVIDIA is out of the woods. NVIDIA needs to continue to improve its chips’ efficiency and power consumption, or its competitors, AMD and, to a sadly lesser extent, Intel, will take greater shares of the still-growing-despite-DeepSeek addressable market.
Earlier this week, Elon Musk, in the midst of his very busy schedule, announced the release of xAI’s latest large language model (LLM), Grok-3. There was lots of speculation about where the model was born, and by born, I mean in which data center. There is B-roll footage all over the internet of drone flyovers showing a massive data center in Oregon or some other remote location. That is where all those NVIDIA chips are going, and that is just one of many data centers being used by all the big, well-known AI names. Will those data centers be abandoned now because of DeepSeek? Oh, and by the way, even DeepSeek needs data centers.
Creating LLMs is not a one-and-done kind of thing. Sure, there is the heavy computing lift of pretraining. That is when the AI rummages through all the data it can get its hands on and processes it. From that effort is born a model like GPT-4 or Grok-3. It is like a child being born with all the school knowledge through, say, 8th grade. Impressive and very capable. However, think about all the skills you have learned since your 8th grade graduation. Not just through further formal education, but through life: your interactions, your experiences in the world. LLMs go through the same process via constant interaction and fine-tuning. That takes massive amounts of processing power as well, because the process is iterative. Finally, in order for these models to reason beyond just regurgitating data at you, they require significant processing power. Do you want a smarter, more capable AI? Well, you are going to need more processing power, because the iterations have to become more frequent across the whole life cycle, from pretraining to fine-tuning to addressing use cases (specific to a function or industry). This all takes data centers and chips! More of them.
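If you want a feel for why that compute lift is so heavy, here is a minimal back-of-envelope sketch. It assumes the widely cited ~6 × parameters × tokens rule of thumb for training FLOPs; the model size, token count, GPU throughput, and utilization figures are illustrative assumptions of mine, not numbers from DeepSeek, xAI, or any other vendor.

```python
# Rough arithmetic behind "more capable model = more compute."
# Rule of thumb (approximation): training FLOPs ~ 6 * N * D,
# where N = parameter count and D = training tokens.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens

def gpu_days(flops: float, gpu_flops_per_sec: float, utilization: float = 0.4) -> float:
    """Convert a FLOP budget into single-GPU days at a given sustained utilization."""
    seconds = flops / (gpu_flops_per_sec * utilization)
    return seconds / 86_400  # seconds per day

# Hypothetical 70B-parameter model trained on 2 trillion tokens,
# on accelerators sustaining ~1e15 FLOP/s (illustrative numbers only).
budget = training_flops(70e9, 2e12)
print(f"{budget:.2e} FLOPs")                         # prints 8.40e+23 FLOPs
print(f"{gpu_days(budget, 1e15):,.0f} GPU-days")     # prints 24,306 GPU-days
```

The point of the sketch is the scaling, not the exact figures: every pass of that life cycle, from pretraining to each round of fine-tuning, burns another budget of this magnitude, which is why efficiency gains get reinvested into bigger models rather than fewer chips.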
Now, let me step back. The demand is going to continue to grow, but it is not endless. Like anything else, all business decisions must be tempered with logic and rigorous analysis. Do you know what DeepSeek is probably working on right now? That’s right, a more powerful model to compete with the at least three even more powerful and similar models that the incumbents have released since it first hit the market.
As users, this has been a great exercise. It will yield us better, more capable models in the future. As investors, this validates that AI is here to stay, and if you didn’t get emotional and you agree with my assessment, this provided a buying opportunity. I think it was legendary venture capitalist Marc Andreessen who said DeepSeek was AI’s Sputnik moment, drawing a comparison to the Soviet Union’s 1957 launch of Sputnik, which pulled it ahead of the US in the space race. But did all that justify the market declines? I am pretty sure not. There should have been no Sputnik moment for investors in AI stocks. They should have been well aware that high-return stocks come with high volatility. It’s just the market, stupid.
That brings us back to a statement I made way up top. ☝️🙃 Do you think that DeepSeek’s announcement affected the value of its chief competitor, OpenAI? I would say it probably increased its value. Do you know why? Not only because of everything I just mentioned, but also because it is a private company, not subject to the emotional vagaries of the public market. Check out the following chart, which shows how major players were affected by DeepSeek’s announcement. If you look closely, you will see that even the companies most affected have all but recovered since, and that the whole lot of them have pretty much returned to their prior trends. Sputnik did not end the space race; it put it into overdrive, and it is still happening today. 🚀🚀 And there is nothing artificial about that bit of intelligence.
YESTERDAY’S MARKETS
Stocks climbed yesterday after FOMC minutes showed us that the Fed is taking its sweet time, but we already knew that, so investors yawned and found themselves thankful that nothing silly showed up. That’s about as “known” as you’re going to get in the new normal of unknowns, and markets like that for now. Housing Starts dropped more than expected, which should have been expected given the rate outlook and builder sentiment.
NEXT UP