Artificial intelligence has leapt into the public consciousness with a string of recent product announcements and relentless media coverage[1]. We have a slew of new launches from Nvidia, Alphabet (Google), Meta (Facebook), Microsoft and Apple, plus many more initiatives from well-funded start-ups like Anthropic, Cohere, Inflection, Mistral, Perplexity, SSI and xAI. These companies’ various demos, social media postings and advertisements have dominated the business news and helped educate the public about a remarkable array of new AI capabilities. For example, the best AI can:
- Read, write, see and hear;
- Author text and poetry;
- Compose music in any genre;
- Produce lifelike images;
- Create high resolution animation;
- Render photorealistic video;
- Converse in most languages;
- Write computer code;
- Do complex mathematics;
- Solve puzzles and play games;
- Drive cars and trucks;
- Design molecules;
- Find new drugs; and
- Operate factories.
You get the picture. Not surprisingly, on the back of these many breakthroughs, there has been an explosion of deal and partnership activity as leading companies jockey for position and competitive advantage. Elon Musk’s xAI raised an additional $6 billion in November at a remarkable $50 billion post-money valuation. Ditto Mistral, an upstart French contender that earlier this year closed a $640 million (€600 million) round at a $6.2 billion valuation, triple its value of just six months prior. Microsoft has announced a medley of high-profile AI deals with Inflection, G42 and OpenAI. Reddit announced its own partnership with OpenAI shortly after going public last spring and saw its already pricey stock jump approximately 10%[2]. Apple tapped the same company to supply advanced technology for its own array of AI-enabled software, phones and computers. The hubbub reached a tumultuous peak in early October, when OpenAI completed a new $6.6 billion financing round at a breathtaking, hectocorn-level $157 billion valuation, the largest venture deal in history.
BRAVE NEW WORLD
So, what’s all the fuss about artificial intelligence? The first thing we’d say is that AI technology is not new. After decades of false starts and dead ends, it’s just finally working. Frankly, until now, it’s been a long, largely irrelevant history. You might remember:
- Thinking Machines, an MIT spin-off that burst onto the scene in 1983, only to go bankrupt a decade later after failing to find a market for its expensive and underpowered innovation, the Connection Machine.
- This was followed by various expert-system efforts, each claiming to do one specific thing well: one-trick ponies that traded stocks, set ticket prices or planned capacity utilization, leading mostly to more disappointment but showing flickers of hope along the way.
- Then came the tantalizing rise of machine learning, deep learning and natural language processing, giving us chess- and Go-playing masterminds, phone assistants and call-center front ends that weren’t good for much else.
- And now we have Generative AI, where the best large language models (“LLMs”) can produce text, images and video in multiple languages, opening up a vast array of applications.
- Generative AI should not be confused with Artificial General Intelligence (“AGI”), the holy grail, where software can do everything a human brain can do and more (possibly just over the horizon; some experts contend it will arrive within five to ten years).
AGI is bound up with what the cognoscenti sometimes call the Singularity, the point at which AI potentially becomes untethered from human control, demonstrates independent executive action and “chooses” to leave our messy biological selves behind by turning on and destroying mankind. There is no evidence of anything like that happening yet, but we’ll want to keep a close eye on this wholesale disaster possibility. As they say in Silicon Valley, we can compute intelligence but not consciousness. To be clear, we aren’t overly worried about this admittedly scary scenario, because to date researchers have been exceedingly cautious about granting the technology autonomy or permitting unbridled recursive self-improvement, while working to keep the models “aligned” with responsible moral values and human safety. But the possibility merits careful monitoring and constant vigilance, especially because some of the most capable AIs are now broadly available as open source.
The evolution of AI has taken us away from highly procedural programs, where we tell the computer to do this, and if that happens, do that. These are giant, entirely deterministic, logical constructs. At scale, such programs can run to hundreds of thousands or millions of lines of code, and they live in our world as the large enterprise applications typically developed and maintained by companies like SAP, Oracle or Salesforce.
In contrast, the new generation of LLMs comprises remarkably predictive, wildly complex software systems capable of learning without being explicitly programmed. They do this by ingesting billions and even trillions of training examples, breaking that material into small units called “tokens,” and encoding the statistical relationships among those tokens in billions of numerical parameters. Their magic is that they are “generative,” “pre-trained” and built on the “transformer” architecture, hence GPT. The trick is to connect the parameters in as many logical ways as possible so that the machine can later assemble and display those connections in the form of elaborate “predictions” we users experience as written prose, poetry, computer code, mathematical expressions, game solutions, photos, animations and videos.
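For readers who like to see the mechanics, here is a deliberately tiny, hedged sketch in Python of the core “predict the next token” loop: a toy model that counts which word tends to follow which in a small sample text and then generates a continuation one token at a time. Real LLMs use sub-word tokens, billions of learned parameters and transformer attention rather than raw counts; the sample text and function names below are purely illustrative.

```python
from collections import Counter, defaultdict
import random

# Toy corpus; real models train on trillions of tokens from far larger sources.
corpus = (
    "the model reads text and predicts the next token "
    "the model learns patterns and predicts the next word"
)

# "Tokenize" naively by splitting on whitespace (real systems use sub-word tokenizers).
tokens = corpus.split()

# Count, for each token, which tokens tend to follow it. These counts stand in,
# very loosely, for the statistical associations an LLM stores in its parameters.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    counts = following.get(token)
    if not counts:
        return random.choice(tokens)  # fall back if the token was never seen
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights, k=1)[0]

def generate(start: str, length: int = 8) -> str:
    """Generate text one predicted token at a time, feeding each prediction back in."""
    output = [start]
    for _ in range(length):
        output.append(predict_next(output[-1]))
    return " ".join(output)

print(generate("the"))
```

The point of the sketch is simply that “generation” is repeated prediction: scale the statistics up by many orders of magnitude and add the transformer architecture, and you get the fluent outputs described above.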
Stepping back from all the acronyms and tech talk, AI is basically about computers performing intelligent tasks typically done by humans. Or as we recommend you think about it, super useful computing. It may be easiest and most helpful to consider AI as a new way to interface with the most advanced computing available. At their deep cores, these systems are still a pile of zeros and ones, enabled by hardware, software and communications technologies. But what’s different is how users interact. We have progressed through tape, punch cards, command lines, graphical user interfaces, search and social media. Now we have arrived at a point where the models can react to prompts and natural language questions with equally intelligible unstructured answers and outputs. They can converse, and in this sense, an AI might be thought of as a digital person, a new kind of hyper-helpful being, more life form than machine.
Looking ahead, you might conceive of AI coming into your life or workplace as an instructor, coach, consultant, expert advisor, or super-capable coworker — a supportive friend or intelligent colleague. We are all used to the idea of automating physical tasks. Think turbines, copy machines, chain saws and tractors. Now we have to wrap our minds around the much bigger and less familiar idea of automating and dramatically enhancing intelligence — power tools for our brains.
THE REVOLUTION IS NOW INEVITABLE
Why is this all happening now? It turns out there are important precursor phenomena that have laid the groundwork for the AI conflagration. Think of these elements as dry digital tinder on the forest floor:
- Substantially different ways to interact with computers, spanning both access and output;
- More data being created and stored — approximately 12X more in just the last decade;
- Highly customized, dedicated chips for training and running the models, mostly from Nvidia, though dozens of other competitors are waiting in the wings;
- Overlapping high-capacity broadband communication networks; and
- Vastly improved new capabilities around generative AI, as described above.
With our investor, business and public policy hats on, that’s all we really need to know about the technology foundation. However, people often overlook three major, increasingly visible social trends that are critical to understanding this breakout moment:
- Employees have abandoned the office. Just in the US, approximately 90 million people are working from home at least one day a week. And this doesn’t include folks traveling on business. This massive shift in workplace location means network topologies and the associated security apparatus for commercial data need to be different[3].
- Computers are also leaving the office. Cloud-based architectures are relentlessly gaining ground, growing approximately 20% annually while on-premises compute is flat. Our core digital infrastructure is consolidating toward the center, the same progression we saw with water, sewage, gas and electricity networks, so a completely natural evolution.
- People are doing something different with their computers and other intelligent devices like phones and cars. (The typical car has 1400 chips and electronics comprise approximately 40% of total manufacturing cost.) Computing is no longer just about complex accounting, Word docs, Excel spreadsheets and PowerPoint.
We all have more devices, produce more data, and are communicating over more overlapping, higher capacity networks. It all means more complexity. The only way to handle the tsunami of information is with AI-based techniques. In fact, the trend is inevitable.
We think of the coming boom in AI driven productivity as a new Agricultural or Industrial Revolution. But instead of tractors and combines, mechanical looms and railroads, we have extremely capable software for knowledge workers. We finally have the prospect of more productive lawyers, software engineers, researchers, investors and even academics and civil servants. In particular, we expect it will make good workers as productive as the very best employees and open up vast new fields for automation.
Here are two examples to make things more concrete:
- Example 1: GitHub Copilot, an AI-based programming tool from Microsoft, makes production software coders dramatically more productive. It costs $30/month and delivers a 25-100% productivity gain, while markedly improving job satisfaction and reducing fatigue. It’s a huge gain, but interestingly it doesn’t replace the person, though the product does likely dampen aggregate demand for the skill set. (A quick break-even sketch follows this list.)
- Example 2: The Boston Consulting Group (“BCG”), in collaboration with Harvard Business School, conducted a study on AI's impact on productivity, specifically using GPT-4 for knowledge work. The study involved 758 BCG consultants who were assigned 18 realistic consulting tasks, comparing their performance with and without AI assistance. The tool didn’t make the best consultants much better but radically improved average performance — again, made good workers more like your best employees. Remarkably, they also learned the AI was so easy and intuitive to use that it didn’t really matter whether the consultants got any training.
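To put the Copilot-style economics in perspective, here is the rough, hedged break-even sketch promised above. The $30/month price and the 25-100% productivity range come from Example 1; the fully loaded developer cost is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope ROI for an AI coding assistant (illustrative assumptions only).

tool_cost_per_year = 30 * 12          # $30/month subscription, per Example 1 above
developer_cost_per_year = 150_000     # assumed fully loaded annual cost of one developer

for productivity_gain in (0.25, 0.50, 1.00):  # 25%, 50%, 100% gains cited above
    value_of_gain = developer_cost_per_year * productivity_gain  # extra output, valued at cost
    roi_multiple = value_of_gain / tool_cost_per_year
    print(f"{productivity_gain:>4.0%} gain -> ~${value_of_gain:,.0f} of value, "
          f"roughly {roi_multiple:,.0f}x the ${tool_cost_per_year:,} annual tool cost")
```

Even at the low end of the range, and even if the assumed salary is off by half, the subscription pays for itself many times over, which helps explain why adoption has been so rapid.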
So why the hype, maybe even hyperventilation? Aren’t these AI models just another bucket of code, more software from the rapacious Big Tech complex? It turns out AI isn’t just a faster processor, cheaper memory, more bandwidth or a better camera on your phone. It’s a qualitatively different phenomenon.
In particular, the latest generation of large language models has four unique features. The models are:
- General: They have been trained on vast data sets so they have broad knowledge, sport expansive subject matter expertise and can be applied almost everywhere. It’s hard to think of an unaffected skill area or economic domain.
- Hyper-evolving: The models are innovation explosions, improving their capabilities at an incredible pace, backed by massive spending. The whole sector is moving faster than chips ever did under Moore’s law, which doubled transistor density roughly every two years. For comparison, GPT-3.5 launched in 2022 with 175 billion parameters, GPT-4 arrived last year with a speculated one trillion or more, and GPT-5 is coming soon with a rumored 40 trillion or more (a quick growth-rate comparison follows this list).
- Plummeting in price while improving ease of use to the point you don’t know you are using it. The technophiles call it “ambient computing” — where the capability will be all around you all the time, embedded in all the devices you typically use, carry or drive. And lastly…
- Autonomous. AI can be configured not to need human oversight (cars, surveillance monitoring, and of course weapons — all fascinating subjects for another day).
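To see how the parameter trajectory described under “Hyper-evolving” compares with Moore’s law, here is the small growth-rate comparison referenced above. It is a hedged sketch: the GPT-4 and GPT-5 parameter counts are speculation and rumor, as noted, and the GPT-5 date is a placeholder assumption.

```python
# Compare the implied annual growth in model size with Moore's law (illustrative figures).
milestones = [
    ("GPT-3.5", 2022, 175e9),   # 175 billion parameters, per the list above
    ("GPT-4",   2023, 1e12),    # speculated one trillion or more
    ("GPT-5",   2025, 40e12),   # rumored 40 trillion or more; date is an assumption
]

moore_annual = 2 ** (1 / 2)  # density doubling every two years is ~1.41x per year
print(f"Moore's law benchmark: ~{moore_annual:.2f}x per year")

for (name_a, year_a, size_a), (name_b, year_b, size_b) in zip(milestones, milestones[1:]):
    years = year_b - year_a
    annual = (size_b / size_a) ** (1 / years)
    print(f"{name_a} -> {name_b}: {size_b / size_a:.1f}x over {years} year(s), "
          f"~{annual:.1f}x per year")
```

Even with generous error bars on the rumored figures, the implied growth rate is several times faster than the semiconductor benchmark.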
THE END OF THE BEGINNING
For now, it’s enough to say that the changes will be profound, BUT they will come slower than all the hype might suggest. Technology can move quickly, but human behavior, habits and cultural practices evolve far more slowly. For this reason:
- Big tech changes always take longer than you think: smartphones have been around for 20 years, the internet for 30, PCs for 40 and microprocessors for 50. So less happens in the short term, but these incremental changes accrete, like interest on a savings account, and compound over time. People consistently overestimate how much will change in one year and underestimate how much will change in ten. Case in point: after 15 years of hype and failed promises, driverless taxis are finally a reality, with Waymo’s fleet of retrofitted Jaguars quietly scaling up to 100,000 rides per week in select US cities[4].
- Consider that, right now, the only companies making money on AI are chip vendors like Nvidia and cloud computing providers such as Amazon, Microsoft and Oracle that are supporting massive model creation and training efforts. Think of these companies as AI infrastructure players selling picks and shovels for the AI gold rush. As a point of interest, Nvidia crossed $3 trillion in market capitalization in early June and has since become the most valuable company in the world at approximately $3.6 trillion, surpassing better-known names like Apple, Microsoft and Amazon.
Here’s what we know for sure. We are going to have:
- An AI investment mania. (It’s already started. Strategics are pouring resources into the space, investing $117 billion in 2022 and $81 billion in 2023. Not to be outdone, VCs committed $103 billion globally to the category in 2022 and over $95 billion in 2023[5]. Within the US, AI and machine learning-focused businesses have represented nearly half of all venture capital funding and are already producing at least a dozen unicorns.)
- Crazy promises and some fear-mongering (think domestic robots, flying-car predictions and the ominous sci-fi warnings of our youth);
- Bad behavior and fraud (coming soon, no doubt: investment scams akin to FTX and Binance in the crypto world);
- Some disappointing early results and generative garbage (autonomous car crashes, racist pictures on the internet, deepfakes and election meddling);
- Eventual triumph of some obviously useful applications (elimination of huge swaths of white-collar labor and creative professional tasks);
- Establishment of successful business models, separate from the infrastructure buildout now underway; and
- Widespread dissemination of easier to use and less expensive incarnations.
Take ChatGPT as the most visible exemplar. The product currently has over 200 million weekly active users on the company’s free version, easily the fastest software application ramp in history. OpenAI is monetizing this groundswell of interest by offering subscriptions to more advanced versions, starting at $20/month for individual subscribers. It is also working closely with Microsoft, Apple and a long list of other companies to share its technology on an OEM basis, where the capability is embedded in other offerings.
ChatGPT is a particular instance of a hugely capable large language model, which itself is a fancy objective-seeking, artfully constrained neural network that has been trained on multiple very large data sets to compose text, draw pictures, translate languages, solve mathematical problems, write code, play games and generally astound you with human-like intelligence and seeming creativity.
THE COMMERCIAL FRONTIER
But the truth is these competing models are converging on similar capability levels, including the open-source offerings, most notably LLaMA from Meta. So it’s not clear there is a single knock-out winner here. The technology is maturing quickly, and the next frontier will likely be about who can train and run their models more efficiently, which is to say more cheaply. Right now, the dirty and not-so-secret issue the leading LLM products share is that they cost a fortune to build and are expensive to run. Every time you make an AI query, you are consuming roughly ten times the energy of a Google search, whizzing through all those sprawling data centers springing up around the world.
For this reason, we may see a boom in small language models (SLMs), which are just what they sound like: substantially smaller by design, comprising fewer parameters and trained on smaller, more focused data sets. This narrowcast approach makes them much cheaper to create and operate than the broad-scope, multi-function, multimedia, multilingual LLMs. Think of an SLM as a digital appliance that does one thing well, maybe reading X-ray images, drafting news copy or searching for promising new drugs: very useful but highly circumscribed activities. An SLM won’t be as broadly capable as something like ChatGPT, but it also won’t need nearly as much compute, memory or communication bandwidth to function, so it can be more easily bundled with a phone, laptop, car, toy or any other dedicated piece of hardware.
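As a rough, hedged illustration of why smaller models are so much easier to bundle with a phone or laptop, the sketch below estimates memory footprints from parameter count alone. The specific model sizes and the two-bytes-per-parameter (16-bit) assumption are illustrative; real deployments also quantize weights further and need additional memory for activations and context.

```python
def weights_memory_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Rough memory needed just to hold the model weights (16-bit precision by default)."""
    return num_parameters * bytes_per_parameter / 1e9

# Illustrative sizes: a phone-friendly SLM, a mid-size open model, a frontier-scale LLM.
for name, params in [
    ("3B-parameter SLM", 3e9),
    ("70B-parameter open model", 70e9),
    ("1T-parameter frontier LLM", 1e12),
]:
    print(f"{name}: ~{weights_memory_gb(params):,.0f} GB for the weights alone")
```

A few gigabytes fits comfortably on a modern phone or laptop; a couple of terabytes does not, which is the practical difference between an AI appliance and a data-center service.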
ChatGPT hype notwithstanding, we doubt there is one killer app: no hero product that tastes like chocolate and cures cancer, no all-singing, all-dancing AI. Rather, we anticipate there will be thousands of small applications seeping into every part of the economy, like a rising tide of computing capability. Or think of the future as being about many AI appliances, smaller targeted applications great at doing one thing well: matching risk profiles to investment portfolios, writing computer code, navigating in traffic, diagnosing skin cancer, or cranking out NDAs for lawyers.
There is a very good chance that in the final analysis, AI is a feature, not a product, and certainly not a single dominant company. Think back to the optical fiber communications boom around the original internet buildout. Or the object-oriented programming explosion. Both were the same kind of thing, where huge hype and media attention around a technology capability ended up being overblown. The capabilities were real and eventually made their way into the fabric of our economy, but not in the form of one or two overvalued companies trading for 30X revenue.
One final thing worth noting is the current mismatch between pilots and actual field deployments at scale, what we’d call enterprise-level rollouts. There is a tremendous amount of experimenting and learning going on, but not many production-quality, broad-based commercial deployments quite yet. We are hearing lots of talk and seeing plenty of people sampling the goods, but so far few companies have made wholesale commitments to transform their business models with AI. The story, instead, has been about individuals unilaterally choosing to adopt the technology and transform themselves into more capable employees, educators, students, authors and artists. Said another way, the commercial reality so far has been IT departments studying and running trials while employees sign up in droves and use the products without telling anyone.
WHAT ABOUT THE DATA?
It’s critical to realize that the models, whatever they may be and wherever they may have originated, still need data to operate. AI is a binary weapon: software model plus data set. It’s an oversimplification, but not far from the truth, to say AI is like a restaurant: you can’t run it with just recipes; you need actual ingredients to make the food. All AI models need data for training and then, typically, updated and current information to run, because data is almost always dynamic and perishable.
And in exactly the same way that the rise of streaming networks made digital content owned by professional sports franchises more valuable, we believe the next generation AI models will make high quality data sets more valuable — everything from DNA profiles of human populations, to geospatial data, to purchase histories, to mobility information. Right now, we are all talking about the models, but we suspect the real story a decade from today will be about how these AI models valorized the digital information all around us, including inside your companies.
Recent history gives us a hint about how the future will unfold. The whole social media phenomenon runs on attention and advertising, where billions of people have, mostly unwittingly, handed over their personal information (pictures, emails, location, shopping histories, likes, dislikes, friends, family relations, whole digital identities) in return for some convenience and community. Leave aside whether that was a good trade; today, Alphabet, Meta, Amazon, Microsoft, Alibaba, Baidu and Tencent all have huge consumer datasets, almost entirely assembled for free. It’s no coincidence that many of these data-rich companies are leading the AI arms race.
The corporate world won’t be so easily duped. Most companies already have prohibitions against uploading corporate data and some of the most enlightened firms are treating proprietary data sets as crown jewels — intellectual property likely to be made vastly more valuable in an AI denominated world. Enterprise applications in the future will be trained on deeper data sets — think specialization around finance or healthcare, and then further enhanced with information about a company’s internal processes, people, products and customers.
Abstracting away from the technical mumbo jumbo, these are new software platforms which are hugely important and profoundly impactful. They will change the way we work, create, communicate and operate our economy. AI is not quite fire or electricity, but we believe the mature wave of new models will be more important than PCs and social media, as a technology maybe on par with the advent of semiconductors and the internet. That’s another way of saying HUGE, the dawn of a more sentient age.
HOW WILL IT ARRIVE?
The technology will largely enter your life quietly, as a feature in products and services you already use or as an agent to do your virtual bidding. It will seep into everyday usage. The majority of white-collar workers in the US are already using AI today, many without realizing it. And that proportion will inevitably rise as utility improves and costs drop. Think of AI like ambient music in the background everywhere you go. If you listen, you’ll hear it in airports, taxis, grocery stores, and waiting rooms. AI will be the same.
We suspect the creation of these LLM-scale platform technologies will be done by the very largest (or at least best-funded) corporations, and worldwide by state actors, because it is hugely expensive and requires enormous compute resources plus access to vast data sets. Think of AI platforms built on LLMs as nuclear power stations: complex, expensive and, in their own way, dangerous. The models will need to be managed as if there were some risk of harmful radiation, or even a dangerous meltdown.
Today these massively capable software power stations are coming online and what we now need are electrical engineers to wire buildings, make useful gizmos, and install and integrate AI appliances. We need the software equivalent of electricians, application engineers trained to implement the technology in real business situations. This evolution has created explosive demand for “prompt engineers,” people who are skilled at interrogating the AI models, asking the right questions and posing the most relevant queries.
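As a small, hedged illustration of what that application-engineering work looks like, here is a sketch of a reusable prompt template in Python. The role/context/task/format structure reflects common prompt-engineering practice rather than any particular vendor’s API, and the example values are hypothetical; feeding the resulting string to a model of your choice is the integration step.

```python
from textwrap import dedent

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt: who the model should act as, what it knows,
    what it should do, and how the answer should be laid out."""
    return dedent(f"""\
        You are {role}.

        Context:
        {context}

        Task:
        {task}

        Respond in the following format:
        {output_format}
        """)

# Hypothetical usage: a back-office legal workflow, not a real deployment.
prompt = build_prompt(
    role="an experienced commercial contracts paralegal",
    context="Our standard NDA template and the counterparty's requested changes are provided below.",
    task="Summarize the requested changes and flag any that deviate from our standard terms.",
    output_format="A numbered list, one sentence per change, each with a low/medium/high risk rating.",
)
print(prompt)
```

The discipline is less about clever wording than about consistently supplying the role, the relevant context and the desired output shape, which is exactly the kind of skill the new “prompt engineers” are being hired for.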
In conclusion, we would underscore three things:
- First, the battle over platforms is probably over, won by a handful of established companies and a few extraordinarily well-funded start-ups. The next chapter of the story will be about how well we integrate and apply the technology. Don’t get starstruck: AI is just another digital power tool, albeit an amazingly capable one.
- Second, it will take a very long time to roll out, but it’s good to get started. We’d emphasize it’s incredibly easy to dabble, and there’s probably lots of low-hanging fruit to pick in your own organization. Why not give it a whirl?
- And lastly, remember, in the end it’s all about the data, especially your proprietary information.
Here’s a final thought: imagine a beautiful, AI-created, photorealistic digital video rendering of rain, complete with the sound of thunder and the pitter-patter of raindrops. No matter how amazing it looks, that digital simulation won’t make you wet. Take some comfort in knowing AI won’t be making lunch, making friends, or making babies any time soon. But everything else is up for grabs.
FOOTNOTES
Data herein captured as of November 20, 2024, and will not be updated in the future. BayPine has not independently verified 3rd party data for accuracy.
[1] Thanks to the work of individuals such as Mustafa Suleyman and Ethan Mollick in their respective books, The Coming Wave and Co-Intelligence: Living and Working with AI.
[2] Reddit (ticker: RDDT) announced its partnership with OpenAI on May 17, 2024. Stock price growth is measured between May 17 and September 30, 2024.
[3] McKinsey & Company, the American Opportunity Survey. https://www.mckinsey.com/industries/real-estate/our-insights/americans-are-embracing-flexible-work-and-they-want-more-of-it#/
[4] https://singularityhub.com/2024/09/05/waymo-robotaxis-are-giving-100000-rides-a-week-itll-soon-be-more/
[5] Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.