Brief thoughts - What will the AI bubble burst mean for rank-and-file tech employees?
At this point, most of us believe the AI bubble is going to burst, and there are more than enough articles making the case (my favourite).
So rather than “will it burst” or “when will it burst,” let’s think about what it will mean for us in practice.
As a Data Scientist who uses AI heavily, and an ex-virtually-everything-else-in-the-tech-world (engineering manager, product manager, software engineer, electrical engineer), I have a few thoughts.
Obviously, generative AI is here to stay. But here are some changes we can expect - some due to the burst itself, some simply with time as the landscape evolves:
- Firms that have been investing hundreds of billions in AI will come knocking on our doors for their money, and we’ll feel the strain. Not only through the increased cost of any platform that has seen AI investment, but…
- The cost of state-of-the-art foundation models will rise sharply, and companies like OpenAI and Anthropic will brag (and mislead) about why you should be using their models. Meanwhile, out-of-date, slower, or slightly less accurate models will be available at a fraction of the cost (as is already the case - only more so). The choice of which model to use will be up to you.
- This will be a harbinger of all of the following:
- Companies will soon realize that a well-designed user experience is worth much more to a user (and in model costs) than a chatbot with some API calls slapped on top of their existing services. We’ll see a shift away from CEO- and investor-led application of AI to everything, towards using models for specific purposes: where a human-language interface adds obvious value, offsetting rising foundation-model costs.
- On that note, AI built into browsers will soon be good enough to interact with web pages directly. Chatbots will become less useful as browsers like Chrome translate our text instructions into web-page actions, so we’ll shift from designing our own language interfaces to designing well-documented, well-thought-out user experiences.
- Documentation is already everything, and it’s only going to become more so. Every model and every line of code we build is twice as effective if it’s well documented - if an LLM can understand it, its caveats, and its use cases, we might never have to drive it manually again. More of our job will be spent writing documentation (NOT entirely auto-generated. Manually!) to reap the rewards down the line. (There’s a small example of what that can look like at the end of this post.)
- A lot of tech professionals’ work will be about choosing between AI models based on a price/performance tradeoff. Some of us will probably outsource that choice to AI aggregator services that pick models against real-time price and performance requirements, abstracting the problem away for us. (A rough sketch of what that choice looks like in code is at the end of this post.)
- As is already somewhat the case, a huge skill for tech professionals will be knowing when and how to use AI versus when to build things yourself. For instance, right now AI can’t write my most complex queries - I write them myself, test them, and then use AI to run them over different segments and interpret the results. Another way of looking at it: AI will be able to do the easy 80% of your job, but the hardest 20% - the real innovation - will still be up to you. As a result, we’ll continue hiring mostly seniors and above. AI will be our satisficing tool, while optimising will still require real thought and work.
- Employment and compensation will shift in the tech industry. Right now we’re on a hiring spree for AI engineers, but hiring will shift back towards product managers, developers, and data scientists. Those who can translate well between business requirements, technical requirements, and AI models will thrive.
- With the dead internet theory coming to fruition, training AI on genuinely human-generated content is going to be crucial - and tagging that content equally so. Not just for the foundation models, but for our own RAG and fine-tuned models. If we don’t do this, we’re going to have hallucinations on top of hallucinations. (A minimal tagging example is also at the end of this post.)
- Along the same lines, we’ll start to pay the price of our early, haphazard AI usage. Low-quality AI-generated queries and code already make up a large portion of production codebases - which is fine while it works - but once the technical debt comes due, we’ll be paying it back with interest.
- Speculative shareholder investment will shift away from foundational AI infrastructure and model companies, and back towards non-tech companies. Likewise, a lot of our jobs (or at least the high-paying ones) will shift away from the Magnificent Seven and similar companies towards more specific domains.
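To make the documentation point concrete, here’s a minimal sketch in Python. The function, its fields, and its behaviour are invented for illustration; the point is that the docstring carries the intent, the caveats, and the intended use cases, so an LLM (or a new teammate) can decide whether and how to call it without reading the body.

```python
from datetime import timedelta

def weekly_active_users(events, week_start, exclude_internal=True):
    """Count distinct users with at least one event in the week starting at `week_start`.

    Intended use: reporting and dashboards, not billing.

    Caveats an LLM (or a new teammate) needs before reusing this:
    - A "week" is the 7 days from `week_start` (inclusive); timestamps are assumed UTC.
    - Events without a `user_id` are dropped silently.
    - With `exclude_internal=True`, users flagged `is_internal` are ignored,
      so results will not match raw event counts.
    """
    week_end = week_start + timedelta(days=7)
    users = {
        e["user_id"]
        for e in events
        if e.get("user_id") is not None
        and week_start <= e["timestamp"] < week_end
        and not (exclude_internal and e.get("is_internal", False))
    }
    return len(users)
```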
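And here’s a rough sketch of the price/performance choice. The model names, prices, quality scores, and latencies below are all made up; in practice they would come from live pricing and your own eval runs, or from an aggregator service that does exactly this on your behalf.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # USD, hypothetical pricing
    quality: float             # e.g. score on our own eval set, 0-1
    p95_latency_s: float       # observed 95th-percentile latency

# Hypothetical catalogue - in reality, refreshed from pricing pages and eval runs.
CATALOGUE = [
    ModelOption("frontier-large", 0.060, 0.93, 8.0),
    ModelOption("last-gen-medium", 0.012, 0.88, 3.5),
    ModelOption("small-distilled", 0.002, 0.79, 1.2),
]

def pick_model(min_quality: float, max_latency_s: float) -> ModelOption:
    """Return the cheapest model that meets the quality and latency floor for a task."""
    candidates = [
        m for m in CATALOGUE
        if m.quality >= min_quality and m.p95_latency_s <= max_latency_s
    ]
    if not candidates:
        raise ValueError("No model meets the requirements; relax them or pay up.")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

# A user-facing summary tolerates a weaker model; a compliance report does not.
print(pick_model(min_quality=0.85, max_latency_s=5.0).name)   # last-gen-medium
print(pick_model(min_quality=0.90, max_latency_s=10.0).name)  # frontier-large
```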
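Finally, a tiny illustration of what tagging content for our own RAG pipelines can look like. The field names and records are invented; the idea is simply that provenance gets recorded at ingestion time and checked before anything reaches the model’s context.

```python
# Hypothetical document records for a RAG index - tagged at ingestion, not after the fact.
docs = [
    {"id": "runbook-42", "text": "...", "origin": "human",        "reviewed": True},
    {"id": "wiki-107",   "text": "...", "origin": "ai_generated", "reviewed": False},
    {"id": "rfc-9",      "text": "...", "origin": "human",        "reviewed": True},
]

def retrievable(doc, allow_ai_generated=False):
    """Decide whether a document may feed the model's context.

    AI-generated text is excluded unless it has been reviewed and the caller
    explicitly opts in, so we don't build answers (and future fine-tunes)
    on top of earlier hallucinations.
    """
    if doc["origin"] == "ai_generated" and not doc["reviewed"]:
        return False
    if doc["origin"] == "ai_generated" and not allow_ai_generated:
        return False
    return True

index_candidates = [d for d in docs if retrievable(d)]
print([d["id"] for d in index_candidates])  # ['runbook-42', 'rfc-9']
```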