2024 and the Danger of the Exponential AI Wave
It’s time to stop looking back at 2023 and look ahead to 2024. It’s a scary world out there, and AI is ramping up. It isn’t clear whether it’s going to be the most amazing technology we’ve ever seen or the most disastrous. As a massive force multiplier, will it be used for good or to do harm? Probably more the latter than the former because, like any tool, AI responds to the will of whoever uses it, and a lot of folks with access to this technology, too many, don’t have the world’s best interest at heart.
This was driven home at last month’s meeting in New York with HP’s Wolf Security division. The division is unique in the IT space in that it focuses exclusively on anticipating future threats and producing ways to mitigate them. It’s responsible for the distinctive security engineering of HP’s enterprise PCs and printers, and it is monitoring a significant and concerning increase in AI-driven threat activity. So let’s look ahead to 2024 and talk about why we should be far more worried than we are.
AI is advancing very quickly, almost unbelievably quickly, but that advance is still largely linear. What will change in 2024 is that AI will increasingly be used to advance itself. As AI gets more capable, it will speed up its own development, and that more advanced AI will in turn be used to speed the next iteration. This growth will shift from linear to exponential because each advancement will increasingly accelerate the one that follows.
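The difference between linear and exponential (compounding) improvement is easy to understate. A toy numerical sketch makes the gap concrete; the numbers here are purely illustrative, not a model of actual AI progress.

```python
# Toy comparison: linear vs. compounding improvement.
# All values are illustrative only.

def linear_progress(start, step, generations):
    """Each generation adds a fixed amount of capability."""
    capability = start
    for _ in range(generations):
        capability += step
    return capability

def compounding_progress(start, rate, generations):
    """Each generation multiplies capability: a better system
    helps build an even better successor."""
    capability = start
    for _ in range(generations):
        capability *= rate
    return capability

print(linear_progress(1.0, 1.0, 10))       # 11.0
print(compounding_progress(1.0, 2.0, 10))  # 1024.0
```

After ten hypothetical generations, the linear process has improved elevenfold while the compounding one has improved a thousandfold; that gap, not the absolute numbers, is the point of the argument above.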
One indication of this is the anticipated release of Windows 12, which had been expected sometime around 2030 but is now speculated to arrive next year. True or not, that speculation highlights what this technology is beginning to make possible.
Now, product companies like Microsoft know that the market can’t consume new products faster than annually, but malware has no such limitation: it can be cycled continuously. In fact, you could create an AI loop in which a virus evolves on its own until its core code advances to a level that no existing technology can stop. At that point, it seems unlikely that those who created this monster, whether on purpose or by accident, could control it.
Just as there is no limit on how rapidly malware can advance, there is no limit on how quickly defensive technology can advance either. Fortunately for us, there are folks like HP’s Wolf Security division working on that defense, and I’m sure government defense departments are also aware of this danger and working to mitigate it.
It is a race now, one I truly hope we can win, because the alternative will be ugly. These efforts give me hope, and I’ll share that hope with you: that those working to protect this world from AI weapons have the will, the funding and the support they need to succeed.
Because we could all use a little hope.
Generative AI is impressively powerful, and it has already accelerated advancements. I mentioned Windows, but more interesting is the prediction from NVIDIA CEO Jensen Huang, who recently said we are within five years of artificial general intelligence (AGI). That is the true AI game changer with regard to autonomous automation.
In short, while the advancements in 2023 have been amazing, we haven’t seen anything yet. The pace will only increase from here as we move into exponential AI advancement. I hope that those focused on creating AI for good are better, faster and more focused than their opponents, and that our New Year will be amazing.
About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.