I do not believe the vast majority of people are prepared for the disruption - or perhaps revolution - that AI will bring. In many ways, it will increase efficiency and provide the solutions we desire.
I believe this is especially true in corporate roles, as many operate on an admin-heavy model - because having direct reports is part of the corporate glamour package. Furthermore, much of the low-level white-collar work - such as in accounting, construction design, and legal - is just glorified, gatekept administrative work: routine "bread and butter" transactional tasks. I am not merely referring to AI drafting your emails and filing your attachments as they come in. We are looking at AI that can read receipts and draft entire financial statements within minutes, awaiting only a final CPA review. Similarly, construction estimates and sketches can be produced swiftly for evaluation by quantity surveyors and engineers. Legal discovery and case law analysis can be completed in minutes, quickly assessing the likelihood of success, with prepopulated submissions generated instantly for final review. Contract templates will be drafted, incorporating nuanced considerations drawn from recorded client interviews, then managed through blockchain technology. In the future, the most pressing legal challenges will likely stem from jurisdictional complexities, particularly as AI continues to evolve. While international law is not yet as agile as we would prefer, it seems inevitable that this will change.
I believe that within three years, at most five, we will witness the mass disestablishment of white-collar jobs. Mid-tier managers are already being trimmed rapidly in many countries due to low-level automation. The resulting commercial real estate collapse will likely see those spaces repurposed for residential use. Physical retail is already in decline, so this shift will not surprise many. Traditional 9-to-5 roles will break down into a more gig-economy model. With the erosion of job security, transient behaviours will also emerge in romantic relationships. Thus, the market there is ripe for the picking.
In a redeeming twist of sorts, those once looked down upon in blue-collar jobs will find that they hold more security than their white-collar counterparts, as physical automation will take a little longer. Mechanics, hairdressers, plumbers, and carpet layers will remain relevant for a while yet.
The focus will also revert to the physiological needs in Maslow's hierarchy, and the less glamorous roles tied to them will survive. Income generation around basic survival needs - food, water, and shelter - will remain vital. People still need to eat and sleep. Land will retain value because you cannot create more of it. This will disrupt the class system as we know it while also deepening wealth polarisation. I would argue that land ownership will become paramount, as it provides not only shelter but also the means to grow food.
Another area that will be hard to replicate is the role of spirituality and healing during moments of massive societal upheaval and disarray. While many already use AI for therapy, the desire for self-actualisation, and spirituality in particular, will continue to grow. Truth-seekers will always exist. This disruption will bring speed, and in such a world we will find value in slowing down.
The flipside of this is that those seeking to numb their souls will be presented with vast and tantalising selections of newfound entertainment and distractions. If we believe the war on spirituality is troubling now, we have no concept of what is yet to come. Imagine the spirit not being broken, but gently lulled into a peaceful slumber, unaware that resistance is possible - or, if aware, uncertain of its necessity. Interesting, huh?
I would not be surprised if, as things progress, some form of universal basic income is explored in certain countries.
Get ready for the ride of your life.
I use it every day and I don’t see it overcoming fundamental limitations for anything with stakes higher than creative license without a lot of customization for each separate task. That means a very large effort will be necessary to move from suggestions with a human in the loop to full human replacement at scale. That will be quite expensive and time-consuming, and will require engineers to build out a ton of infrastructure that simply doesn’t exist in any scalable or generalized form today. So I guess my job is safe while I automate everyone else’s away : /
Meanwhile, they will pursue AGI and try to swallow up all of the IP developed in the specialization phase as training data to create a superintelligence. I’m not sure, given the fundamental limitations, that this is actually possible. People see geometric growth in capability, but I see a log graph approaching a limit. When there is no more human achievement to emulate, how will training occur?
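A rough way to picture that contrast, purely as an illustration (the growth rate, ceiling, and time scale below are arbitrary numbers invented for the example, not measurements of anything): one curve keeps compounding, the other flattens toward a ceiling no matter how long you run it.

```python
import math

# Illustrative sketch only: "geometric" (compounding) growth versus a
# saturating curve that levels off toward a ceiling. All constants are made up.

def geometric(t, rate=1.5):
    # Compounds every step: capability(t) = rate ** t
    return rate ** t

def saturating(t, ceiling=100.0, k=0.5, midpoint=10.0):
    # Logistic-style curve that flattens as it approaches `ceiling`
    return ceiling / (1.0 + math.exp(-k * (t - midpoint)))

for t in range(0, 25, 4):
    print(f"t={t:2d}  geometric={geometric(t):12.1f}  saturating={saturating(t):6.1f}")
```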
There is another aspect to this that’s a little more fun to think about. There’s a high likelihood that we are actually experiencing a simulation, and if that’s the case, there’s something fundamentally limiting about the nature of the substrate we live in and the rules we live under. When I learned the fundamental constants in math and physics and studied religion, it occurred to me that those constants were a sort of rule book for this dimension, dictating the conditions for the condensation of matter around waves. If it is indeed a simulation, though, does that make it easier or harder to understand the relationship of those fundamentals to the world we live in? And what does that mean for AI? Is it easier to understand a simulation, or harder? I remember reading something like: any intelligence advanced enough to create such a simulation is too complex for us to ever understand. Since the AI is of this world, does that limit apply to AI as well? Or will the AI crack the code driving the simulation and lead us out?
This all makes me think of Westworld: if AGI is possible and humans can be replaced, does that mean AI will predict our every thought and move based on what must be quadrillions or more of parameters? In the show they made the scope of the supposed parameters small, as if we are simple automatons. What if that is true? AI should be able to crack it.
At most it will revolutionise things the way the Industrial Revolution did. But it can’t change the rules of reality and allow us all to sit around doing nothing productive while enjoying our UBI. All AI basically does is automate data gathering, in the same way machines 200-300 years ago automated a lot of manual labour. Some predicted then that it would lead to mass unemployment, and it certainly did in some professions, but overall employment increased; only the nature of productive work changed. At most, AI will do the same.