Discussion about this post

Francisco d’Anconia:

I use it every day, and I don’t see it overcoming fundamental limitations in anything with stakes higher than creative work without a lot of customization for each separate task. That means scaling the transition from suggestion, to human-in-the-loop, to full human replacement will take a very large effort: it will be expensive and time-consuming, and will require engineers to build out a ton of infrastructure that simply doesn’t exist in any scalable or generalized form today. So I guess my job is safe while I automate everyone else’s away :/

Meanwhile they will pursue AGI and try to swallow up all of the IP developed in the specialization phase as training data to create a superintelligence. Given the fundamental limitations, I’m not sure this is actually possible. People see geometric growth in capability, but I see a log curve approaching a limit. When there is no more human achievement to emulate, how will training occur?

There is another aspect to this that’s a little more fun to think about. There’s a high likelihood that we are actually experiencing a simulation, and if that’s the case, there’s something fundamentally limiting about the nature of the substrate we live in and the rules we live under. When I learned the fundamental constants of math and physics and studied religion, it occurred to me that those constants were a sort of rule book for this dimension, dictating the conditions for the condensation of matter around waves. If it is indeed a simulation, though, does that make it easier or harder to understand the relationship of those fundamentals to the world we live in? And what does that mean for AI? Is a simulation easier to understand, or harder? I remember reading something like: any intelligence advanced enough to create such a simulation is too complex for us to ever understand. Since AI is of this world, does that limit apply fundamentally to AI as well? Or will AI crack the code driving the simulation and lead us out?

Now this all makes me think of Westworld. If AGI is possible and humans can be replaced, does that mean AI will predict our every thought and move based on what must be quadrillions or more of parameters? In the show they made the scope of the supposed parameters small, as if we are simple automatons. What if that is true? AI should be able to crack it.

Mark Tammett:

At most it will revolutionise things the way the Industrial Revolution did. But it can’t change the rules of reality and allow us all to sit around doing nothing productive while enjoying our UBI. All AI basically does is automate data gathering, in the same way machines 200-300 years ago automated a lot of manual labour. Some predicted then that it would lead to mass unemployment, and it certainly did in some professions, but overall employment increased; only the nature of productive work changed. AI at most will do the same.
