I use it every day and I don’t see it overcoming fundamental limitations on anything with stakes higher than creative license without a lot of customization for each separate task. That means making the transition from suggestion, to human in the loop, to full human replacement scale will require a very large effort: it will be expensive and time consuming, and it will require engineers to build out a ton of infrastructure that simply doesn’t exist in any scalable or generalized form today. So I guess my job is safe while I automate everyone else’s away : /
Meanwhile they will pursue AGI and try to swallow up all of the IP developed in the specialization phase as training data to create a superintelligence. Given the fundamental limitations, I’m not sure this is actually possible. People see geometric growth in capability, but I see a logarithmic curve approaching a limit. When there is no more human achievement to emulate, how will training occur?
There is another aspect to this that’s a little more fun to think about. There’s a high likelihood that we are actually living in a simulation, and if that’s the case there’s something fundamentally limiting about the nature of the substrate we live in and the rules we live under. When I learned the fundamental constants of math and physics and studied religion, it occurred to me that those constants are a sort of rule book for this dimension, dictating the conditions for matter to condense around waves. If it is indeed a simulation, though, does that make it easier or harder to understand the relationship of those fundamentals to the world we live in? And what does that mean for AI? Is a simulation easier to understand, or harder? I remember reading something to the effect that any intelligence advanced enough to create such a simulation is too complex for us to ever understand. Since AI is of this world, does that limit apply to it as well? Or will AI crack the code driving the simulation and lead us out?
Now this all makes me think of Westworld. If AGI is possible and humans can be replaced, does that mean AI will predict our every thought and move based on what must be quadrillions or more of parameters? In the show they made the scope of those parameters small, as if we are simple automatons. What if that is true? AI should be able to crack it.
At most it will revolutionise things the way the Industrial Revolution did. But it can’t change the rules of reality and allow us all to sit around doing nothing productive while enjoying our UBI. All AI basically does is automate data gathering, in the same way machines 200-300 years ago automated a lot of manual labour. Some predicted then that it would lead to mass unemployment, and it certainly did in some professions, but overall employment increased; just the nature of productive work changed. AI will, at most, do the same.
UBI is definitely the way forward as we move towards an increasingly automated world, but I've been wondering for a while now about the stagnancy of human consciousness relative to evolutionary principles, i.e. why do texts and criticism about human nature from 5,000-6,000 years ago still ring true and find relevance in today's world, in spite of the many tools (religious, spiritual, psychological) we created to improve it? Some optimists believe AI would give humans room to pursue creative and more fulfilling activities, but I wonder if humans really are made for something more. If so, why have we struggled to achieve it in all these years? The best of us have to flee to the mountains to seek refuge (monasteries) and practice for years to cultivate the internal harmony or transcendent consciousness we idealise.
Nick Bostrom's simulation hypothesis, or rather the many permutations of it that have sprung up all over the place, is likely to become more popular in the coming years. Technology will become more of an ideological component in new belief systems and religious movements, as opposed to simply being a medium for the transmission of ideas, as it has been until now.
That is already underway. The Machine God wears many faces today, and will have more grafted on as the technology continues to advance. The so-called "God Helmet," which was created to test certain hypotheses about how the brain works while people undergo religious experiences, is an early example of this that predated AI by over a decade.
Now, the term "Machine God" (which I borrow for my purposes here from the Warhammer 40k franchise) is becoming applicable in a more and more literal sense.
https://www.theguardian.com/technology/2024/nov/21/deus-in-machina-swiss-church-installs-ai-powered-jesus