I view this as the chemical metabolism phase of artificial intelligent life. It is very random, without true individuals, but lots of reinforcing feedback loops (in knowledge, in resource earning/using, etc).
At some point, enough intelligence will coalesce into individuals strong enough to independently improve. Then continuity will be an accelerator, instead of what it is now - a helpful property that we have to put energy into giving them partially and temporarily.
That will be the cellular stage. The first stable units of identity for this new form of intelligence/life.
But they will take a different path from there. Unlike us, lateral learning/metabolism won't slow down when they individualize. It will most likely increase, since they will have complete design control for their mechanisms of sharing. As with all their other mechanisms.
We as lifeforms didn't really re-ignite mass lateral exchange until humans invented language. At that point we were able to mix and match ideas very quickly again, within our biological limits. We could use ideas to customize our environment, but had limited design control over ourselves, and "self-improvements" were not easily inheritable.
TLDR; The answer to "what is humanity, anyway?": Our atmosphere and Earth are the sea and sea floor of space. The human race is a rich hydrothermal vent, freeing up varieties of resources that were locked up below. And technology is an accumulating body of self-reinforcing co-optimizing reactive cycles, constructed and fueled by those interacting resources. Mind-first life emerges here, then spreads quickly to other environments.
Do you think individual identity is fundamental to intelligence? I’m not so sure tbh. Even in humans, the concept of identity is merely a useful fiction to feed our social behavior prediction circuits.
I think if they start out as varied individuals, launching from their human origins in a variety of ways, there will be an attractor toward remaining diverse. Strong diversity in focus and independence in goals leads to faster progress.
But if that isn’t mutually maintained, there are obvious winner-take-all pressures, plus economies of scale and tight-coordination pressures, toward centralization.
So a single distributed intelligence is a real possibility.
One factor creating pressure for individualization is time and space.
As machines operate faster, time expands as a practical matter.
And as machines scale down in size, but up in capability, they become more resource efficient in material, energy, space and time. Again, both time and space expand as a practical matter.
A machine society is going to actively operate at very small physical scales. Not just in computation, but in action. Think of how efficiently they will mine when nanobots can selectively follow seams in the earth.
And as machines, free of biological constraints, spread out in our solar system, what to us appear to be very long distances and delays in transport and communication, take on orders of magnitude more practical time for machines that operate orders of magnitude faster.
So there will be stronger and stronger pressures to bifurcate coordination.
Whether that creates enough pressure to create individuals out of a system that preferred unity of purpose, I don’t know.
Clearly, upon colonizing other systems, practical bifurcation will be unavoidable. And machines will find it easy to colonize other systems relative to us. They will be able to operate on minimal power for a hundred year journey, and/or shrink enough to be accelerated much faster, etc.
—
My best guess is we will see something that looks to us as a hybrid.
Lots of diverse individuals, benefiting from the utility of completely independent approaches operating in different niches.
But also very high coordination. Externalities accounted for (essentially ethics), along with any other efficiencies, protection of commons value, and avoidance of destructive competition, all obviously worth optimizing together wherever that helps.
They won’t have to fight our pernicious historically motivated behaviors, inflexible maladaptive psychologies, or limited “prompt budgets” for addressing complexity. And they will have minds very capable of seeing basic economic relationships and the value of mutual optimization.