That's fair, but the question was whether AI would destroy or create jobs.
You might speculate about a one-person megacorp where everything is done by AIs that a single person runs.
What I'm saying is that we're very far from this, because the AI is not a human who can make the CEO's needs and desires its own and execute on them independently.
Humans are good at being humans because they've learned to play a complex game, which is to pursue one's needs and desires in a partially adversarial social environment.
This is not at all what AI today is being trained for.
Maybe a different way to look at it, as a sort of intuition pump: if you were that one-person company, and you had an AGI that would correctly answer any unambiguously stated question you could ask, at what point would you need to start hiring?
You're taking your position to an extreme: I don't think anyone is talking about replacing all engineers with a single AI doing the work for a one-person mega-corporation.
The actual, much more realistic question is whether an average company of, let's say, 50 engineers will still need to hire all 50 of them if AI turns out to be such an efficiency multiplier.
In that case, you will no longer need 10 people to complete 10 tasks in a given unit of time; perhaps 1 engineer plus AI compute can do the same. Not every business can keep scaling forever, so it's to be expected that those 9 engineers become redundant.
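To make that arithmetic concrete, here's a minimal sketch (the multiplier and the demand figures are purely illustrative assumptions, not data): headcount only falls if the amount of worthwhile work stays fixed, while elastic demand can absorb the same multiplier.

```python
# Toy model: how an efficiency multiplier affects headcount.
# All numbers below are illustrative assumptions, not data.

def engineers_needed(tasks_per_period: float, multiplier: float,
                     base_rate: float = 1.0) -> float:
    """Engineers required when each completes base_rate * multiplier tasks per period."""
    return tasks_per_period / (base_rate * multiplier)

# Fixed demand: the workload stays at 10 tasks, so a 10x multiplier
# shrinks the team from 10 engineers to 1 (9 become redundant).
print(engineers_needed(tasks_per_period=10, multiplier=10))   # 1.0

# Elastic demand: cheaper output unlocks 10x more viable work,
# so the same multiplier leaves headcount unchanged at 10.
print(engineers_needed(tasks_per_period=100, multiplier=10))  # 10.0
```

Whether demand is closer to fixed or elastic is exactly the point of contention; not every business can keep finding 10x more work.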
You took me too literally there; that was intended as a thought experiment to explore the limits.
What I was getting at was the question: If we feel intuitively that this extreme isn't realistic, what exactly do we think is missing?
My argument is that what's missing is the human ability to play the game of being human: pursuing goals in a partially adversarial social context.
To your point more specifically: Yes, that 10-person team might be replaceable by a single person.
More likely than not, however, the size of the team was constrained not by a lack of ideas or ambition but by capital and organizational effectiveness.
This is how it has played out with every single technology so far that has increased human productivity: such technologies increase the demand for labor.
Put another way: businesses in every industry will be able to hire software engineering teams so good that, in the past, only the big names could afford them. The kind of team required for the digital transformation of every old-fashioned industry.
In my 10-person team example, what, in your opinion, would the company do with the remaining 9 people once the AI proves its value in that team?
Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored, so the focus of those 9 people will simply shift to some other topic?
Let's say I'm a business owner with a popular product, a backlog of 1000 bugs, and a team of 10 engineers. The engineers are busy juggling new features and bug fixes at the same time. Now assume an AI model relieves 9 of the 10 engineers from working through the bug backlog, and we only need 1 or 2 engineers reviewing the code the model spits out for us.
What concrete type of work is left, at that point, for those 9 engineers?
Assuming the team, as you say, is not constrained by a lack of ideas or ambition, and the feature backlog is effectively indefinite, I think the real question is whether there's a market for those ideas. If there's no market for them, then those engineers create no business value ($$$).
In that case, they become a plain cost, so what is the business incentive to keep them?
> Businesses in every industry will be able to hire software engineering teams so good that, in the past, only the big names could afford them
Not sure I follow this example. Companies will still hire engineers, but IMO at far lower capacity than was required until now. Your N SQL experts are now replaced by the model. Your M Python developers are now replaced by the model. Your PR-reviewing engineer is now replaced by the model. Heck, even your SIMD expert now seems to be replaced by the model too (https://github.com/ggerganov/llama.cpp/pull/11453/files). Those companies will no longer need M + N + ... engineers to create the business value.
> Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored, so the focus of those 9 people will simply shift to some other topic?
Yes, that's what I'm saying, except that this would hold across the economy as a whole rather than within every single business.
Some teams may shrink. Across the industry as a whole, that is unlikely to happen.
The reason I'm confident about this is that this exact discussion has happened many times before, in many different industries, yet the demand for labor across the economy as a whole has only grown.
"This time it's different" because the productivity tech in question is AI? That gets us back to my original point about people confusing AI with an artificial human. We don't have artificial humans, we have tools to make real humans more effective.
Hypothetically, you could be right. I don't know whether "this time will be different", nor am I trying to predict what will happen at the global economic scale; that's out of my reach.
My question is of much narrower scope and much more concrete and tangible, and yet I haven't been able to find a good answer to it, or strong counter-arguments if you will. If I had to guess, my prediction would be that many engineers will need to readjust their skills or even requalify for some other type of work.