I thought the point of the large language model version of AI was that they can understand human communication.
MCP makes it seem like we have given up on making the models good or smart. We are bending over backwards to make the internet easier to interact with for AI than for humans.
If general intelligence is on the horizon, this all seems a colossal waste of time. (Not your resume. I mean the general direction of AI development.)
MCP isn't a replacement for AI intelligence; it is a complement: a pragmatic way to make AI web actions more reliable, efficient, and scalable. Don't assume a zero-sum game between AI intelligence and integration work.
> We are bending over backwards to make the internet easier to interact with for AI than for humans.
I'm detecting an emotional reaction here, which I can understand and sympathize with, but I have a feeling it is distorting a full understanding of MCP's role.
Also, in terms of things to be concerned about with AI, MCP in particular strikes me as much lower down the list. That said, one might view it as part of a general trend of people sacrificing our "humanity" (including privacy and control) for a little bit of convenience -- which I grant is a concerning trend.
it's adapting the world (well, internet) to suit the model rather than the other way around -- to the point where there is a growing amount of content on the internet designed exclusively for machine consumption at the expense of direct human consumption.
it's like self-driving cars -- if we had a dedicated separate road network just for self-driving cars, and required that they all communicate with standard protocols, then we'd have self-driving cars by now -- but that's not actually the goal of FSD. the goal is to have cars that can use existing infrastructure and co-exist with human drivers.
A major distinction here is that it is very cheap to host content on the internet and VERY EXPENSIVE to build things like a separate road network in the real world.
Who is actually hurt if I publish an llms.txt or MCP in addition to my existing content?
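For anyone unfamiliar, "publishing an llms.txt" just means adding one more plain markdown file alongside your existing site -- the proposal is an H1 title, a short blockquote summary, and lists of links to machine-friendly pages. A hypothetical example (the site and paths are made up):

```markdown
# Example Widgets

> Example Widgets sells widgets. This file points LLMs at
> concise, markdown versions of our key pages.

## Docs

- [API reference](https://example.com/docs/api.md): endpoints and auth
- [Pricing](https://example.com/pricing.md): current plans
```

Nothing about the human-facing site changes; the file is purely additive.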
Chat bots require a special API, I suppose, but an intelligent agent would just learn to use the existing ways programs communicate with other programs over a network. Unfortunately the I in LLM stands for intelligence.
I mean MCP is basically like an OpenAPI or GraphQL spec for LLM tool use. There has to be some standard. In fact it's not even for the LLM; MCP really exists so that humans don't have to build bespoke integrations with every service.
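To make the OpenAPI comparison concrete: an MCP server advertises each tool with a name, a description, and a plain JSON Schema for its inputs, much like an OpenAPI operation. A minimal sketch of one tool definition (the `get_weather` tool itself is a hypothetical example; the field names follow the shape of an MCP `tools/list` response):

```python
import json

# One entry from a hypothetical MCP server's tools/list response.
tool_definition = {
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "inputSchema": {  # ordinary JSON Schema, as in OpenAPI request bodies
        "type": "object",
        "properties": {
            "city": {"type": "string"},
        },
        "required": ["city"],
    },
}

print(json.dumps(tool_definition, indent=2))
```

The model never sees your backend; it only sees this schema, which is exactly why one standard shape beats a bespoke integration per service.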