In many countries, people have already won a similar fight over the printing press, press censorship, and encryption. I think there is a reason for optimism (of the will).
If AI can code, and empower individuals to do it on a local device, it is already smart enough to educate the masses on matters of their self-interest, such as freedom and solidarity.
I don't think the powers that be will be able to gatekeep it. There might be some grief, but overall human freedom will prevail.
I doubt AI can educate the masses, simply because the masses would have to prompt it to educate them. Almost no one in my social circle knows about, let alone understands, Google's recent work on pushing web attestation, or any other tech company's power plays enforced on us. They blindly hit "accept all" on every banner that pops up in their online journeys, or use chat apps that blatantly spy on them.
They don't know what they could have, or why the new captcha is funny, so they can never come up with a prompt that leads to them being educated on the matter. They would have to know that they don't know, and since there is no public discourse on such matters in their Facebook timelines, their thinly veiled right-wing digital news outlets, or their Viber and WhatsApp chats, they will never know that they don't know.
I don't think humans coauthoring documents with AI is viable, due to very different costs. It's like combining hand-written assembly with compiler output. I think ideally there will be some delineation of which parts of the document are human-produced and which are AI-produced.
In fact, AI might be the opposite of a managerial "silver bullet". The more we automate what is repetitive, the less predictability remains overall. Things can get more productive on average, but managing them becomes harder, as productivity amplifies risks.
I don't believe they are injective, but if they are, they are not capable of (correct) thought.
The whole point of thinking is to take some input statements and decide whether they are consistent, or to project them onto a close but consistent set of statements. (Kinda like error-correcting codes: you want to be able to detect logical inconsistency, and ideally repair it.)
But that implies the set of consistent statements is a proper subset of all statements, so a map that projects onto it cannot be injective.
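To make that concrete, here is a toy sketch (my own illustration, nothing more): statements as propositional formulas, consistency as joint satisfiability, and "repair" as dropping statements until the rest fit together.

    from itertools import product

    def is_consistent(statements, variables):
        # Brute-force satisfiability: consistent iff some truth
        # assignment makes every statement true at once.
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(stmt(env) for stmt in statements):
                return True
        return False

    def repair(statements, variables):
        # Crude "error correction": drop the most recent statements
        # until the remainder is consistent.
        kept = list(statements)
        while kept and not is_consistent(kept, variables):
            kept.pop()
        return kept

    # p, p -> q, and not-q are pairwise fine but jointly inconsistent.
    stmts = [
        lambda e: e["p"],                  # p
        lambda e: (not e["p"]) or e["q"],  # p -> q
        lambda e: not e["q"],              # not q
    ]
    print(is_consistent(stmts, ["p", "q"]))  # False
    print(len(repair(stmts, ["p", "q"])))    # 2 statements survive

A real repair would search for the minimal set of statements to drop or flip, which is exactly the minimum-distance flavor of error correction.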
I would call it obedience, and it's not the same as friendliness.
The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move, and then conditionally. Obedience is always cooperating.
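A toy simulation of that distinction (standard payoff matrix; the strategy names are mine):

    C, D = "C", "D"
    PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

    def friendly(history):
        # Tit-for-tat: cooperate first, then mirror the opponent's last move.
        return C if not history else history[-1][1]

    def obedient(history):
        # Unconditional cooperation, whatever the opponent does.
        return C

    def defector(history):
        return D

    def play(a, b, rounds=10):
        history, score_a, score_b = [], 0, 0
        for _ in range(rounds):
            # Each player sees the history from their own perspective.
            move_a = a(history)
            move_b = b([(mb, ma) for ma, mb in history])
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pa, score_b + pb
            history.append((move_a, move_b))
        return score_a, score_b

    print(play(friendly, defector))  # (9, 14): punished after round 1
    print(play(obedient, defector))  # (0, 50): exploited forever

The friendly agent takes one hit and then defends itself; the obedient one keeps cooperating no matter how much it is exploited.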
Do you have a standard and a body of work you can point to, in an effort to aid with communicating these thoughts to others? At the very least there should be a reversible projection onto the Big 5 standard.
Lol, you convinced an LLM to agree with you. I use the Big 5 as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things; people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons, so at this point we're just going to have to agree to disagree.
Doesn't surprise me. But I don't think this is caused by friendliness, but by obedience. And I think we want the agents to be obedient. And I am afraid there is a tradeoff: more obedience means more willful ignorance of common-sense ethical constraints.
The bot problem is solvable by using a web-of-trust system. You don't need a digital ID for that (i.e. you don't need to tie your digital-world identity to a real-world identity, nor do you need a central agency to manage these).
In a web of trust, anyone can publicly certify whom they know to be a real person (i.e. validate a link from their id to another id). Then, if you received a message from someone, the system would find a path in the graph of real people you trust, to determine the trustworthiness of the source. So if the account is a bot, there would be no path from it to you in the trust graph.
The advantage is that everyone could supply their own subjective trustworthiness scores, altering the graph. They could even publish them, so that other people could use the trustworthiness assessments of accounts they personally trust.
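A minimal sketch of how the lookup could work, assuming certifications form a weighted directed graph where each edge weight in (0, 1] is a subjective score (all names and numbers below are made up):

    import heapq

    def trust_score(graph, me, source, max_hops=6):
        # Best-trust search (Dijkstra with multiplicative decay):
        # trust shrinks with every certification hop. Returns 0.0 when
        # no path exists, i.e. nobody in your web vouches for the source.
        heap = [(-1.0, 0, me)]  # entries are (-trust, hops, node)
        settled = set()
        while heap:
            neg_trust, hops, node = heapq.heappop(heap)
            if node == source:
                return -neg_trust
            if node in settled or hops == max_hops:
                continue
            settled.add(node)
            for neighbor, weight in graph.get(node, []):
                if neighbor not in settled:
                    heapq.heappush(heap, (neg_trust * weight, hops + 1, neighbor))
        return 0.0

    # Alice fully trusts Bob; Bob mostly trusts Carol; nobody vouches for the bot.
    graph = {
        "alice": [("bob", 1.0)],
        "bob": [("carol", 0.8)],
    }
    print(trust_score(graph, "alice", "carol"))  # 0.8
    print(trust_score(graph, "alice", "bot"))    # 0.0

Since neg_trust * weight stays negative while shrinking in magnitude, the heap always pops the most-trusted frontier node first.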
The big issue with a web-of-trust system is that it is too efficient: it just kills commercial advertising (and also propaganda), because those are all about overcoming the natural web of trust that humans have.
I am also interested in the connection with fuzzy logic: it seems that NNs can reason in a fuzzy way, but what are they doing, formally? For years, people have been trying to formalize fuzzy reasoning, but it looks like we don't care anymore.
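For reference, the classical formalizations put truth values in [0, 1] and choose a t-norm for conjunction. A quick sketch of the three standard continuous t-norms (Gödel, product, Łukasiewicz):

    def and_godel(a, b):
        return min(a, b)

    def and_product(a, b):
        return a * b

    def and_lukasiewicz(a, b):
        return max(0.0, a + b - 1.0)

    a, b = 0.7, 0.6  # two partially true premises
    for name, conj in [("Godel", and_godel), ("product", and_product),
                       ("Lukasiewicz", and_lukasiewicz)]:
        print(name, round(conj(a, b), 2))
    # Godel 0.6, product 0.42, Lukasiewicz 0.3

Whether a trained NN implicitly computes anything like one of these is, as far as I can tell, exactly the open question.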
I feel like NNs (and transformers) are the OOP (object-oriented programming) of ML. Really popular, works pretty well in practice, but nobody understands the fundamentals; there is a feeling it is a made-up new language to express things that were expressible before, but it is hard to pinpoint where exactly it helps.
I thought about a similar concept for fun: each hex digit was replaced by a 4x4 pixel matrix, where the number of lit pixels roughly corresponded to the value. So a dot for 0, two dots for 1, a checkerboard for 8, etc.
Then a byte was represented as a 16x16 matrix, where each 4x4 area held the lower digit's pattern, and these were arranged in the shape of the higher digit.
But at the end of the day, it wasn't really more readable.
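For the curious, the construction was roughly this (a from-memory sketch; only a few of the sixteen digit patterns are filled in, and the patterns themselves are made up):

    # 4x4 patterns per hex digit; the lit-pixel count roughly tracks the value.
    PATTERNS = {
        0x0: ["....", ".#..", "....", "...."],  # one dot for 0
        0x1: ["....", ".#..", "..#.", "...."],  # two dots for 1
        0x8: ["#.#.", ".#.#", "#.#.", ".#.#"],  # checkerboard for 8
        0xF: ["####", "####", "###.", "####"],  # almost full for 15
    }

    def render_byte(value):
        hi, lo = value >> 4, value & 0xF
        hi_pat, lo_pat = PATTERNS[hi], PATTERNS[lo]
        grid = [["." for _ in range(16)] for _ in range(16)]
        for r in range(4):
            for c in range(4):
                if hi_pat[r][c] == "#":  # the high digit decides placement...
                    for rr in range(4):
                        for cc in range(4):  # ...of stamped low-digit patterns
                            grid[4 * r + rr][4 * c + cc] = lo_pat[rr][cc]
        return "\n".join("".join(row) for row in grid)

    print(render_byte(0x81))  # the 1-pattern, arranged as a checkerboard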