There are three main problems with trying to offer a simple answer to the question of "what is the first computer?"
The most obvious of the problems is that a computer isn't a singular technology that springs up de novo, but something that develops from antecedents over a long, messy transition, one that requires a judgement call as to when the proto-computer becomes an actual computer. That judgement call is obviously going to be biased by the other considerations. Consider, for a more contemporary example, what you would argue is the "first smartphone" or the "first LLM." Personally, I think the ENIAC is still somewhat too proto-computer for my tastes: I'd prefer a "first" that uses binary arithmetic and has stored programs, neither of which is true of the ENIAC.
The second major issue is that it's also instructive to look at the candidates' influence on later development. Among the contenders for "first computer," it's unfortunately kinda clear that ENIAC has the most lasting influence. ENIAC's development produced the papers that directly inspired the next generation of machines. Colossus is screwed here because of the secrecy of the code-breaking effort. Meanwhile, Zuse and the Z3 suffer from being on the losing end of WW2. The ABC has a claim here, but it's not clear whether the developers of ENIAC drew influence from it.
The final major issue isn't so much an issue by itself but rather something that colors the interpretation of the first two issues: national pride. An American is far more likely to weigh the influence and ingenuity of the ENIAC and similar machines heavily enough to label one of them the "first computer." A UK person would instead prefer to crown Colossus or the Manchester Baby. A German would prefer the Z3.
In many ways the ENIAC was more like an FPGA than a computer. It was programmed with patch cables connecting the different computational units as well as switches, and had no CPU as such. The cables had to be physically rerouted when changing to a new program, which took weeks. My understanding is that it was eventually programmed to emulate a von Neumann machine around 1948/49. As far as I understand, this was done mainly by Jean Bartik based on von Neumann's ideas.
If this is correct, it was not a von Neumann machine originally, but it eventually became one, and at approximately the same time as the Manchester Baby.
I'm writing my own programming language right now... which is for an intensely narrow use case: I'm building a testbed for comparing floating-point implementations without messy language semantics getting in the way.
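To illustrate the kind of comparison such a testbed makes easy (a hypothetical sketch, not the actual language; the helper name is invented here):

```python
import struct

def as_f32(x):
    """Round a Python float (IEEE double) to the nearest float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# The same sum evaluated in double vs simulated single precision.
a, b = 0.1, 0.2
double_sum = a + b                          # full double precision
single_sum = as_f32(as_f32(a) + as_f32(b))  # every step rounded to float32
print(double_sum, single_sum)
```

Doing this cleanly in a general-purpose language means fighting implicit promotions and literal semantics, which is exactly the noise a dedicated testbed language would avoid.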
There are lots of reasons to write your own programming language, especially if you don't care about it actually displacing existing languages.
> The back half of winter was characterized by blackened, salt-saturated puddles and banks. I wonder if the prevalence of EVs has made things less dirty in the winter.
The dominant cause of that is probably brake and tire particulate matter, not car exhaust. And EVs make tire pollution go up (because they're heavier) and brake pollution... I'm not sure if the weight effect there is counteracted by the decreased amount of friction brake use (as opposed to resistance braking).
On my Polestar 2, I was surprised how in actual use, friction braking was basically zero - to the point where when you start a trip the brakes are used for a few seconds to make sure they're still working (and scrub them a bit.) In actual driving - without trying particularly on my part - it's just always regen.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And then there's the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so it's not like the §230 protections actually matter, as the content isn't illegal in the first place.
Compilers are some of the largest, most complex pieces of software out there. It should be no surprise that they come with bugs as all other large, complex pieces of software do.
If you don't understand the difference between something that rigorously translates one formal language to another one and something that will spit out a completely different piece of software with 0 lines of overlap based on a one word prompt change, I don't know what to tell you.
AIUI, all such results are because the FDA has given up since Aduhelm and said "well, if it clears amyloid, that's as good as slowing Alzheimer's, right?" despite the actual results on Alzheimer's progression being largely negative.
For what it's worth, early statins were originally cleared based only on the evidence that they lower cholesterol without longer term studies showing a reduction in mortality. Of course there is now plenty of evidence showing statins improve overall endpoints.
Similarly, there were other drugs that lowered cholesterol that didn’t show a significant reduction in coronary events. As we later learned, it’s not nearly as simple as “cholesterol bad.”
~~yes by 4 months. If I had AD i wouldn't bother with those treatments.~~ Sorry I missed the context you are right the fact that they slow AD by 4 months is a proof that amyloid plaques are part of the pathogenesis.
Clang does the sensible thing with UB and just returns poison (a form of undefined value) in both cases, which manifests as doing nothing on x86-64 and loading a zero value on i386, because you need to push something on the stack and fldz is one of the cheapest ways to do it. Meanwhile, gcc in both cases compiles the UB variant as if it returned a + a + a + a;
FWIW, going back through older gcc versions, it seems i386 gcc stops implementing 'add the arguments' in version 11.1, although it's not until 15.1 that it has a sensible assembly for 'a + a + a + a'. The x86-64 gcc version is broken in 4.0 (where it stops copying the register arguments to the stack when va_start isn't called, I guess). Then it's adding xmm0 to the top 3 values on the stack until 11.1, when it's adding 'a + a + a + a', although not sensibly until version 15.1.
The term "loot box" has, since I want to say the early 2010s, referred to the mechanic described in the quote. It's hard for me to say what the earliest games were to create this mechanic, especially since its origin seems to lie not in traditional Western games but in East Asian ones.
The model is very strongly associated with the rise of "live service" gaming, with Overwatch and Battlefield being some of the more notorious offenders.
I must say, I do love how this comment has provoked such varying responses.
My own observation about using AI to write code is that it changes my position from that of an author to a reviewer. And I find code review to be a much more exhausting task than writing code in the first place, especially when you have to work out how and why the AI-generated code is structured the way it is.
There's a very wide range of programming tasks of differing difficulty that people are using / trying to use it for, and a very wide range of intelligence amongst the people that are using / trying to use it, and who are evaluating its results. Hence, different people have very different takes.
LLMs can't lie nor can they tell the truth. These concepts just don't apply to them.
They also cannot tell you what they were "thinking" when they wrote a piece of code. If you "ask" them what they were thinking, you just get a plausible response, not the "intention" that may or may not have existed in some abstract form in some layer when the system selected tokens*. That information is gone at that point and the LLM has no means to turn that information into something a human could understand anyways. They simply do not have what in a human might be called metacognition. For now. There's lots of ongoing experimental research in this direction though.
Chances are that when you ask an LLM about their output, you'll get the response of either someone who now recognized an issue with their work, or the likeness of someone who believes they did great work and is now defending it. Obviously this is based on the work itself being fed back through the context window, which will inform the response, and thus it may not be entirely useless, but... this is all very far removed from what a conscious being might explain about their thoughts.
The closest you can currently get to this is reading the "reasoning" tokens, though even those are just some selected system output that is then fed back to inform later output. There's nothing stopping the system from "reasoning" that it should say A, but then outputting B. Example: https://i.imgur.com/e8PX84Z.png
* One might say that the LLM itself always considers every possible token and assigns weights to them, so there wouldn't even be a single chain of thought in the first place. More like... every possible "thought" at the same time at varying intensities.
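That "every possible token at varying intensities" picture can be made concrete with a toy sketch (hypothetical vocabulary and logit values, not any specific model): an LM's final layer produces one score per vocabulary token, and softmax turns those scores into a probability distribution from which a single output token is sampled.

```python
import math

def softmax(logits):
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy three-word vocabulary with made-up logits.
vocab = ["yes", "no", "maybe"]
probs = softmax([2.0, 1.0, 0.1])
print(dict(zip(vocab, [round(p, 3) for p in probs])))
```

Every token gets nonzero weight; sampling collapses that whole distribution into one emitted token, which is why there's no single recoverable "chain of thought" behind it.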
That is fine. You should, and you'll get the best results doing so.
>LLMs can't lie nor can they tell the truth. These concepts just don't apply to them
Nobody really knows exactly what concepts do and don't apply to them. We simply don't have a great enough understanding of the internal procedures of a trained model.
Ultimately this is all irrelevant. There are multiple indications that the same can be said for humanity, that we perform actions and then rationalize them away even without realizing it. That explanations are often if not always post-hoc rationalizations, lies we tell even ourselves. There's evidence for it. And yet, those explanations can still be useful. And I'm sure OP was trying to point out that is also the case for LLMs.
I’m not anthropomorphizing. I’ve been in many situations where the AI wrote some code some way and I had to ask why, it told me why and then we moved on to better solutions as needed. Better if it just wrote the code and its reasoning was still in context, but even if it’s not, it can usually reverse engineer what it wrote well enough. Then it’s a conversation about whether there is a better, clearer way to do it, and the code improves.
It sounds like you either have access to bad models or you are just imagining what it’s like to use an LLM in this way and haven’t actually tried asking it why it wrote something. The only judgement you need to make is the explanation makes sense or not, not some technical or theoretical argument about where the tokens in the explanation come from. You just ask questions until you can easily verify things for yourself.
Also, pretending that the LLM is still just token predicting and isn’t bringing in a lot of extra context via RAG and using extra tokens for thinking to answer a query is just way out there.
You just steamrolled on, pretty much ignoring the comment you are replying to, made unkind assumptions, and put words in my mouth to boot. I don't mind some aggressive argumentation, but this misses the mark so completely that I have really no idea how to have a constructive conversation this way.
> where the AI wrote some code some way and I had to ask why, it told me why
I just explained that it cannot tell you why. It's simply not how they work. You might as well tell me that it cooked you dinner and did your laundry.
> the code improves.
We can agree on this. The iterative process works. The understanding of it is incorrect. If someone's understanding of a hammer superficially is "tool that drives pointy things into wood", they'll inevitably try to hammer a screw at some point - which might even work, badly.
> It sounds like you either have access to bad models or you are just imagining what it’s like to use an LLM in this way
Quoting this is really enough. You may imagine me sighing.
> Also, pretending that the LLM is still just token predicting
Strawman.
Overall your comment is dancing around engaging with what is being said, so I will not waste my time here.
Human code is still easier to review. Also, I program 80% of the time and review PRs 20% of the time. With AI, that becomes: I review 80% of the time, and write markdown and wait 20% of the time.
I'd expect that probably less than 10% of my time is spent actually writing code, and not because of AI, but because enough of it is spent analyzing failures, reading documents, participating in meetings, putting together presentations, answering questions, reading code, etc. And even when I have a nice, uninterrupted coding session, I still spend a decent fraction of that time thinking through the design of the change I want rather than actually writing the code to effect it.