I think part of it is also that we're able to still LARP as full developers of complex systems while vibe coding by seeing an interface that makes us look like l33t h4xx0rs even though we're just pressing continue 15 times
> look like l33t h4xx0rs even though we're just pressing continue 15 times
I feel seen.
I also think there’s a certain element of reacting against absolutely everything becoming a bloated electron app.
I have no doubt - if it hasn’t already happened - that some apps will unironically embrace the most ridiculous option by shipping as electron apps that implement a TUI layer as their front-end.
> I have no doubt - if it hasn’t already happened - that some apps will unironically embrace the most ridiculous option by shipping as electron apps that implement a TUI layer as their front-end.
Considering the insane memory consumption of Claude Code running in my terminal, Electron was never really the problem; bad software was the culprit all along.
The culprit is using web technologies where they don't belong, which Electron is also guilty of. Claude Code is 400k lines of JavaScript for a TUI where a sane implementation in C would be two orders of magnitude less code.
Except most of the TUIs I’m seeing are god awful with horrible input latency because they’ve reimplemented everything from scratch in python or whatever. Multiple hundreds of ms per keystroke: it sucks.
Bad UI has plagued software development since time immemorial. The reason is not AI. Good UI design is a skill (or an art?), not an afterthought. But most people do not see it that way, and that is why things are the way they are.
Companies design UI/UX to prioritise the first 30 minutes of the experience, to keep users around long enough that they stick with it. Not the 8h/day of work the UI will get once a tool becomes a pillar of your workflow.
Sometimes I swear that people are just making up acronyms here to troll people.
I assume you mean "orders of magnitude" and not "out of memory". I have never seen the former used as an acronym before, let alone without some kind of contextual clue. (In typical Baader-Meinhof fashion, I'm sure I'll see it again in the next 24 hours...)
I have just combined the two. I use AstroNvim as my main editor, but when I need an IDE I switch to IDEA. I use IdeaVim with a configuration as close as possible to AstroNvim, so text editing is the same for me in both programs. Modal editors are still great, and they can do a lot of work that looks like magic to AI-era-trained developers.
I’m relatively certain it’s just this at the end of the day. Everything I see people doing in their custom built TUIs or claude/codex CLI can be done, likely even easier, in a simplified IDE or easier to scan UI, but it feels nice/cool/cyberpunk/work-like to look like you’re doing more.
Everyone will have a "reasonable" explanation, though, for why they have to stay in the terminal even when they aren't really coding anymore, and it wouldn't be hard to have a window next to your terminal if you really had to. But live and let live; whatever makes you happy as we all become managers.
I too like a cyberpunk interface, even if it outlasts the need :)
It is much easier to quickly generate a usable TUI for simple monitoring and management than a usable GUI. Go + Lip Gloss + Bubble Tea and a single prompt will give you whatever you need in a minute or two, much faster to compile and with no platform-specific issues. I can't speak for anyone else, but I still do a lot of work in the terminal, and I'd much rather stay in that context than open up yet another window.
> I can't speak for anyone else, but I still do a lot of work in the terminal, and I'd much rather stay in that context than open up yet another window
I do a lot of work in the terminal and that's exactly why I'd rather have other windows to the side so that my terminal can stay exactly focused on what I'm doing there. Those other windows might also be terminals, but I have a big screen, and I want to make use of it to see things all at once. A GUI gives far more flexibility for arranging those multiple views.
I've sat with coworkers taking two to twelve keystrokes to flip between things that I just have side by side in separate IDE windows, browser windows, or tabs... or can switch between with a single click instead of those keystrokes.
Window managers are more flexible than multiplexers, but I also think there's a higher floor of effort juggling multiple separate GUI programs than going between tabs and panes in a terminal emulator.
Multi-monitor terminal juggling also probably loses out to GUIs, though for me it's usually IDE or Browser on one and multiplexer on the other. One big zellij session connected to multiple terminal emulators is probably the best way I could think to handle that.
> a higher floor of effort juggling multiple separate GUI programs than going between tabs and panes in a terminal emulator.
Depends very much on your window manager. Tiling window managers such as Hyprland let you open multiple windows and it will automatically arrange them side-by-side. Want one of them to be 60% and the other 40%? No problem, there's a keyboard shortcut (configurable) for that. Have four windows open in a grid arrangement and want to switch between them? Just slide the mouse, no clicking needed so the movement can be as rough and imprecise as you want, OR if you don't want to take your hands off the keyboard then SUPER+arrow keys (also configurable) will move the focus to the next window in that direction. (And if you are in focus-follows-mouse mode then it also moves your mouse cursor to be in the middle of the focused window, so you won't lose window focus by accidentally bumping your mouse and moving it one pixel). Keyboard shortcuts for maximizing and un-maximizing windows, for throwing them onto other workspaces and switching between workspaces...
I throw windows around my screen all the time, and rarely take my hands off the keyboard to do it. It's the fastest, most flow-like window manager experience I've found yet.
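To make that concrete, the shortcuts described above might look roughly like this in a hyprland.conf. This is only a sketch: the dispatchers (movefocus, resizeactive, splitratio, movetoworkspace, fullscreen) are real Hyprland dispatchers, but the specific keys are whatever you choose to bind.

```ini
# hyprland.conf sketch: keyboard-driven window management
# (dispatcher names are real; the key choices here are just examples)

# move focus between tiled windows with SUPER+arrows
bind = SUPER, left,  movefocus, l
bind = SUPER, right, movefocus, r
bind = SUPER, up,    movefocus, u
bind = SUPER, down,  movefocus, d

# nudge the split toward e.g. a 60/40 ratio (dwindle layout)
bind = SUPER, minus, splitratio, -0.1
bind = SUPER, equal, splitratio, 0.1

# resize the focused window, repeating while the key is held (binde)
binde = SUPER SHIFT, right, resizeactive, 40 0
binde = SUPER SHIFT, left,  resizeactive, -40 0

# throw windows onto other workspaces and switch between them
bind = SUPER, 1, workspace, 1
bind = SUPER, 2, workspace, 2
bind = SUPER SHIFT, 1, movetoworkspace, 1
bind = SUPER SHIFT, 2, movetoworkspace, 2

# maximize / un-maximize the focused window
bind = SUPER, F, fullscreen, 1

# focus follows mouse, so sliding the pointer is enough to switch focus
input {
    follow_mouse = 1
}
```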
How does the browser interact with the OS? A TUI executable has direct access; with "only HTML" we now need a server of some kind. How are multiple layers and running processes superior to a thin terminal-based wrapper around the relevant I/O?
That said, it obviously depends on the use case. I'm not going to make a TUI to interact with locations on a map; a web app makes a lot of sense there. But something like lazydocker makes more sense as a light terminal-based program.
TUIs already increased in popularity before agents became a thing. The low latency, the ease of remoting and the limited screen real estate which forces the developer to carefully design the interface are genuine advantages. I've been using mutt, vim, tig, tmux, newsboat, etc for over a decade at this point, and the cyberpunk feeling faded quickly.
No, it can never be the same. The terminal is about not having to switch away from the keyboard. My entire workflow is tmux panes with different TUIs and terminals. Not to mention performance: with a neovim IDE you may have tens of instances open in different panes, for example. I wouldn't try that with VSCode.
Projects like opencode are making the distinction between GUI and TUI almost meaningless. And that "only" downside is a massive, deal-breaking one. At this point I only have a browser besides the terminal, and I can see that going away soon for the most part thanks to LLMs.
But GUIs are hard to build, mainly because of the tech debt across all three major platforms. And nonetheless, displaying graphics is harder than outputting control characters.
You could whip up a decently usable UI in Delphi far quicker than a similar one in any TUI framework.
The problem is that the world moved away from that and into the HTML/CSS/JS/DOM mess, which makes simple UI things hard and complex UI things slow and/or hard, on top of the bloat.
VB6 could have you roll a GUI interface in minutes, so even trivial tasks could have a GUI.
The tools for CDE on Unices were arguably even better but CDE never really got any momentum.
That it’s tough to put together a GUI now is definitely a regression and Microsoft shooting themselves in the feet regularly over the last 25 years is squarely to blame.
It's an aspect I've wondered about; constraints do make you consider what's essential. For example, in btop (screenshot in the article) the graphs are rendered with dots at low resolution. If there were another version where those graphs were full resolution, would it tell you meaningfully more?
Since the dots in btop's rendering are using the Braille characters, meaning you get six dots in the space that would be taken up by one alphanumeric character, the resolution on those dots is surprisingly high. A maximized terminal on my screen is size 316x86, so that's 316×2 x 86×3 = 632x258 of "Braille dot resolution" (a term I just made up) available for the graphs. Sure, that's lower than the 2560x1600 pixel resolution of my screen, but you're entirely right to ask "Does that really matter?" The graph would be smoother with about 4x more horizontal pixels and 6x more vertical pixels to work with, but I doubt I would glean any more information at first glance.
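For anyone curious how that packing works, here's a minimal sketch of Braille-cell graph rendering in Python. It uses the full 8-dot (2x4) Unicode Braille Patterns block rather than the classic six dots the arithmetic above assumes, and it is not btop's actual renderer, just an illustration of the idea.

```python
# Map a 4-row x 2-column dot grid onto one Unicode Braille character.
# Dot-to-bit layout per the Unicode Braille Patterns block (U+2800..U+28FF).
BIT = {
    (0, 0): 0x01, (1, 0): 0x02, (2, 0): 0x04, (3, 0): 0x40,  # left column
    (0, 1): 0x08, (1, 1): 0x10, (2, 1): 0x20, (3, 1): 0x80,  # right column
}

def braille_cell(grid):
    """grid: 4 rows x 2 cols of truthy values -> one Braille char."""
    code = 0x2800
    for r in range(4):
        for c in range(2):
            if grid[r][c]:
                code |= BIT[(r, c)]
    return chr(code)

def braille_graph(samples, height=4):
    """Render samples in 0.0..1.0 as a one-row Braille bar graph,
    packing two samples into each character cell."""
    out = []
    for i in range(0, len(samples), 2):
        pair = samples[i:i + 2]
        grid = [[False, False] for _ in range(height)]
        for c, v in enumerate(pair):
            filled = round(v * height)
            for r in range(height - filled, height):
                grid[r][c] = True
        out.append(braille_cell(grid))
    return "".join(out)

print(braille_graph([0.25, 0.5, 0.75, 1.0]))  # four samples -> two cells
```

So each terminal cell carries a 2x4 patch of "pixels", which is where the doubled horizontal and multiplied vertical resolution in the arithmetic above comes from.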
I’ve been running Claude with --dangerously-skip-permissions. It’s so nice that I’m not sure I can go back. Pressing continue 15 times is surprisingly heavy, but you don’t notice till you don’t have to do it anymore.
Try an external sandboxing tool. When you need to adjust the sandbox, close the agent, launch it with the new params, and resume the session. It doesn't take long to arrive at a stable configuration; for me it's mostly about rw access to the CWD, read access to other local repos, and access to Nix. Other than that I can just use YOLO modes and not sweat it.
I briefly evaluated a bunch (had an LLM make a list of those that satisfied some basic criteria, then visited READMEs and websites) and chose nono. No regrets: https://nono.sh/
Hey, thanks for the tip! I'll also give those a try.
Even if I end up liking virt-free tools like nono for agents, I've lately been trying to explore and learn about microVM options for other development purposes as well. This is a serendipitous recommendation for me. :D
I thought that was the case for me, but then I tried using Claude Code through the desktop app last week and it was so bad. Slow, glitchy... I went back to the TUI in no time.
Having worked in development since the early 2000s, I think it's great that development has become more accessible, and I don't particularly like that the old guard tries to gate-keep the idea of "being a developer". Being an engineer, I feel, requires more credentials; it always has. But if you feel like you're a developer, all the more power to you!
I mean, I guess there's that novelty for the first few years of your career. I've been doing this a decade. I don't care about looking and feeling like a l33t h4xx0r and I doubt my peers do either.
TUIs just solve the right problems in the same world we're already working in - the terminal. That they're fast to launch and terminals have modern features like rich color and mouse support just adds to that.
That's a hard one. SO's hostile community to newbies, like any expert community, comes from the longstanding users having seen the basic questions thousands of times and understandably not wanting to answer variations of them over and over, while for the newbies those questions are genuinely new, and they don't yet have the routine knowledge of where to look, or how to even look for solutions in the first place.
In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well though, so I don't know where that leaves us.
SO for discussions of taste? "I have these two options to build this; how should I approach it?"
They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that is:
User asks question - LLM answers it - user is unsure about the answer - it gets posted as a SO thread and the rest of the userbase can nitpick or correct the LLM response.
Edit: I also seem to remember they had a job portal in the sidebar for a while, what happened to that? Seems like a reasonable revenue stream that is also useful to users.
> In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO for the harder questions that are still general enough to be applicable to others.
I think the deeper question is how SO would get paid for that.
Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)
Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.
The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but they are funded by donations, which are driven by ads. If LLMs just answer the questions based on Wikipedia data, the user won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they may not even develop a fondness for Wikipedia that's necessary to get users excited to donate.
This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.
Oh, I was thinking more of user enters question into SO -> LLM answer on SO -> user evaluates whether LLM answer was sufficient (or system itself judges whether answer is also interesting to other users?) -> question + answer combo made public, judged by other users.
There are of course several huge issues with this, but that's why I prefaced it with "ideal world" hahaha
the biggest of which is why most users would want their questions publicized if a ChatGPT answer outside the Stack Overflow platform would be enough, or even better
Or how existing users and question-answering volunteers would feel about just being cleanup crew and training data for LLMs
I used a system prompt similar to this, where I just dumped the entirety of https://grugbrain.dev/ into it and prefaced it with the assistant having to emulate grug.
Didn't find it particularly useful, but it is funny!
I actually feel like these integrations are fine, as long as they are opt-in or can easily be opted out of permanently. For now, I don't see the harm in adding another default search engine; it's much less obtrusive than the home-page sponsored links. And if it gets them a little more independent from Google by siphoning Perplexity's seemingly infinite VC investment money, so be it.
I wonder if the rigidity could be improved while staying modular, maybe just use many more screws? I don't mind undoing more than 5 screws for the bottom to come off, make it 20 and it's still totally fine.
IIRC from one of their videos, they mentioned that they deliberately use cast aluminium instead of CNC-machined like the MacBook. If they deliberately sacrifice build quality for sustainability, I don't see how they could compete with Apple.
What is the implementation difference between using the system WebView (fragmented, and especially bad under Linux) and using one shared Tauri-based runtime that only gets breaking-change updates every two years or so, so there aren't twenty different ones running at the same time and it ends up like Electron?
Would bundling one extended-support release of Chromium's or Firefox's backend, shared between all Tauri apps, not suffice?