Hacker News | poorman's comments

Same. I installed Forgejo two months ago when GitHub wouldn't let me create agent accounts. It's been awesome. Any time I want a new feature I open my agent on the server and tell it to add the feature to Forgejo. It took all of 15 minutes for it to add a working Show/Hide "Viewed" files toggle on PR reviews.

You mean you upstream those changes or are you running your own fork?

I am going to say this for all the people thinking like this. This attitude will get you nowhere in life. It historically never has and in the future it never will.

> When Nick is not at a computer, you can find him out racing sailboats and involved with various entrepreneurial ventures.

Says the man who is going to be the feudal lord in GP's scenario…


The man who thinks he's going to be the feudal lord in GGP's scenario.

In GGP's scenario, there isn't much room at the top. :)


Nah, that's petty nobility that will get wiped out for taking the wrong side in the first dick-measuring war, lmao.

That's right -- the best way to succeed within a system is to hustle as hard as you can, and definitely don't stop to question the system itself.

Better than those who just want to burn the system down with no real plan for what comes next, and unable to comprehend the inevitable bloodshed of the 'glorious revolution' that they crave.

You think you are describing the Bolsheviks, but your description is equally fitting for those who want to abolish human labor without providing people alternative ways to make a living.

And no, hand waving about "UBI" doesn't count unless they start actually doing the politics required to implement UBI.


There's a lot of bloodshed going on under the status quo. Why do you think people are 'unable to comprehend' it? Maybe they just want to reallocate it and aren't especially sympathetic to those who have avoided it up to now.

Do you comprehend the scale of the inevitable bloodshed that maintaining the status quo is bound to lead to? You don't do so any better than those you're chastising.

Most of them fried their brains with stimulants long ago. Thankfully for them, they no longer have to think. An LLM does it for them.

But it’s just the same idiots who were rabidly cheering the latest JavaScript framework a decade ago, NFTs, and all manner of ridiculous things that anyone with two working brain cells saw straight through.


Not sure if you're being sarcastic or not, but I think this is actually good advice. It's great to be a free-thinker and question things, but I do think there is some (monetary) value in just not asking too many questions, but optimizing to be the best at whatever you're doing.

Edit: to give an example, I probably would have done better in school had I spent less time questioning the education system and more time just accepting it and trying to get good grades.


Yeah, succeed in the system, fuck everybody else. If the system is making the world a worse place, all the better, you can take advantage since you’re in the system. All that until you find yourself spat out by the system and get to experience what you’ve been part of with no recourse.

Your interpretation of the comment in this way says more about you than anything else. Because that's not what I or the parent comment said.

and your conclusion on this situation says a lot about your current state of economic privilege and/or ignorance

The trick is to compartmentalize

Historical data is never a guarantee of future performance. The downside of your attitude is that you can’t really point at the right thing to do, so then you invest your time and effort on the same things, when it could be the case that the rules of the game have changed.

I don’t see the comment as necessarily defeatist. If anything, it’s an invitation to rethink what might work instead, and whether there are things worth lobbying for/against beyond what can be solved at the individual level.


> I am going to say this for all the people thinking like this. This attitude will get you nowhere in life. It historically never has and in the future it never will.

I'm picturing a 12th century French feudal lord saying these words to some of his serfs complaining about a lack of firewood.


You can use AI because you know it's good for you, but still think it won't make all people better off.

I am going to say this for all the people thinking like this : lol

Hmmm ... there is definitely historical precedent for the article's assertions.

There is also precedent for what happens when such a big wealth imbalance is present (spoiler: it's a revolution).

This article is methodical in its points.

Your retort reads like an easily dismissed hot take.


And your retort (and this report) are doom and gloom. Humans are remarkably good at adapting and have adapted through far worse conditions than economic systems. The negative take is easy and very popular today, but positivity is just as possible. It’s all about how you read the data, and there’s a lot of room for interpretation. If you’ve fallen for the doom, that’s on you, but calling something with so much historical precedent as hope for humanity ‘an easily dismissed hot take’ doesn’t make you look very bright.

> And your retort (and this report) are doom and gloom.

Dinosaurs are remarkably good at adapting and have adapted through far worse conditions than _______; hell, they were around 99 million years longer than humans.

Species go extinct all the time, and most species go through all kinds of things before then, so there is nearly zero correlation between surviving something bad in the past and surviving something else bad in the future.

Modern humanity is not anti-fragile any longer like we were in the past.


The article's argument is fine, but it takes as an axiom that AI is already better at much cognitive work. I haven't found that to be true in the tasks I've looked at.

It's certainly cheaper and faster, so there's potential for it to unlock more demand but I'm sceptical that current models will replace a large fraction of knowledge work.


I was just thinking about this project the other day. Seems we have a whole lot of unused compute (and now GPU). I wish someone would create a meaningful project like this to distribute AI training or something. Imagine underfunded AI researchers being able to distribute work to idle machines like SETI@home did.


Asked Gemini about that: "are there efforts to train big LLMs in a distributed fashion à la SETI@home?"

The answer was really interesting:

- https://github.com/PrimeIntellect-ai/prime
- https://www.together.ai/


All SVGs should be properly sanitized going into a backend and out of it and when rendered on a page.

Do you allow SVGs to be uploaded anywhere on your site? This is a PSA that you're probably at risk unless you can find the few hundred lines of code doing the sanitization.

Note to Ruby on Rails developers: your Active Storage uploaded SVGs are not sanitized by default.
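To make the risk concrete, here is a deliberately minimal sketch (Python for illustration) of the kind of stripping a sanitizer has to do: removing `<script>` elements and `on*` event-handler attributes. This is nowhere near a complete sanitizer — it ignores `javascript:` hrefs, `<foreignObject>`, CSS tricks, and more, which is exactly why real libraries run to hundreds of lines:

```python
import xml.etree.ElementTree as ET

def strip_scripts(svg_text: str) -> str:
    """Toy SVG scrubber: drops <script> elements and on* attributes.

    NOT a complete sanitizer -- e.g. javascript: hrefs, <foreignObject>,
    and CSS-based vectors are untouched.
    """
    root = ET.fromstring(svg_text)
    # Remove <script> elements; namespace-qualified tags look like "{uri}script"
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag.split('}')[-1] == 'script':
                parent.remove(child)
    # Drop on* event-handler attributes (onload, onclick, ...)
    for el in root.iter():
        for attr in [a for a in el.attrib if a.split('}')[-1].lower().startswith('on')]:
            del el.attrib[attr]
    return ET.tostring(root, encoding='unicode')
```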


Is there SVG sanitization code which has been formally proven correct and itself free of security vulnerabilities?


It would be better if they were sanitized by design and could not contain scripts and CSS. For interactive pictures, one could simply use HTML with inline SVG and scripts.


GitLab has some code in their repo if you want to see how to do it.


This is what they actually use: https://github.com/flavorjones/loofah


Sanitisation is a tricky process; it can be real easy for something to slip through the cracks.


Yes. Much better to handle all untrusted data safely rather than try to transform untrusted data into trusted data.

I found this page a helpful summary of ways to prevent SVG XSS: https://digi.ninja/blog/svg_xss.php

Notably, the sanitization option is risky because one sanitizer's definition of "safe" might not actually be "safe" for all clients and usages.

Plus as soon as you start sanitizing data entered by users, you risk accidentally sanitizing out legitimate customer data (Say you are making a DropBox-like fileshare and a customer's workflow relies on embedding scripts in an SVG file to e.g. make interactive self-contained graphics. Maybe not a great idea, but that is for the customer to decide, and a sanitization script would lose user data. Consider for example that GitHub does not sanitize JavaScript out of HTML files in git repositories.)
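The "handle untrusted data safely" route mostly comes down to response headers: serve uploads from a separate origin, force download, and forbid script execution via CSP. A minimal sketch — the header names are standard, but the exact policy values here are illustrative, not a vetted production policy:

```python
def untrusted_svg_headers(filename: str) -> dict:
    """Headers for serving a user-uploaded SVG without sanitizing it.

    The CSP sandbox stops scripts from running, the attachment
    disposition stops inline rendering on this origin, and nosniff
    stops the browser from reinterpreting the content type.
    """
    return {
        "Content-Type": "image/svg+xml",
        # Force download instead of inline rendering on this origin
        "Content-Disposition": f'attachment; filename="{filename}"',
        # Even if rendered, forbid scripts and plugins entirely
        "Content-Security-Policy": "default-src 'none'; style-src 'unsafe-inline'; sandbox",
        # Don't let browsers sniff the type into something executable
        "X-Content-Type-Options": "nosniff",
    }
```

This is roughly what the GitHub example in the parent comment relies on: raw files are served from a sandboxed domain with scripts disabled, so user data never needs to be rewritten.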


Yeah I’ve worked on a few pieces of software now that tried SVG sanitizing on uploads, got hacked, and banned the uploads.


I guess it is a matter of parsing SVG. Trying to hack around it with regex is asking for trouble indeed.


just run them through `svgo` and get the benefits of smaller filesizes as well


svgo is a minifier, not a sanitizer.


I should have clarified `svgo + removeScripts`

https://svgo.dev/docs/plugins/removeScripts/


This is huge. A lot of these are the underpinnings of modern computer science optimizations. The ACM programming competitions in college are some of my fondest memories!


> A lot of these are the underpinnings of modern computer science optimizations.

Note that older articles have already been open access for a while now:

> April 7, 2022

> ACM has opened the articles published during the first 50 years of its publishing program. These articles, published between 1951 and the end of 2000, are now open and freely available to view and download via the ACM Digital Library.

- https://www.acm.org/articles/bulletins/2022/april/50-years-b...


In a concurrent environment, I wonder if the overhead of wrapping every API call with a synchronized would make this significantly slower than using ConcurrentHashMap.


Thanks. This is actually one of the topics I really want to tackle next.

If we just wrap every API call with synchronized, I'd expect heavy contention (some adaptive spinning and then OS-level park/unpark), so it'll likely bottleneck pretty quickly.

Doing something closer to ConcurrentHashMap (locking per bin rather than globally) could mitigate that.

For the open-addressing table itself, I'm also considering adding lightweight locking at the group level (e.g., a small spinlock per group) so reads stay cheap and writes only lock a narrow region along the probe path.
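The striping idea above can be sketched in a few lines (a toy illustration in Python rather than Java, with hypothetical names — real implementations like ConcurrentHashMap add lock-free reads, resizing, and much more): each key hashes to one of N stripes, and operations lock only that stripe, so threads touching different stripes never contend.

```python
import threading

class StripedMap:
    """Toy striped-lock map: one lock per stripe instead of one global lock."""

    def __init__(self, stripes: int = 16):
        self._stripes = stripes
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._bins = [dict() for _ in range(stripes)]

    def _stripe(self, key) -> int:
        return hash(key) % self._stripes

    def put(self, key, value) -> None:
        i = self._stripe(key)
        with self._locks[i]:  # lock only this key's stripe
            self._bins[i][key] = value

    def get(self, key, default=None):
        i = self._stripe(key)
        with self._locks[i]:
            return self._bins[i].get(key, default)
```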


I think that's a great idea! I just checked one of my larger projects and it's 55% ConcurrentHashMap and 45% HashMap, so I'd personally benefit from this plan.


RL is still widely used in the advertising industry. Don't let anyone tell you otherwise. When you have millions to billions of visits and you are trying to optimize an outcome RL is very good at that. Add in context with contextual multi-armed bandits and you have something very good at driving people towards purchasing.
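For a feel of the mechanics, here is a toy epsilon-greedy contextual bandit — a deliberate simplification, not how production ad systems work (those typically use Thompson sampling or LinUCB over rich feature vectors, and all names here are hypothetical): track an average reward per (context, arm) pair, usually pick the best-known arm, occasionally explore.

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Toy contextual bandit: per-(context, arm) mean reward, epsilon exploration."""

    def __init__(self, arms, epsilon: float = 0.1):
        self.arms = arms
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, arm) -> number of pulls
        self.values = defaultdict(float)  # (context, arm) -> mean observed reward

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.arms)  # explore
        # Exploit: arm with the highest estimated reward for this context
        return max(self.arms, key=lambda a: self.values[(context, a)])

    def update(self, context, arm, reward: float) -> None:
        key = (context, arm)
        self.counts[key] += 1
        # Incremental mean update: new_mean = old_mean + (x - old_mean) / n
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

In an ad-serving loop, `choose` runs inside the request/response path (which is how sub-10ms targeting is feasible) and `update` is fed back from click/purchase events.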


It's likely that yes, you will end up with an alias that links you, because of a cookie somewhere, a fingerprint of the elliptic curve when you do an SSL handshake, or any number of other ways.

The ironic thing is that because of GDPR and CCPA, ad tech companies got really good at "anonymizing" your data. So even if you were to somehow not have an alias linking your various anonymous profiles, you will still end up quickly bucketed into a persona (and multiple audiences) that resembles you quite well. And it's not multiple days of data we're talking about (although it could be); it's minutes, and in the case of contextual multi-armed bandits, your persona is likely updated within a single page load and you are targeted in ~5ms within the request/response lifecycle of that page load.

The good news is that most data platforms don't keep data around for more than 90 days because then they are automatically compliant with "right to be forgotten" without having to service requests for removal of personal data.


There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which tends to create a lot of noise. Then you have to sift through a ton of noise to once in a while get a serious report. So the platforms up-sell you on using their people to sift through the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.

If you want the top-tier "hackers" on the platforms to see your bug bounty program, then you have to pay the up-charge for that too, so again a misalignment of incentives.

The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.

Lastly, I know it's difficult to manage but open source projects should also have a private vulnerability reporting mechanism set up. If you are using Github you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...


The useless reports are because there are a lot of useless people


One way to correct this misalignment is to give the bounty platform a cut of the bounty. This is how Immunefi works, and I've so far not heard anyone unhappy with communicating with them (though, I of course will not be at all shocked or surprised if a billion people reply to me saying I simply haven't talked to the right people and in fact everyone hates them ;P).


AI generated bounty report spam is a huge problem now.


The best thing you can do is include an exploit when possible, so the report can be validated automatically and cut through the noise.


Totally agree. I was just thinking that I wouldn't want this feature for Claude Code, but for Codex right now it would be great! I can simply let tasks run in Codex and I know it's going to eventually do what I want. Whereas with Claude Code I feel like I have to watch it like a hawk and interrupt it when it goes off the rails.

