Hacker News | Tuna-Fish's comments

The site makes it very clear that the purpose is very explicitly not to "get away with it", it's to try and get fined, presumably to then challenge the legality of the laws in a higher court.

It's more complicated than that. For an asset to be derived work from an original, it is not necessary for it to contain anything from the original. If you start from copyrighted assets, and meticulously replace them all with your own art piece by piece, while following the style and constraints of the originals, and while looking at the originals, I'd bet that a court would find your work to be derived from the originals and therefore under their copyright.

A lot of the fan-driven reimplementations of classic games are trivially derived works, because people seem to think that the copyright only covers the pixels in the originals and if you replace them you're fine.


FreeDoom does that with Doom: it has compatible assets that are not in the same style, although they are done in such a smart way that most PWADs and TCs are totally playable without clashes, from Requiem to Back To Saturn.

On game engines, reimplementations are not derivations at all but tools for interoperability, totally legal to create. From Wine to most of the projects on https://osgameclones.com, to GNUstep against the NeXT/OpenStep API (and Cocoa from early OS X), and so on.

If you could sell Cedega back in the day, you can totally sell OpenTTD with free assets, period.

The entire PC industry exists today because of cheap IBM BIOS clones from Taiwan.


Yes, the engines are fine. And if the assets are free, then it is fine to sell the engine with them.

What I'm contending is whether the assets are actually free. Just because they were all created by volunteers and contain no data from the originals doesn't mean that they are actually free. The rules around derived works are complicated, and too-close homages have been found to be derived works even when there was no actual copying.

If this were to go to court, the things that would matter would include both "how visually similar do they look" (the answer is "very") and "was the artist aware of, and did they refer to, the originals while doing their work" (given it was done by volunteers who are enthusiasts of the original game, the answers are almost certainly "yes" and "they can't prove they didn't").

And on those facts, the new art is a derived work of the original and falls under its copyright.


Ahem, no. Not the case there. The artwork under OpenTTD falls under fair reimplementation, for cohesiveness with the current extensions and modules. Ditto with FreeDoom and Doom: it is not inspired by Doom but art-compatible with it, so your Strain, Requiem, Back to Saturn and similar PWADs run the same without texture or styling clashes.

Artistically speaking, FreeDoom is closer to Half-Life and the like than to Doom, but here's the catch: playing Strain, for instance, won't look like a mess, just different, a bit like a demade Half-Life (or a game from its era on the Unreal engine), but not a copy.


> The artwork under OpenTTD falls under fair reimplementation for cohesiveness with the current extensions and modules.

I sincerely doubt that. Unlike the FreeDoom assets, it is too visually similar to the originals, and visual cohesiveness with existing materials (which were created to fit the style of the originals) is a point in favor of it being a derived work, not against.


> The entire PC industry exists today because of cheap IBM BIOS clones from Taiwan.

Forgot to reply to this part: the reason those clones exist is that multiple companies reimplemented the BIOS in a clean-room way. They had one team produce a clean spec of all the interfaces, and then sequestered a different team, who could attest in court that they had never worked with, seen, or in any other way come into contact with materials related to the IBM PC, to produce a replacement BIOS based only on the given spec. The clone makers that didn't go to all this effort were sued out of existence.

Do you believe that the free assets produced for games generally meet this standard?


The lender generally has a positive EV, but variability is high. The interest rates on leveraged buyouts are high, and the lender has priority over everything but taxes. If the company can stay afloat for a while, the lender probably got made whole and then some, even if the full loan never got paid back.
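A toy expected-value calculation, with entirely made-up numbers (the comment gives no figures), shows how the interest stream plus seniority can leave the lender ahead even when the principal is not always repaid in full:

```python
# Hypothetical LBO loan: every rate, probability and recovery below is
# an illustrative assumption, not a figure from the comment.
principal = 100.0
annual_rate = 0.10      # assumed high-yield coupon
years_afloat = 4        # years of interest collected before any default
p_default = 0.30        # assumed probability the loan sours
recovery = 0.60         # senior lender's assumed recovery on principal

interest = principal * annual_rate * years_afloat
expected_repayment = principal * ((1 - p_default) + p_default * recovery)
expected_profit = interest + expected_repayment - principal
print(expected_profit)  # ≈ 28: positive EV despite partial losses
```

This ignores discounting and fees; the point is only that seniority plus a high coupon can outweigh a meaningful default probability.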

No, it is not. It is in fact no arithmetic at all, if you understand how SI works.

Is it 1mm/sec?

no, what? µ is the dimensionless number 10^-6, just like k is the dimensionless number 10^3.

And you are doing what with that dimensionless number? Multiplying?
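To make the prefix-arithmetic point concrete (a sketch; the thread never shows the original figure, so 1 µm/ms is an assumed example):

```python
# SI prefixes are bare powers of ten, so "converting" a prefixed unit
# is just adding and subtracting exponents, not unit arithmetic.
micro = -6   # µ = 10^-6
milli = -3   # m (the prefix) = 10^-3

exponent = micro - milli   # µm divided by ms: 10^-6 / 10^-3 = 10^-3
print(exponent)            # -3, which is the "milli" prefix
# So 1 µm/ms is exactly 1 mm/s: the only operation is on exponents.
```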

DVD-RAM drives and media were always premium products, with the drives at least ~4x more expensive than the -R drives of the time, and the price premium on the media was much worse than that.

When -R discs bought in bulk cost ~20c each, $10 discs are a hard sell.


On modern Apple devices, the HW indicator light is wired directly between the power rail of the camera module and ground. Turning the camera on via software energizes the power rail. The only way the camera can be on with the LED off is if the LED has burned out.

This is a "nothing-up-my-sleeves" implementation, it's not really possible to hide anything weird in the complexity. Apple clearly didn't just want a light that's always on when the camera is on, they wanted an implementation where they can point to it and clearly prove that the light is always on if the camera is on.


The project is an inference framework which should support a 100B-parameter model at 5-7 tok/s on CPU. No one has quantized a 100B parameter model to 1 trit, but the existence of this framework is an incentive for someone to do so.

> quantized a 100B parameter model to 1 trit

I had the same question. After some back-and-forth with ChatGPT: it's not the post-training quantization we often see these days; you have to use 1 trit per weight from the beginning, starting at pre-training.
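For the curious, a minimal sketch of what ternary ("1-trit") weight quantization can look like, modeled on the absmean scheme popularized by BitNet b1.58; the framework in question may use a different method, and, as noted above, real ternary models are trained under this constraint from the start:

```python
def ternarize(weights):
    """Map float weights into {-1, 0, +1} plus one float scale
    (absmean-style; illustrative, not the project's actual code)."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

q, s = ternarize([0.9, -0.05, -1.3, 0.4])
print(q)  # [1, 0, -1, 1]: each weight holds log2(3) ≈ 1.58 bits
```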


You missed US states competing on setting up age verification legislation that lets anyone sue any developer who produces systems that don't do age verification for life-destroying amounts of money.

So, Linus? Patrick Volkerding? I mean, I can build a Linux system from basically nothing.

Hey man, I thought we Europeans were the kings of dreadful regulation!

Eh private prosecutions and third party standing are generally disfavored to such an extent that sure, attention-whoring legislators will propose it, but whether it even passes constitutional muster on the state level is an open question, and open in every state.

The standing is provided by your child seeing naughty things on the internet.

+50% top speed over the V280. Bell offered it as an alternative to the V280 in the early stage of the contract, but it was judged too experimental (and probably too expensive). Apparently DARPA is funding further development of the concept.

Japanese, Chinese, Korean and Indic scripts are mostly 2 bytes per character in UTF-16 and mostly 3 bytes per character in UTF-8.

Really, as an East Asian language user the rest of the comments here make me want to scream.

I am not sure if you mean me, as I just asked a question. I wonder what the best way is to handle this disparity for international software. It seems like either you punish the Latin alphabets, or the others.

> I wonder what the best way is to handle this disparity for international software. It seems like either you punish the Latin alphabets, or the others.

There are over a million code points in Unicode: thousands for Latin and other scripts, language-agnostic symbols, emoji, etc. UTF-8 is designed to be backwards compatible with ASCII, not to efficiently encode all of Unicode. UTF-16 is the reasonably efficient compromise for native Unicode applications, hence it being the internal format of strings in C# and SQL Server and such.

The folks bleating about UTF-8 being the best choice make the same mistake as the "UTF-8 Everywhere manifesto" guys: stats skewed by a web/American-centric bias. Sure, UTF-8 is more efficient when your text is 99% markup and generally devoid of non-Latin scripts, but that's not my database, and probably not most people's.
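The size claim on both sides is easy to measure. A quick sketch (byte counts obviously depend on your actual corpus; the sample strings below are invented):

```python
# Dense CJK text is smaller in UTF-16; the same script wrapped in ASCII
# markup flips the result. utf-16-le is used so no BOM is counted.
dense = "東京は日本の首都です"          # 10 CJK characters, no markup
marked_up = '<p lang="ja">東京</p>'     # mostly ASCII markup

for label, text in (("dense", dense), ("marked_up", marked_up)):
    print(label,
          len(text.encode("utf-8")),      # dense: 30, marked_up: 23
          len(text.encode("utf-16-le")))  # dense: 20, marked_up: 38
```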


  > sure utf-8 is more efficient when your text is 99% markup and generally devoid of non-latin scripts, that's not my database and probably not most peoples
I think this website's audience begs to differ. But if you develop for South Asia, I can see the pendulum swinging to UTF-16. Even then, you have to account for this:

  «UTF-16 is often claimed to be more space-efficient than UTF-8 for East Asian languages, since it uses two bytes for characters that take 3 bytes in UTF-8. Since real text contains many spaces, numbers, punctuation, markup (for e.g. web pages), and control characters, which take only one byte in UTF-8, this is only true for artificially constructed dense blocks of text. A more serious claim can be made for Devanagari and Bengali, which use multi-letter words and all the letters take 3 bytes in UTF-8 and only 2 in UTF-16.»¹
In the same vein, with reference to³:

  «The code points U+0800–U+FFFF take 3 bytes in UTF-8 but only 2 in UTF-16. This led to the idea that text in Chinese and other languages would take more space in UTF-8. However, text is only larger if there are more of these code points than 1-byte ASCII code points, and this rarely happens in real-world documents due to spaces, newlines, digits, punctuation, English words, and markup.»²

The .NET ecosystem isn't happy with UTF-16 being the default, but it is there in .NET and Windows for historical reasons.

  «Microsoft has stated that "UTF-16 [..] is a unique burden that Windows places on code that targets multiple platforms"»¹

___

1. https://en.wikipedia.org/wiki/UTF-16#Efficiency

2. https://en.wikipedia.org/wiki/UTF-8#Comparison_to_UTF-16

3. https://kitugenz.com/


The talk page behind the UTF-16 Wikipedia article is actually quite interesting. It seems the manifesto guys tried to push their agenda there, and the allusions to "real text" with missing citations are a remnant of that. Obviously there's no such thing as "real text", and the statements about it containing many spaces and punctuation are nonsense (many languages do not delimit words with spaces, plenty of text is not mostly markup, and so on).

Despite the frothing horde of web developers desperate to consider UTF-16 harmful, it's still a fact that the Consortium optimized Unicode for 16 bits (https://www.unicode.org/notes/tn12), and their initial guidance (use UTF-8 for compatibility and portability, like on the web, and UTF-16 for efficiency and processing, like in a database or in memory) is still sound.


Interesting link! It shows its age though (22 years): it makes the point that UTF-16 is already the "most dominant processing format", but if that were the deciding factor, then UTF-8 would be today's recommendation, as UTF-8 is now the default for online data exchange and storage; all my software assumes UTF-8 as the default as well. But I can't speak for people living and trading in places like South Asia, like you.

If one develops for clients requiring a varying set of textual scripts, one could sidestep the ideological discussion and just make an educated guess about the ratio of UTF-8 vs UTF-16 penalties. That should not be complicated: sometimes UTF-8 requires one more byte than UTF-16 would, sometimes it's the other way around.
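That educated guess can be made empirical in a few lines. A sketch (the sample strings are invented; a real measurement should run over your own stored texts):

```python
def utf8_to_utf16_ratio(samples):
    """Total UTF-8 bytes divided by total UTF-16 code-unit bytes.
    A ratio above 1 means UTF-16 is the smaller encoding here."""
    utf8 = sum(len(s.encode("utf-8")) for s in samples)
    utf16 = sum(len(s.encode("utf-16-le")) for s in samples)
    return utf8 / utf16

print(utf8_to_utf16_ratio(["hello world"]))    # 0.5: Latin favours UTF-8
print(utf8_to_utf16_ratio(["東京は日本の首都"]))  # 1.5: dense CJK favours UTF-16
```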


hn often makes me want to scream
