The money itself might not be dirty; couldn’t you just claim something like “I sold a secret, highly valuable algorithm to this guy”? Tax would still need to be paid, of course.
Immediate follow-up questions from the tax man, and then shortly afterwards the police: "Who is this guy? Where is the invoice? What is his phone number?"
No, it doesn't typically work that way at all. The tax man just wants to get paid.
I grew up in an area known for people growing cannabis before it was legal. An enormous amount of taxes got dodged through cash land deals, but tons of people just claimed the income under various categories and no one ever came knocking because of that.
It's usually the other way around. If you caught the Feds' eye, then they might try to get you on tax evasion or something. Although frankly, even that was very rare. There were just a lot of more obvious fish to fry.
“I didn’t see these specific people get caught much in this specific situation, therefore in general it works this way” - do you see how silly this sounds?
>Income from illegal activities, such as money from dealing illegal drugs, must be included in your income on Schedule 1 (Form 1040), line 8z, or on Schedule C (Form 1040) if from your self-employment activity.
That's there too; see https://support.google.com/docs/answer/14206696 — you can click on "Ask Gemini ⟡" and carry on a conversation, e.g. "summarize emails about <topic>", then paste the results into the doc. (I haven't found all that much use for referencing other files, though. But the "proper chat" is useful for saying things like "no, actually I meant something more like: …" and carrying on.)
No, it wouldn't; companies like Nintendo and Sony make money from selling games on their platforms, and they still need to sell as many of their consoles as possible to do that.
It’s more like a torrent client that has thepiratebay built into it. The main point of Tachiyomi is to have easy access to pirated content directly in the app. Tachiyomi without extensions is useless.
Another analogy: an emulator that also lists all ROMs and lets you play them directly. It downloads the ROMs from various websites, similar to Tachiyomi.
The example at the end made me wonder if Apple's model is actually better than GPT-2 for text prediction. It generated garbage, but all that garbage made some sense in the context of only the word "Today".
GPT-2, on the other hand, hallucinated random stuff about the US government. A text prediction model should predict what the user wanted to type, so if you evaluate the models based on that, GPT-2 actually performed horribly, since the user showed zero intent to talk about the US.
The example at the end sounds just like the predictions you get from normal phone keyboards in the last couple of years, which presumably don't use a modern GPT-style language model. A bit disappointing.
Seriously disappointing. I was at least expecting it not to produce total gibberish. It acts like a Markov chain that only considers the last 1-2 words. Identical to the currently-shipping thing that we've had for the past however-many years.
The fact that people keep trying to draw this comparison proves that making good products is harder than it seems...
The default goal everyone is assuming is spitting out the longest correct sequence possible.
But in reality, the mental cost of a wildly wrong prediction is much higher than the mental cost of a slightly wrong one, so what you'd train the model for is suggesting sequences of a few words at most, and only with high confidence.
Most people can/will tune out slightly wrong words, especially as they get a feel for what autocorrect is good and bad at.
If you unleash the full range of tokens GPT-2 can normally output, you'll constantly be blasting out words the user didn't expect.
—
The fact that your long-sequence prediction got better doesn't matter, because the UI is autocomplete, not "auto-write": the user is still expecting to drive, and a smart but noisy copilot is worse than a dumb but lazy one in that case.
I wouldn't be surprised if they trained the model to an effective context window of just a few hundred tokens with that in mind.
GPT-2 saw "today", thought "this must be news copy", and generated more news copy. Given a few more words, it could have narrowed down the context. The Apple suggestions aren't even grammatically correct, seemingly no different from the shallow statistical completion we've already had for years, so it's weird that they branded it in lofty AI terms.
Or it could be the contrary. The new feature doesn't suggest whole sentences because the model they are using produces gibberish. It is quite possible that if the model were better, then they would allow it to suggest longer phrases.
I suspect someone (Craig, even) was under some pressure from The Board to have >0 references to generative AI in their presentation this year, since every single company (even non-software) is now expected by Wall St to "be doing some AI". Even though Apple is at the top of the heap with ML in photography and many other domains, without some kind of LLM the tech news narrative will be "Apple is years behind".
It seems obvious to me that it's not, because if you asked a human to guess what comes after "today" in a text, they'd never say "probably some gibberish about a day a day".
Garbage in, garbage out? The preceding text is gibberish, so the prediction will be worse. Presumably they also only show completions that clear a much higher confidence threshold.
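A gate like that is cheap to put in front of any next-word model. A minimal sketch in Python, where `next_word_probs` is a purely hypothetical stand-in for whatever model and interface Apple actually uses:

    def suggest_next_word(prefix, next_word_probs, threshold=0.9):
        """Show a suggestion only when the model is confident enough.

        `next_word_probs(prefix)` is a hypothetical interface returning a
        dict that maps candidate next words to their probabilities.
        """
        word, prob = max(next_word_probs(prefix).items(), key=lambda kv: kv[1])
        # Below the threshold, suggest nothing rather than risk a jarring miss.
        return word if prob >= threshold else None

With a high threshold you rarely see long, confident-looking gibberish; the trade-off is that most keystrokes produce no suggestion at all.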
Maybe: "Today was fine. Since I've retired, I'm taking my life a day a day".
Or maybe I wanted to express myself in the timeless words of the poets:
"A day, a day of glory! A day that ends our woe! A day that tells of triumph. Against our vanquished foe!"
"Rose is a rose is a rose is a rose. Loveliness extreme.
Extra gaiters. Loveliness extreme."
"A-well now, everybody's heard about the bird, everybody's heard about the bird, About the bird, the bird, bird bird bird, Haven't you heard about the bird? Don't you know that the bird's the word?"
The text does point out that the branch is somewhat predictable even for a random array. In that case, the odds of having seen the array-maximum increase as you scan through the array. For example, on a random array I would predict the first iteration to take the branch 1/2 of the time, but the last iteration to take the branch only 1/N of the time.
The CPU's branch predictor won't be able to perform that kind of algorithmic analysis, but patterns like the above also work reasonably well with simpler heuristics like 'predict the same outcome as the last time this branch was executed'.
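You can see the effect with a quick simulation; a sketch in Python (not from the article), counting how often the 'new max' branch is taken at each position of a random array:

    import random

    N = 16
    TRIALS = 100_000
    taken = [0] * N  # taken[i]: how often a[i] > max of a[0..i-1]

    for _ in range(TRIALS):
        a = [random.random() for _ in range(N)]
        best = a[0]
        for i in range(1, N):
            if a[i] > best:      # the branch in question
                taken[i] += 1
                best = a[i]

    for i in range(1, N):
        # For a random array, P(taken at position i) is 1/(i+1).
        print(f"i={i:2d}  observed={taken[i]/TRIALS:.3f}  expected={1/(i+1):.3f}")

The taken rate falls off as 1/(i+1): about 1/2 on the first iteration and 1/N on the last, so a simple 'not taken' (or 'same as last time') guess gets more and more accurate toward the end of the scan.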
This was a flaw on the Nokia N900. Or at least, on some of the devices.
I solved the issue by using TOPK magnetic cables. The magnet tip would remain in the port (microUSB, Lightning, or USB-C; not sure about miniUSB, but I've only got one device that uses that), and all the cables would work with it. I even leave them connected (the LED doesn't draw much power, and I've got solar anyway), which allows me to quickly start or stop charging a device. Given that I've got quite a few devices, that's very useful. Another issue is that I keep ending up with leftover microUSB tips but need more and more USB-C ones, for which the feature isn't very important (the more USB-C I had, the easier it'd be to stop using magnetic cables).
The exception is my MBP, as it has MagSafe (v3; my wife's MBP and my old MBP have v2), but I forgive Apple for that; it's very useful and probably where TOPK got their inspiration from.