The slowing down thing sounds like a hack needed for engines that don’t give you control over the main loop.
I haven't tried this yet, but for a custom engine I would introduce a second delta time that is set to 0 in the paused state. Multiplying by the paused dt "bakes in" the pause without having to sprinkle ifs everywhere. Multiplying by the conventional dt makes the thing happen even when paused (debug camera, UI animations).
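Something like this minimal sketch, I imagine (a Python stand-in for a hypothetical custom-engine loop; the Clock class and all its names are made up):

```python
import time

class Clock:
    """Tracks two delta times: one that respects the pause, one that doesn't."""

    def __init__(self):
        self.paused = False
        self._last = time.perf_counter()
        self.dt = 0.0         # conventional dt: real time between frames
        self.paused_dt = 0.0  # pause-aware dt: forced to 0 while paused

    def tick(self):
        now = time.perf_counter()
        self.dt = now - self._last
        self._last = now
        self.paused_dt = 0.0 if self.paused else self.dt

# Gameplay code multiplies by paused_dt and freezes for free:
#     position += velocity * clock.paused_dt
# Debug camera / UI animations multiply by dt and keep running:
#     menu_alpha += fade_speed * clock.dt
```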
I don't think it's a hack necessarily: a well-implemented time system would produce the same results at game rate = 0 as with a pause.
Also, there's a need for different time domains. Imagine that in a paused state the menu animations still need to play, or that when the player enters a conversation, the game logic needs to pause (depending on designer intent, etc.).
Unity does this. You get Time.deltaTime, which is scaled by Time.timeScale (set the time scale to 0 and it returns 0), and Time.unscaledDeltaTime, which returns the time between frames ignoring the game speed. Pauseable logic uses the former. Pause menus use the latter.
You have some packets of data a, b, c. Add one additional packet z that is computed as z = a ^ b ^ c. Now whenever one of a, b or c gets corrupted or lost, it can be reconstructed by computing the XOR of all the others.
So if b is lost: b = a ^ c ^ z. This works for any packet, but only one. If multiple are lost, this will fail.
There are way better error correction algorithms, but I like the simplicity of this one.
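A quick sketch of the scheme in Python, treating packets as equal-length byte strings:

```python
def xor_bytes(*chunks: bytes) -> bytes:
    """XOR any number of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

a, b, c = b"pkt-a-data", b"pkt-b-data", b"pkt-c-data"
z = xor_bytes(a, b, c)  # the extra parity packet

# Suppose b is lost in transit; rebuild it from the survivors:
recovered = xor_bytes(a, c, z)
assert recovered == b
```

The same trick recovers any single missing packet; with two or more missing, the XOR no longer pins down a unique solution.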
Colour of bits isn't a property of the bits. It's provenance: facts about the history of the thing.
There may be no trace from pure noise back to the original work, but you didn't get that particular noise randomly; you in fact got it from the original work.
Once you understand that the law cares less about the thing itself and more about the causal chain that led to it, it stops seeming magical and becomes perfectly reasonable.
(Also, FWIW, it's not that far conceptually from code = data, yet there are still tons of technical people who can't comprehend the fact that there is no code/data distinction in reality. "Code" vs. "data" isn't a property of bits either; it's only a matter of perspective.)
In this particular case there's also the simpler, more technical/mathematical argument: you cannot possibly just "accidentally" have that exact noise. Getting those specific bits, rather than any other sequence of that length from the space of random numbers, requires you to expend effort at least equivalent to possessing the exact copyrighted work that happens to fall out of the XOR exercise.
Except there are two people, B and C, each holding noise. The only thing you can prove is that if you XOR both noise vectors together, you get a copyrighted work.
Both people will claim they are innocent and that the other person produced their "noise" vector by XORing the first person's vector with the copyrighted work.
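To make the setup concrete, here's a Python sketch (the filename is hypothetical): split a work into two vectors that are each, on their own, indistinguishable from random noise:

```python
import secrets

work = open("song.mp3", "rb").read()  # the copyrighted work (hypothetical file)

noise_b = secrets.token_bytes(len(work))               # B's vector: genuinely random
noise_c = bytes(w ^ n for w, n in zip(work, noise_b))  # C's vector: work XOR noise_b

# Neither vector alone carries any trace of the work;
# only the XOR of both reproduces it:
assert bytes(x ^ y for x, y in zip(noise_b, noise_c)) == work
```

Nothing in the bits themselves says which vector was generated first, which is exactly the symmetry the two claims exploit.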
> Both people will claim they are innocent and that the other person produced their "noise" vector by XORing the first person's vector with the copyrighted work.
Simple: since you can't find out who is telling the truth, just jail both. :-)
If they are all posting their noise vectors up on xor-music.com, sure. If they have valid reasons for making available a specific 'noise' vector (maybe they can prove it decrypts to something useful), then probably not.
Judges and juries don't need guilt to be mathematically proven; they just have to be pretty sure.
Yes. Or at least hint at it, at which point someone will probably volunteer or let slip information that gives you a rough shape of the causal chain. Then you know where to dig and whom to pressure, and eventually you convince someone to confess, or you get a warrant to be sure.
If the prosecuting side has a reason to care that much, it doesn't matter whether it's 10 or 100 people - in fact, if it's 100 people, the original source is in deeper shit because this is now obviously not just personal use, but distribution.
Sounds nice on paper, but it becomes exponentially more difficult when many people are involved and some subsets of the vectors, XORed together, demonstrably yield legal content.
There is no trace from a dead body back to the original act of killing, but police regularly manage to link them anyway (at least when the body had a large enough bank account).
They do this by means such as "questioning people" and "finding evidence". For example, if you have a file on your computer describing your plan to use XOR to infringe copyright, that would be considered "evidence".
This glosses over the fact that the first crime is considerably messier, while the other is extremely clean and can be committed where the law cannot look without a warrant.
> This is because legal people want something to exist that does not physically exist.
No law exists "physically".
Apart from that: even in computer science the situation is more complicated, as the linked articles explain. Relevant excerpt from the first linked article:
"Child pornography is an interesting case because I find myself, and I think many people in the computing community will find themselves, on the opposite side of the Colourful/Colour-blind gap from where I would normally be. In copyright I spend a lot of time explaining why Colour doesn't exist and it doesn't matter where the bits came from. But when it comes to child pornography, I think maybe Colour should make a difference - if we're going to ban it at all, it should matter where it came from. Whether any children were actually involved, who did or didn't give consent, in short: what Colour the bits are. The other side takes the opposite tack: child pornography is dangerous by its very existence, and it doesn't matter where it came from. They're claiming that whether some bits are child pornography or not, and if so, whether they're illegal or not, should be entirely determined by (strictly a function of) the bits themselves. Legality, at least under the obscenity law, should not involve Colour distinctions.
[...]
The computer science applications of Colour seem to be mostly specific to security. Suppose your computer is infected with a worm or virus. You want to disinfect it. What do you do? You boot it up from original write-protected install media. Sure, you have a copy of the operating system on the drive already, but you can't use that copy - it's the wrong Colour. Then you go through a process of replacing files, maybe examining files, swapping disks around and carefully write-protecting them; throughout, you're maintaining information on the Colour of each part of the system and each disk until you've isolated the questionable files and everything else is known to be the "not infected with virus" Colour. Note that developers of Web applications in Perl use a similar scorekeeping system to keep track of which bits are "tainted" by influence from user input.
When we use Colour like that to protect ourselves against viruses or malicious input, we're using the Colour to conservatively approximate a difficult or impossible to compute function of the bits. Either our operating system is infected, or it is not. A given sequence of bits either is an infected file or isn't, and the same sequence of bits will always be either infected or not. Disinfecting a file changes the bits. Infected or not is a function, not a Colour. The trouble is that because any of our files might be infected including the tools we would use to test for infection, we can't reliably compute the "is infected" function, so we use Colour to approximate "is infected" with something that we can compute and manage - namely "might be infected". Note that "might be infected" is not a function; the same file can be "might be infected" or "not (might be infected)" depending on where it came from. That is a Colour.
[...]
Random numbers have a Colour different from that of non-random numbers. [...]
Note my terminology - I spoke of "randomly generated" numbers. Conscientious cryptographers refuse to use the term "random numbers". They'll persistently and annoyingly correct you to say "randomly generated numbers" instead, because it's not the numbers that are or are not random, it's the source of the numbers that is or is not random. If you have numbers that are supposed to come from a random source and you start testing them to make sure they're really "random", and you throw out the ones that seem not to be, then you end up reducing the Shannon entropy of the source, violating the constraints of the one-time pad if that's relevant to your application, and generally harming security. I just threw a bunch of math terms at you in that sentence and I don't plan to explain them here, but all cryptographers understand that it's not the numbers that matter when you're talking about randomness. What matters is where the numbers came from - that is, exactly, their Colour.
So if we think we understand cryptography, we ought to be able to understand that Colour is something real even though it is also true that bits by themselves do not have Colour. I think it's time for computer people to take Colour more seriously - if only so that we can better explain to the lawyers why they must give up their dream of enforcing Colour inside Friend Computer, where Colour does not and cannot exist."
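(To make the taint analogy concrete, here's a toy Python sketch of Colour-as-taint-tracking; my own illustration, not from the article:)

```python
class Tainted(str):
    """Same bits as a plain str; only the provenance (Colour) differs."""

def read_user_input(raw: str) -> Tainted:
    return Tainted(raw)  # anything from the user starts out "might be malicious"

def run_query(sql: str) -> None:
    if isinstance(sql, Tainted):
        raise ValueError("refusing to run SQL built from tainted input")
    print("executing:", sql)

name = read_user_input("Robert'); DROP TABLE students;--")
run_query("SELECT * FROM users")  # fine: an untainted literal

try:
    # A real taint system propagates Colour through concatenation
    # automatically; here we re-wrap by hand to keep the sketch short.
    run_query(Tainted("SELECT * FROM users WHERE name = '" + name + "'"))
except ValueError as e:
    print("blocked:", e)  # blocked by Colour (provenance), not by the bytes
```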
Whether people are liable is a question for the courts, and I suspect they simply look through the tech and ask "do you end up with a copy of the work?"
(unless you're an AI company, in which case you can copy the whole internet just fine)
But that's an unsolvable question. Just like when A is caught on camera stealing a diamond, but A turns out to have an identical twin B. So the prosecutor can't do anything.
And you could say the same is true if you lost an AES key. But if they can establish a chain of evidence that shows (to whatever degree the court you're in requires) that it does contain the work, you've lost.
How many ways could they do this? Could they note in court that they found you getting your copy from a "super secure no liability legal loophole" piracy service? Could they just get B's side, whether through subpoena or whatever mechanism you have to communicate with B? (You must, since your file is "just noise" and useless to you as it is)
The evidence of infringement would be apparent when b and c were co-located and there was a utility sitting next to them for XORing the files and piping the result into VLC.
It's similar to RAID schemes, but instead of drive failure it's port unavailability. There's a reference at [1], and an FPGA-centric one at [2], but it applies anywhere dual/single-port RAMs are readily available and anything more exotic isn't. (Toy sketch after the links.)
[1] Achieving Multi-Port Memory Performance on Single-Port Memory with Coding Techniques - https://arxiv.org/abs/2001.09599
[2] https://people.csail.mit.edu/ml/pubs/fpga12_xor.pdf
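A toy Python model of the read-side idea (my own simplification, not the exact scheme from [1] or [2]): stripe data across two single-port banks plus an XOR parity bank, and you can serve two reads per cycle even when both hit the same bank, touching each physical port only once:

```python
class CodedMemory:
    """Two single-port banks + an XOR parity bank = two reads per cycle."""

    def __init__(self, rows: int):
        self.bank = [[0] * rows, [0] * rows]
        self.parity = [0] * rows  # invariant: parity[r] == bank[0][r] ^ bank[1][r]

    def write(self, addr: int, value: int) -> None:
        # (Write-side port accounting is ignored in this toy model.)
        b, r = addr % 2, addr // 2
        self.bank[b][r] = value
        self.parity[r] = self.bank[0][r] ^ self.bank[1][r]

    def read2(self, addr_a: int, addr_b: int):
        ba, ra = addr_a % 2, addr_a // 2
        bb, rb = addr_b % 2, addr_b // 2
        va = self.bank[ba][ra]  # first read goes straight to its bank
        if ba != bb:
            vb = self.bank[bb][rb]  # different banks: no conflict
        else:
            # Bank conflict: reconstruct from the *other* bank plus parity,
            # so every physical port is still used at most once this cycle.
            vb = self.parity[rb] ^ self.bank[1 - bb][rb]
        return va, vb

m = CodedMemory(rows=8)
m.write(0, 11)
m.write(2, 22)                    # both values land in bank 0
assert m.read2(0, 2) == (11, 22)  # served despite the bank conflict
```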
See also: RAID levels that use one disk for parity. Three disks is simplest, but technically you can do more if you trust that only one will go bad at a time.
A few months ago, I had a rare occasion of trying to explain them to a relative who had just bought a fancy NAS and wanted help setting it up.
There are myriad middle states between "frupid" (so frugal that it's stupid) and "Instagram scale".
Python requires much more hand-holding than many want to do, for good reasons (I prefer to work on the product unimpeded, and I take no pride in having the knowledge to babysit obsolete stacks carried along by university nostalgia).
With Go, Rust, Zig, and a few others -- it's a single binary.
This is a post about keeping your infrastructure simple, so Instagram is not a good ceiling to pick. People do all kinds of hacks to scale Python before they hit Instagram levels.
Not too long ago, I thought Markdown was the bee's knees. But having been forced to write some documentation in plain text, I learned that plain text is significantly more readable than raw Markdown.
I think one of Markdown's biggest sins is how it handles line breaks. Single line breaks being discarded in the output guarantees that your nicely formatted text will look worse when rendered. I understand there are use cases for this. But this and the "add a trailing space" workaround are particularly terrible for code documentation.
> I think one of Markdown's biggest sins is how it handles line breaks. Single line breaks being discarded in the output guarantees that your nicely formatted text will look worse when rendered.
My experience has been the complete opposite. Markdown parsers that don’t discard single linebreaks (e.g. GitHub-flavored markdown) turn my nicely formatted text into a ragged mess of partially-filled lines. Or for narrow display widths, an alternating series of long and short lines.
Markdown parsers that correctly discard single linebreaks make sure that the source text (reflowed to fit a max number of characters per line) and the rendered text (reflowed to fit the display width per line) both look reasonable.
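You can see both behaviors side by side with, e.g., the Python-Markdown package (the nl2br extension approximates the GitHub-comment style; exact output may vary by version):

```python
import markdown  # pip install markdown

src = "This sentence is wrapped\nat eighty columns in the source."

print(markdown.markdown(src))
# <p>This sentence is wrapped
# at eighty columns in the source.</p>
# -> the linebreak stays soft; the browser reflows the paragraph

print(markdown.markdown(src, extensions=["nl2br"]))
# <p>This sentence is wrapped<br />
# at eighty columns in the source.</p>
# -> the linebreak is hardened; rendering mirrors the source's ragged edges
```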
> But many modern artists challenge these long traditions, creating statues of figures that are fully clothed. Consider Thomas J. Price’s “Grounded in the Stars”: a 12-foot, monumental sculpture of a woman standing in heroic counterpoise, wearing a T-shirt, leggings and comfortable shoes!
Looking at that modern statue, I can't help but be bored. It doesn't draw my attention. I think that's because it depicts a normal, everyday clothed person. We see those every day. It's something mundane.
A naked statue is more interesting to me. It's less a depiction of a person and more of mankind in general. It has an abstract but intimate quality, inviting reflection (wow, that sounds posh).
Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.
How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.
> Programmers have enjoyed an occupation with solid stability and growing opportunities.
This is not the case:
- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").
- After the burst of the first dotcom bubble, a lot of programmers were unemployed.
- Every older programmer can tell you how fast the skills they have can become, and have become, irrelevant.
Over the last decade, the stability and opportunities for programmers have looked more like a series of boom-bust cycles.
Let me put it this way: I do have my opinion on this topic, but the whole thing is insanely multi-faceted, and some claims that I am rather certain about sit closer to the boundary of the Overton window of HN, so I won't post them here.
But the article this whole discussion is about offers, in my opinion, a rather balanced perspective on using AI for coding (which does not mean the article is close to my own opinion).
I will just give some less controversial thoughts and advice concerning AI:
- A huge problem when discussing AI is that the whole topic is a hodgepodge of various very diverse topics.
- The (current) AI industry has invested a lot of marketing effort in redefining what AI stood for in the past (it has basically convinced the mass of people that "AI = what we are offering").
- I cannot say whether AI will be capable of replacing lots of people in office jobs or not (I have serious doubts). The media love to push this topic, but in my opinion it does not really matter: the agenda is rather to spread fear among employees to make them more obedient.
- Even if AI turns out to be capable of replacing only a few office workers (the scenario I find more likely), that does not mean management will not use "AI"/"replaced by AI" as a very convenient excuse to get rid of lots of employees. The dismissed workers will then mostly vent their spleen on the AI companies instead of the management; in other words: AI is a very convenient scapegoat for inconvenient management decisions.
And yes, I consider it to be possible that some event that leads to mass layoffs might happen in a few years (but this is speculative).
- While I cannot say how much quality improvement is possible for current AI models (i.e. I don't know whether there exists a technological barrier), the signs are clear that as of today AI companies have hit some soft "cost barriers". I don't know whether these are easily solvable or not, but be aware of their existence.
- So, my advice is: if an AI model is of use for some project you have (e.g. generating graphics/content for your web platform; using it as a tool for developing the next scientific breakthrough; ...), do it now. Don't assume the models will keep doing this nearly for free in the future (it may be that this stays possible, but be cautious).
The endgame is to produce AI that will not need any supervision by the time the current generation of experienced developers retires, or even sooner. I don't know if it will happen, but many are betting on it, and models are still improving; no flattening is seen yet.
This implies programming is done and there will be no other advancements.
And flattening is being seen, no? Recent advancements are mostly from RL’ing, which has limitations (and tradeoffs) too. Are there more tricks after that?
Yeah, even the AI CEOs are admitting that training scaling is over. They claim that we can keep the party going with post training scaling, which I personally find hard to believe but I'm not really up to speed on those techs.
I mean, maybe you can just keep an eye on what people are using the tools for and then monkey-patch your way to something sufficiently AGI. I'll believe it when we're all begging outside the data centers for bread.
[Based on the rest of the history of science and technology advancements since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, etc.]
For a brief blip in time over the last few years, it was possible to jump from a code camp to a decent-paying job and vaguely disappear for a while, like Milton from Office Space. The current period, in a bad economy, is more of a reversion to the mean.