Hacker News | iFire's comments

I don't use TLA+ to model real-world systems anymore. Claude is able to model systems in Lean 4, and the resulting binary executable can handle real input; or I can directly generate C / Rust from proofs over numeric types that have ring structure (integers, rationals, bits).

https://github.com/lambdaclass/truth_research_zk
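
Not from the linked repo, just a toy Lean 4 sketch of the direction described: an executable function over a ring-structured type (Int) pinned to a spec by a proof. The names `double` and `double_spec` are made up for illustration.

```lean
-- Toy sketch: a runnable function plus a proof tying it to its spec.
def double (a : Int) : Int := a + a

-- `omega` decides linear integer arithmetic, closing the goal a + a = 2 * a.
theorem double_spec (a : Int) : double a = 2 * a := by
  unfold double; omega
```

From a definition like this, code generation to C / Rust only has to preserve the already-proved behavior.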


The point of TLA+ is easily readable specs with refinements. That's why it was designed in a novel way rather than in the older executable-with-more-complex-logic style that Lean (and other maths/spec languages) offer. I'm not saying you should prefer TLA+'s approach, but that's what it's meant to accomplish and LLMs don't change that.

I'm currently choosing the right formalization for a big hardware project.

I'm deciding between SVA, TLA+, and Lean, with the first being more domain-specific and the latter two more general.

Do you think we'll move towards "Lean for everything" or do domain specific formalisms still make sense?


Have you considered P? It feels like a good abstraction for engineers as it's "proper" code.

https://github.com/p-org/P


what's SVA?

SystemVerilog Assertions. Hardware (silicon ASICs, and often FPGAs too) is written in a language called SystemVerilog. It has a feature called "concurrent assertions", which is usually just called SVA.

These are sort of temporal regexes, e.g. you can write

  assert property($fell(rst) |-> foo == 1 ##[1:20] foo == 0);
This means: if the rst signal fell (changed to 0), then foo must be 1, and 1-20 cycles later it must be 0.
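
To ground those semantics, here is a toy Python trace checker (nothing like a real SVA tool, and `check_property` is a made-up name) for that one property over recorded (rst, foo) samples, one per clock cycle:

```python
# Toy checker for: $fell(rst) |-> foo == 1 ##[1:20] foo == 0
def check_property(trace):
    """trace: list of (rst, foo) tuples, one per cycle."""
    for t in range(1, len(trace)):
        prev_rst, _ = trace[t - 1]
        rst, foo = trace[t]
        if prev_rst == 1 and rst == 0:           # $fell(rst): a 1 -> 0 edge
            if foo != 1:                          # foo must be 1 on that cycle...
                return False
            window = [f for _, f in trace[t + 1 : t + 21]]
            if not any(f == 0 for f in window):   # ...and 0 within 1-20 cycles
                return False
    return True

# rst falls at cycle 1; foo is 1 there and goes to 0 two cycles later.
trace = [(1, 0), (0, 1), (0, 1), (0, 0), (0, 0)]
print(check_property(trace))  # True
```

The formal tools do much more than replay one trace, of course: they prove the property over all reachable states.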

The nice thing about them is that there are a few commercial tools that can formally verify them. They're super expensive (~$100k/year for one license), but fairly widely used because they work really well.

It's probably the most successful application of formal verification because it doesn't require much expertise to use, unlike software formal verification, which pretty much immediately requires you to become an expert on loop invariants, termination measures, Hoare triples, etc. At least that has been my experience.


Do you find Lean 4 sufficient for highly async systems?

I haven't made money on it yet, but I'm trying to model a WebTransport (HTTP/3, QUIC) system for massively multiplayer VR games.

See https://aws.amazon.com/builders-library/challenges-with-dist... for how async relates to distributed systems.


https://en.wikipedia.org/wiki/Essential_tuple_normal_form is cool!

Since I have a bad memory, I asked the AI to make me a mnemonic:

* Every

* Table

* Needs

* Full-keys (in its joins)


I have so many questions about that. Should that normal form basically replace 5NF for the purposes of teaching?

Why do they hate us and not provide any illustrative real-life example without algebraic notation? Is that even possible?

I just want to see a CREATE TABLE statement, and some illustrative SELECT statements. The standard examples always give just the dataset, but dataset examples are often ambiguous.

> (in its joins)

Do you understand what "its" joins are? What even is "it" here?

I'm super frustrated. This paper is 14 years old.
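
For what it's worth, the paper's sufficient condition (a relvar in BCNF where at least one key is a single attribute is in ETNF) can at least be sketched concretely. The table and column names below are made up for illustration, not taken from the paper:

```python
import sqlite3

# Hedged sketch: both determinants here are keys (so BCNF holds for these
# dependencies), and supply_id is a simple (single-attribute) key, which is
# the paper's sufficient condition for ETNF.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE supplies (
        supply_id INTEGER PRIMARY KEY,   -- simple key => ETNF condition
        supplier  TEXT NOT NULL,
        part      TEXT NOT NULL,
        UNIQUE (supplier, part)          -- the composite "business" key
    )
""")
conn.execute("INSERT INTO supplies VALUES (1, 'S1', 'P1')")
rows = conn.execute("SELECT supplier, part FROM supplies").fetchall()
print(rows)  # [('S1', 'P1')]
```

Whether a real schema is in ETNF still depends on its actual functional and join dependencies, which a CREATE TABLE alone can't express.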



Chris Date has a course on this using his suppliers-and-parts example. I don't have time to find it, but maybe AI can.

https://www.oreilly.com/videos/c-j-dates-database/9781449336...

https://www.amazon.ca/Database-Design-Relational-Theory-Norm...


Orange light synced with the daybreak and sunrise made my life better.

So something is odd with this scientific research. Any explanations?


In a way, the linked "mixed" research did show positive effects. It says "little effect", but that can easily mean a great effect for some and nothing for others. Also, this is still not definitive; the number of studies is unfortunately low.

Any real doctor would tell you that if it works for you, keep the habit. We are all different, and there are outliers in everything.


Exposure to lots of light, synced with the daybreak, causing an improvement in your life is perfectly consistent with pretty much every study out there. The conflicted research the article linked to is whether or not the color of dim lights in the evening has an effect.


Ah! So it's the total light output of the light not the specific color. So it means I can use regular colours but dimmer. That would make late night color editing better.


So it's saying orange light but really really bright makes a difference?

That's ok.

I usually have really dim orange lights...

Edit: Or was it the contrapositive..


Here's the ELI5 version:

What it's saying is that the color of light doesn't make a difference, especially if the light is dim. There was research into the effects of the color of dim light and separate research into the effects of bright light during different times of the day.

The research into bright light found that lots of it in the morning was really beneficial, and bright light late into the night is harmful. The statistics in the research showed it was very accurate, and the article doesn't doubt it.

The other research, into the effects of the color of dim light, found that dim blue light late at night was harmful but dim yellow light wasn't, although the statistics on this research didn't show a very strong effect. The article discusses some follow-up research showing there was likely no effect at all.

The takeaway is that it's still important to get lots of light in the morning, and not too much light late at night, but it's not worth worrying about what color the light is.


So if this research is true, I can get the benefits of better sleep without using https://justgetflux.com/news/pages/v4/welcome/. How would that work?

F.lux is fairly intrusive.


I, for one, welcome everyone to the tarpit where a normal person is seen as a robot in an endless poison pit. It sounds like a Black Mirror episode.


https://github.com/plastic-labs/honcho has the idea of one-sided observations for RAG.


https://github.com/replikativ/stratum is Apache-2.0-licensed FOSS!

> Stratum is a columnar analytics engine that combines the performance of fused SIMD execution with the semantics of immutable data

What are your thoughts on investing in a columnar-based database rather than a hybrid one?

I'm in the game development space.


Hey. Hybrid in which sense?

I have integrated Stratum's columnar indices as a secondary index in the new query engine of https://github.com/replikativ/datahike itself, so for numerical data you will be able to use Datalog/SQL to have combined (OLTP, OLAP, ...) processing. Same for proximum (persistent HNSW vector index) and scriptum (persistent Lucene).

Stratum can already be copy-on-write updated online with better write throughput than purely columnar alternatives (Stratum uses a persistent B-tree over column chunks), as far as I have tested. I have not compared it in benchmarks yet, though; DuckDB recommends not updating it online, for instance. But it depends on the workload: if you do random-access writes, the columnar-layout overhead will still be a slowdown compared to OLTP/Datahike's row/entity-wise indices. Also, storing fully variable-length strings in a column is inefficient; for that you want the entity-wise indices.
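
The copy-on-write-over-column-chunks idea can be sketched in a few lines of toy Python (this is not Stratum's actual code; `make_column`, `cow_update`, and `CHUNK` are made up for illustration):

```python
# A column split into fixed-size chunks; an update copies only the touched
# chunk, so old readers keep a consistent snapshot while writers proceed.
CHUNK = 4

def make_column(values):
    values = list(values)
    return [values[i:i + CHUNK] for i in range(0, len(values), CHUNK)]

def cow_update(column, index, value):
    """Return a new column version sharing all untouched chunks."""
    ci, offset = divmod(index, CHUNK)
    new_chunk = list(column[ci])        # copy only the chunk being written
    new_chunk[offset] = value
    return column[:ci] + [new_chunk] + column[ci + 1:]

v1 = make_column(range(10))
v2 = cow_update(v1, 5, 99)

print(v1[1][1])        # 5    -- the old snapshot is unchanged
print(v2[1][1])        # 99   -- the new version sees the write
print(v1[0] is v2[0])  # True -- untouched chunks are structurally shared
```

In a persistent B-tree over chunks, the same trick applies along the path from root to leaf, which is what makes online updates cheap relative to rewriting whole columns.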


Agree.

I don't write for sentimentality. I write so that my code designs can survive longer than my work on it.

No documentation is worse than deceit.

The emptiness and vastness of the void (entropy) is much deeper than humans or machines.

A Google search says this philosophy is called https://plato.stanford.edu/entries/content-externalism/


At the Ise Jingu, the shrine is not built to last; it is built to be reconstructed from scratch every twenty years.

If we want our systems to last, we would need the "process knowledge"—the actual mastery of the craft—to be in human hands rather than decaying in a dead system.

I don't think we can afford to process-knowledge-transfer many of our essential systems... without machine assistance.


Is this about https://github.com/software-mansion/popcorn ?

I love their webassembly live demo https://popcorn.swmansion.com/#live-demo

Edited:

Oh, this is using an LLM to write WebAssembly as an ISA IR (like bytecode)?

See https://news.ycombinator.com/item?id=47118778


Most of the platforms were successfully petitioned to have the Rust SDK added as mandatory, so that Rust code can be added to the platforms. Previously, Rust was not allowed because the external dependency on the Rust SDK was blocked.

Note that Rust having no stable ABI is not fixed, so I think there's a bunch of internal systems on each platform to hard-lock the Rust dependencies across multiple Rust users.

There's some friction between platform packagers and authors who want their code shipped exactly as written.


You've got a good point.

Why not have the LLM directly write ISA assembly? We're still grading based on results / theory proofs, and, for example, certifications for cryptographic government use are based on binaries, not sources.

Edited:

Why not go further and print the chips and PCBs directly via 3D printing from LLM instructions?

Edited (joke):

Why not go the furthest and turn the entire earth into a computer and grey goo?


> Why not directly have the llm write ISA assembly.

The honest, earnest answer is that it's a bad idea because it is not portable. Unfortunately for Intel, they don't have the dominance they once did, so you have to pick between ARM, x86, or something more exotic, and then be tied to that specific ISA. It's an interesting thought, though.


My approach for a game sandbox https://github.com/libriscv/godot-sandbox for user-generated content was to experiment with standardizing on a RISC-V 64-bit Linux ISA.

https://bellard.org/jslinux/ Bellard is notable for this approach, where you write a RISC-V execution layer and then run a Windows / Linux / DOS etc. emulator on top.

https://bellard.org/tinyemu/


We have different IR backends to make it portable, btw: LLVM, QBE, etc. In the repo I mentioned in the link, I am goofing around with LLVM exactly to figure this out.


I think you got the point. I am under the impression that we are going nowhere with this approach of treating token generation as a solution for everything. For some reason it feels like brute force to me, but the stakes right now are too high, so stepping back is not an option for the bigger players.


That might be good. Not seeing something as the answer to everything often means we are starting to pass the honeymoon stage.


Maaaybe, just mayyybe, there are not enough examples of producing valid assembly in the training weights? And it's easy for glorified autocomplete machinery to impress by spitting back what was fed in from scraping OSS repos?


Huh?

I thought the latest advance in computing (spring 2025, last year) was self-play / reinforcement learning, since we ran out of training data a few years ago.

https://github.com/OpenPipe/ART

Reinforcement learning has the large language model devise puzzles that it then solves, scored via LLM-as-judge.

The definition of LLM-as-judge: your LLM generates 8-12 trajectories and a different LLM judges the results. For the problem of ISA-assembly creation, I'd use an oracle like actual Windows or Linux operating-system execution.

The winning entries are used to train the large language model.
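
The loop above can be sketched in toy Python. All names here (`generate_trajectories`, `judge`, `best_of_n`) are made up; a real setup would call actual LLM APIs and an execution oracle instead of these stand-ins:

```python
import random

def generate_trajectories(prompt, n=8):
    """Stand-in for the policy LLM sampling n candidate solutions."""
    return [f"{prompt}-candidate-{i}" for i in range(n)]

def judge(trajectory):
    """Stand-in for a judge LLM or execution oracle returning a score."""
    return random.random()  # a real judge would score actual correctness

def best_of_n(prompt, n=8):
    candidates = generate_trajectories(prompt, n)
    scored = sorted(candidates, key=judge, reverse=True)
    return scored[0]  # the winner goes into the next training batch

winner = best_of_n("write-a-bootloader")
print(winner.startswith("write-a-bootloader-candidate-"))  # True
```

For assembly generation specifically, `judge` would be the execution oracle: assemble the output, run it, and score whether it behaves as specified.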

