andoando's comments

Imagine you went back 100 years and someone said, "Come up with a mathematical system that can express any sequence of logical steps." Do you imagine what you would deliver is a few primitives and a few simple rules, and that you'd say "here you go, this is fully complete"? It's actually quite remarkable that Church/Turing didn't start off from primitives like if statements, loops, etc.

Lambda calculus is from the 1930s and predates computers; its point is that it's a bare-bones model of computation. It doesn't make much sense to compare it to modern languages in efficacy, as that seems to imagine someone came up with lambda calculus in 2010 alongside Java, C, Python, etc.

The super cool thing about it is that it is capable of expressing ALL the computation you know today from those few primitives. An "if" statement, for example, is λb.λx.λy. b x y
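To make this concrete, here's a minimal sketch of the Church-boolean encoding in Python; TRUE, FALSE, and IF are just names I've given the lambda terms, nothing standard:

    # Church booleans: a boolean is a function that picks one of two arguments.
    TRUE = lambda x: lambda y: x    # selects the first argument
    FALSE = lambda x: lambda y: y   # selects the second argument

    # The "if" from the comment above: λb.λx.λy. b x y
    IF = lambda b: lambda x: lambda y: b(x)(y)

    print(IF(TRUE)("then-branch")("else-branch"))   # then-branch
    print(IF(FALSE)("then-branch")("else-branch"))  # else-branch

(Caveat: Python is strict, so both branches are evaluated before IF picks one; in the pure calculus that isn't an issue.)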


You can simulate a trivial 'cond' with 'or' and 'and'.
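In Python, that's the old short-circuit idiom, with the standard caveat that it breaks when the 'then' value is falsy:

    # (cond and then) or else -- fine as long as `then` is never falsy
    def pick(cond):
        return (cond and "then-branch") or "else-branch"

    print(pick(True))   # then-branch
    print(pick(False))  # else-branch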

You spin up a second host and load balance

With AI now, writing queries is a joke. But you can just create a two-column table, key + JSONB, and call it a day, and you get your easy document store + indexes, JSON search, relational goodness, and atomicity and consistency for free.
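A minimal sketch of that layout, assuming Postgres and psycopg2; the table, index, and connection details are all made up:

    import json
    import psycopg2  # assumes a reachable Postgres instance

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    cur = conn.cursor()

    # Two columns: a key and a JSONB blob, plus a GIN index for JSON search.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            key text PRIMARY KEY,
            doc jsonb NOT NULL
        );
        CREATE INDEX IF NOT EXISTS docs_doc_idx ON docs USING gin (doc);
    """)

    # Document-store-style upsert...
    cur.execute(
        "INSERT INTO docs VALUES (%s, %s) "
        "ON CONFLICT (key) DO UPDATE SET doc = EXCLUDED.doc",
        ("user:42", json.dumps({"name": "Ada", "plan": "pro"})),
    )

    # ...and an indexed containment query (@> can use the GIN index).
    cur.execute("SELECT key FROM docs WHERE doc @> %s", ('{"plan": "pro"}',))
    print(cur.fetchall())
    conn.commit()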

We used DynamoDB pretty much exclusively at Tinder, because it was the founders' choice early on. Horrible, horrible choice, and after 4 years working on it I don't see why you would.

1. You have a limited number of globally supported indexes, 5 iirc, which means your queries are very limited. If your use case ever expands beyond that, you're pretty screwed.

2. You will have race conditions. Strong consistency is 2x the cost, and not supported on global indexes.

3. Data is split into 10GB partitions, and all the read/write quotas are split evenly across the partitions. The 100 reads you're paying for are actually 10 reads per partition if you have 10 partitions. Hot partitions become a real problem.
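The quota split in point 3 is just division, but it's worth seeing the numbers (illustrative only):

    # Provisioned throughput is divided evenly across partitions, so a hot
    # key only ever sees its partition's slice of what you pay for.
    provisioned_rcu = 100  # read capacity provisioned on the table
    partitions = 10        # data split into ~10GB partitions

    per_partition_rcu = provisioned_rcu / partitions
    print(per_partition_rcu)  # 10.0 -- a hot partition throttles at 10,
                              # not at the 100 you're paying for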

Take your document data, stick it in a JSONB column, and you get the same performance way cheaper, plus queryable/indexable columns. The only time Dynamo wins, I think, is that it scales well globally, but you probably don't need that.


IMO if you've got a use case that requires querying in so many ways that you need several indexes, then DynamoDB is probably the wrong choice. It excels at stuff like user specific histories that are well partitioned, read back in one way, and ideally can be written asynchronously by a separate writer process.
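Roughly that access pattern, sketched with boto3; the table name and key schema here are invented for illustration:

    import boto3
    from boto3.dynamodb.conditions import Key  # assumes AWS creds configured

    table = boto3.resource("dynamodb").Table("user_history")
    # hypothetical schema: partition key `user_id`, sort key `ts`

    # What DynamoDB excels at: one well-partitioned key, read back one way
    # (here, the 50 newest events for a single user).
    resp = table.query(
        KeyConditionExpression=Key("user_id").eq("42"),
        ScanIndexForward=False,  # descending by sort key
        Limit=50,
    )
    print(resp["Items"])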

At the beginning there was only one query; it got expanded over time with new features. It wasn't well thought out, no.

If you need high-scale, globally distributed persistent data, uniform distribution of hash reads/writes, don't care about schema, and know your queries will remain simple, then yeah, it's a fine choice.

I just wouldn't consider it outside of enterprise level


> you have a limited number of globally supported indexes, 5 iirc

You can create 20 global (GSI) and 5 local (LSI) indexes per table [1]. I think the number must have been lower at some point in the past, because it's not the first time I've heard this complaint.

[1] https://docs.aws.amazon.com/amazondynamodb/latest/developerg...


No, I just misremembered and mixed up the global and local limits.

Uhh, what? I speak to LLMs in broken English with minimal details and they figure it out better than I would have if you told me the same garbage.


Yeah it is, I'm just not sure if it's worth doing. I haven't gotten much feedback/interest.


Yes, with a lot of reviewing what it's doing and asking questions, 100%.


Because it's SO much faster not to have to do all that. I think 10x is no joke, and if you're doing an MVP, it's just not worth the mental effort.


POC, sure (although 10x-ing a POC doesn't actually get you 10x velocity). MVP, though? No way. Today's frontier models are nowhere near smart enough to write a non-trivial product (i.e. something that others are meant to use), minimal or otherwise, without careful supervision. Anthropic weren't able to get agents to write even a usable C compiler (not a huge deal to begin with), even with a practically infeasible amount of preparatory work (write a full spec and a reference implementation, train the model on them as well as on relevant textbooks, write thousands of tests). The agents just make too many critical architectural mistakes that pretty much guarantee you won't be able to evolve the product for long, with or without their help. The software they write has an evolution horizon between zero days and about a year, after which the codebase is effectively bricked.


There are a million things in between a C compiler and a non-trivial product. They do make a ton of horrible architectural decisions, but I only need to review the output and ask questions to guide that, not review every diff.


A C compiler is a 10-50 KLOC job, which the agents bricked in 0 days despite a full spec and thousands of hand-written tests, tests that the software passed until it collapsed beyond saving. Yes, smaller products will survive longer, but how would you know about the time bombs that agents like hiding in their code without looking? When I review the diffs, I see things that, if I had let them in, would have killed the codebase in 6-18 months.

BTW, one tip is to look at the size of the codebase. When you see 100KLOC for a first draft of a C compiler, you know something has gone horribly wrong. I would suggest that you at least compare the number of lines the agent produced to what you think the project should take. If it's more than double, the code is in serious, serious trouble. If it's in the <1.5x range, there's a chance it could be saved.
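That ratio check is easy to script; here's a rough sketch where the path, extension list, and expected size are all assumptions, and the thresholds are the ones above:

    from pathlib import Path

    def loc(root, exts=(".c", ".h", ".py")):
        # Count non-blank lines across source files under `root`.
        return sum(
            1
            for p in Path(root).rglob("*")
            if p.is_file() and p.suffix in exts
            for line in p.read_text(errors="ignore").splitlines()
            if line.strip()
        )

    expected = 30_000               # your own estimate for the project
    actual = loc("./agent-output")  # hypothetical repo path
    ratio = actual / expected
    if ratio > 2.0:
        print(f"{ratio:.1f}x expected LOC: serious, serious trouble")
    elif ratio < 1.5:
        print(f"{ratio:.1f}x expected LOC: might be salvageable")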

Asking the agent questions is good - as an aid to a review, not as a substitute. The agents lie with a high enough frequency to be a serious problem.

The models don't yet write code anywhere near human quality, so they require much closer supervision than a human programmer.


A C compiler with an existing C compiler as oracle, existing C compilers in the training set, and a formal spec, is already the easiest possible non-trivial product an agent could build without human review.

You could have it build something that takes fewer lines of code, but you aren't going to find much with that level of specification and guardrails.


I don't either, but I really think I'm just burnt out. The simplest things piss me off.


Totally agree, AI interfaces will become the norm.

Even websites and desktop/mobile apps will become obsolete.


AI won't kill apps, it will just change who 'clicks' the buttons. Even the most powerful AI needs a source of truth and a structured environment to pull data from. A world without websites is a world where AI has nothing to read and nowhere to execute. We aren’t deleting the UI. We’re just building the backends that feed the agents.

