danielvaughn's comments | Hacker News

I also spent years on the same problem. I was creating a programming language for designers, which was supposed to abstract away the complexity of CSS. Long story short, I gave up.

Sorry to hear it. I didn't. And now our designers can tweak the code directly to adjust component styling. They can finally read and edit styles.

Actually, most importantly, AI can read and edit styles. CSS is hard enough on its own; adding overrides makes it so complex that it becomes unmanageable at scale.

What I want is a stateful file-writing layer that is aware of all clients (i.e. agents and humans) and their activity. It provides its own locking mechanisms and prevents agents from overwriting each other's work. That way you could have multiple agents operating on the same codebase, without having to futz with worktrees and all that.
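
Something like this minimal in-process sketch, scaled up to real cross-process coordination. All the names here (LockRegistry, claim) are hypothetical; as far as I know, no such tool exists yet:

    import threading
    from contextlib import contextmanager

    class LockRegistry:
        # Tracks which client (agent or human) currently holds each path.
        def __init__(self):
            self._locks = {}               # path -> client_id
            self._mutex = threading.Lock()

        @contextmanager
        def claim(self, path, client_id):
            with self._mutex:
                holder = self._locks.get(path)
                if holder is not None and holder != client_id:
                    raise RuntimeError(f"{path} is held by {holder}")
                self._locks[path] = client_id
            try:
                yield  # the caller writes the file here
            finally:
                with self._mutex:
                    self._locks.pop(path, None)

    registry = LockRegistry()
    with registry.claim("src/app.py", "agent-1"):
        pass  # agent-1 may write; agent-2 claiming now gets an error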

I think it's fine, so long as the intent is to refine the thing after you've validated the product idea and direction. There are a million things to optimize in web pages, and AI can't simply one-shot good decisions yet.

Show HN submissions aren't for launching your startup in stealth mode. You shouldn't need to "validate the product idea and direction." It's supposed to be fun, not business.

I was thinking about this problem a few days ago, imagining a semi-online game where players could create a collective city by plotting buildings. The "grid" would be some kind of pre-determined Voronoi pattern, in theory making occlusion culling easier.


If the main goal is to limit sight lines, there are a lot of potential tessellations that will give that effect, while also being predictable and infinitely repeatable. For example, imagine a regular city grid, except every road is a wavy S-curve.

I bet there is a special math term for "tilings that do/don't contain infinite lines", but I wasn't able to find it quickly. It's not the same as (a)periodic, since a periodic tiling could block the lines, and an aperiodic tiling could have one giant seam down the middle.
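
To make the S-curve idea concrete, here's a toy sketch; the amplitude and period are made up, and the point is just that a sine offset prevents any street from forming one infinite straight sight line:

    import math

    def street_centerline(x0, length=100, amplitude=4.0, period=40.0, step=5):
        # One north-south street, bent into an S-curve by a sine offset.
        return [(x0 + amplitude * math.sin(2 * math.pi * y / period), y)
                for y in range(0, length + 1, step)]

    # A "grid" of wavy streets, one every 20 units across the map.
    grid = [street_centerline(x0) for x0 in range(0, 100, 20)]
    print(grid[0][:3])  # first few points of the westernmost street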


I'm building a browser for designers: https://matry.design/


I also hate these sharp edges. After a long working session I have deep grooves in my wrists, and my skin is red with irritation. It's uncomfortable enough that it distracts me from work. It's the very antithesis of good design.


Disagree with the overall argument. Human effort is still a moat. I've been spending the past couple of months creating a codebase that is almost entirely AI-generated. Working at this pace, I've gotten way further than I would have otherwise, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with extremely clear language for describing that product, for AI to be used effectively. Know your terms, know how you want your features to be split up into modules, know what you want the interfaces of those modules to be.

Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.


I feel like you're pretty strongly agreeing that taste is important: "I'm finding that you have to have an extremely clear product vision..."

Clear product vision -- that you're building the right thing in the right way -- involves a lot of taste to get right. Good PMs have this. Good engineers have this. Visionary leaders have this....

The execution of using AI to generate the code and other artifacts is a matter of skill. But without the taste to know that you're building the right thing, with the right features, in a revolutionary way that will be delightful to use....

I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best. The founders don't see it yet. And like the article says, they're just setting themselves up for mediocrity. I think any really good PM would be able to improve all these apps I looked at almost immediately.


The way I understood it, the original article is saying the _only_ remaining differentiator is taste and the comment you replied to is saying "wrong, there are also other things, such as effort".

I don't necessarily interpret the comment you replied to as saying that "taste is not important", which seems like what you are replying to, just that it's not the only remaining thing.

I agree that taste gets you far. And I agree with all the examples of good taste that you brought up.

But even with impeccable taste, you still need to learn, try things, have ideas, change your mind, etc. Putting all of that in the bucket of "taste" is stretching it.

However, having good taste when putting in the effort gets you further than effort alone. In fact, effort alone gets you nowhere, and taste alone gets you nowhere. Once you marry the two, you get somewhere.


Aren't you just making their point stronger? Effort is what is being replaced here: with some taste and a pile of AI (formerly effort), you can go to the moon.


But you still need effort; it's not only taste. "Only" means you could do it with no effort.


In other words, it requires a tremendous amount of effort to fully communicate your tastes to the AI. Not everybody wants to expend the time or mental effort doing this! (Once we have more direct brain/computer interfaces, this effort will go down, but I expect it will not be eliminated fully)


This is the second time in two days I've seen a subthread here with folks seemingly debating whether or not defining and communicating requirements counts as work if the target of those requirements is an LLM system.

I'm confused as to why this is even a question. We used to call this "systems analysis" and it was like... a whole-ass career. LLMs seem to be remarkably capable of using the output, but they're not even close to the first software systems sold as being able to take requirements and turn them into working code (for various definitions of "requirements" and "working").

I'm also skeptical that direct brain interfaces would make this any less work; I don't think "typing" or "English" are the major barriers here, any more than "drafting" is the major barrier to folks designing their own cars and houses... Any fool thinks they know what they need!


Thinking might even be more difficult: Unfiltered thoughts, intrusive thoughts, people with no inner voice to encode as text...


At some point, just an idea will be enough for your Neuralink to spawn an agent to create 1000 different versions of your idea, along with things that mimic your tendencies. There will be no effort, only choice.


As both a software engineer and a creative, I absolutely do not want 1,000 versions of what I am trying to make generated for me. I don't care if it's free or even cheap. I want to make things.

I know this is a concept deeply alien to a lot of HN's userbase but I did not get into programming or making art to have finished products; that's a necessary function that is lovely when it's reached, but ultimately, I derive my enjoyment from The Process. The process of finding a problem a user has, and solving it.

And yes I'm sure Claude could do it faster than me (and only at the cost of a few acres of rainforest!) but again, you're missing the point. I enjoy the work. That is not a downside to me.


Could I even remember 1000 versions of a thing and still distinctly know which one is which?


Deciding between 1000 different versions is a lot of effort IMO. With manual coding, you're mostly deciding one decision point at a time, which is easier when you think about it. It just requires foresight, which comes from experience.


That deciding between 1000 things is a lot of effort is so clear that I must wonder if the one you’re responding to was being ironic.


> Effort is what is being replaced here

Not really. The effort required to produce the same result has declined, but it has been on the decline for many decades already. That is nothing new. Of course, in the real world, nobody wants the same result over and over, so expectations will always expand to consume all of your available effort.

If there is some future where your effort has been replaced, it won't be AI that we're talking about.


Effort is still (and probably will always be) the hardest thing to replace.

Any time someone says AI can do this, and do that, and blah blah, I say: ok, take the AI and go do that.. the barrier to entry is so low you should be able to do whatever you want. And they say, oh, no, I don't want to do that (or can't, or whatever). But it should be able to be done.. And I just nod, and sip my drink, and ...

.. and I'd like to point out these are seasoned professionals that I've seen put effort into other things in their careers, people who have the capacity to literally do whatever it is they want to do, especially now.. and they choose not to do so, at least not without someone guaranteeing them a paycheck or telling them they have to do it to survive.


“ I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best.”

Are you doing this altruistically for friends - or as a consultant?


Both a) to help a friend out and b) to help non-technical founders I've met at some Meetups/AI events to launch their product. My short-term goal is to put together a checklist/cheatsheet for all the technical things someone needs to do to launch a business, because it's not just having a webapp running on Vercel with Supabase. And if they do have an app, whether it's a complete mess or not.

I think the solo-founder hype is overplayed unless the person has the right skills, has even worked at a tech company, and knows what they're getting into. Alerting and monitoring, for example, is one of like 30 things they should be aware of.


> Disagree with the overall argument.

It's leaning in a good direction, but the author clearly lacks the language and understanding to articulate the actual problem, or a solution. They simply don't know what they don't know.

> Human effort is still a moat.

Also slightly off the mark. If I sat you down with all the equipment and supplies to make a pair of pants, the majority of you (by a massive margin) would produce a terrible pair of pants.

That's not due to lack of effort, rather lack of skill.

> judgement is as important as ever,

Not just important: critical. And it is a product of skill and experience.

Usability (a word often unused), cost, and utility are all the things that people want in a product. Reliability is a requirement: to quote The Social Network, "we don't crash". And if you want to keep pace, maintainability.

> issue devs would run into before AI - the codebase becomes an incoherent mess

The big ball of mud (https://www.laputan.org/mud/) is 27 years old and still applies. But all code bases have a tendency to acquire cruft (from edge cases) that doesn't have good inline explanations, that lacks durable artifacts. Find me an old code base and I bet you that we can find a comment referencing a bug number in a system that no longer exists.

We might as an industry need to be honest that we need to be better librarians and archivists as well.

That having been said, the article should get credit, it is at least trying to start to have the conversations that we should be having and are not.


It doesn’t really matter how good your taste is if you are drowning in the ocean of crap.

Customers can’t find you


This is an underrated comment. You could have the best product out there, but AI has not only lowered the effort for competitors, it has also flooded the traditional ways to get your product known, from outbound sales to content marketing. Sometimes it makes you question whether there are any customers anymore.


You make a really salient point about having a clear vision and using clear language. Patrick Zgambo says that working with AI is spellcasting; you just need to know the magic words. The more I work with AI tools, the more I agree.

Now, figuring out those words? That's the hard part.


> Now, figuring out those words? That's the hard part.

To be clear, this is the hard part for comp sci majors who can't parse other disciplines. Language isn't a black box for everyone.


Jensen Huang said he commands thousands of AGIs but still feels pretty useful.

Founders and CEOs are still needed to set direction, bring unique vision to life, and build relationships for long-term partnerships -- as long as humans still control the economy, that is.


> Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.

We have a term for this and it is called "Comprehension Debt" [0] [1].

[0] https://arxiv.org/abs/2512.08942

[1] https://medium.com/@addyosmani/comprehension-debt-the-hidden...


I'm not sure I agree the term applies. Comprehension debt, as I understand it, is just the dependency trap mentioned in that arxiv paper you linked. It means that the AI might have written something coherent or not, but you as a human evaluator have little means to judge it, because you've relied on it too much and the scope of the code has exceeded the feasibility of reading it manually.

When I talk about an incoherent mess, I'm talking about something different. I mean that as the codebase grows and matures, subtle details and assumptions naturally shift. But the AI isn't always cleaning up the code that expressed those prior assumptions. These issues compound to the point that the AI itself gets very confused. This is especially dangerous for teams of developers touching the same codebase.

I can't share too much detail here, but some personal experience I ran into recently: we had feature ABC in our platform. Eventually another developer came in, disagreed with the implementation, and combined some aspects of it into a new feature XYZ. Both were AI generated. What _should_ have happened is that feature ABC was deleted from the code or refactored into XYZ. But it wasn't, so now the codebase has two nearly identical modules ABC and XYZ. If you ask Claude to edit the feature, you've got a 50/50 shot on which one it chooses to target, even though feature ABC is now dead, unreachable code.

You might say that resolving the above issue is easy, but these inconsistencies become quite numerous and unsustainable in a codebase if you lean on AI too much, or aren't careful. This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.


> This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.

I'm on my 6th or 7th draft of a project. I've been picking away at this thing since the end of January; I keep restarting because the core abstractions get clearer and clearer as I go. AI has been great in this discovery process because it speeds up iteration so much. I know it's starting to drift into a mess when I no longer have a clear grasp of the work it's doing. To me, this indicates that some mental model I had and communicated was not sufficiently precise.


Yep, for sure. Restarting is the right choice IMO, it's way easier than trying to untangle from a previous iteration.


> I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.

By the time I'm done learning about the structure of the code that AI wrote, and reviewing it for correctness and completeness, it seems to be as much effort as if I had just written it myself. And I fear that will continue to be the reality until AIs can be trusted.


Well, that is not how anyone is doing agentic coding, though. That sounds like just a worse version of traditional coding. Most people are building test suites to verify correctness and not caring about the code.


Test suites don't verify correctness. They just ensure that you haven't broken something so badly that the specific instances the tests assert turn into failures. You can have a factorial function where the test cases will most likely only cover a few numbers. That doesn't guarantee correctness, since someone who knows about the test cases can just put in a switch and return the correct response for those specific cases.

The compromise is worth it in traditional coding, because someone will care about the implementation. The test cases are more like the canary in the coal mine: a failure warrants investigation, but an all-green suite is not a guarantee of success.
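
A contrived sketch of the point: a "factorial" that only knows the tested inputs keeps the whole suite green while being wrong almost everywhere:

    def factorial(n):
        # Overfits to the known test inputs: a lookup on the tested cases.
        known = {0: 1, 1: 1, 5: 120}
        return known.get(n, 0)  # wrong for every untested input

    # All assertions pass, yet the function is broken for any other n.
    assert factorial(0) == 1
    assert factorial(1) == 1
    assert factorial(5) == 120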


Regardless, that is why most people are moving faster than writing by hand.


Moving faster towards what? The biggest diff? The most tickets closed? Or a working software?


Working software/new features.


I think you're missing the point. Effort is a moat now because centaurs (human+AI) still beat AIs, but that gap gets smaller every year (and will ostensibly be closed).

The goal is to replicate human labor, and they're closing that gap. Once they do (it may take decades, but it probably will happen), only that "special something" will remain. Taste, vision... We shall all become Rick Rubins.

Until 2045, when they ship RubinGPT


> but that gap gets smaller every year (and will ostensibly be closed)

As long as you build software for humans (and all software we build is for humans, ultimately), you'll need humans at the helm to steer the ship towards a human-friendly solution.


The thing is, do humans _need_ most software? The fewer surfaces that need to interact with humans, the less you need humans in the loop to design those surfaces.

In a hypothetical world where maybe some AI agents or assistants do the vast majority of random tasks for you, does it matter how pleasing the DoorDash website looks to you? If anything, it should look "good" to an AI agent so that it's easier to navigate. And maybe "looking good" just amounts to exposing some public API to do various things.

UIs are wrappers around APIs. Agents only need to use APIs.


> And maybe "looking good" just amounts to exposing some public API to do various things.

Maybe, but you still need humans to make that call. The software is still built for humans no matter how much indirection you add.

There is a conceivable day where that is no longer true, but when you have reached that point it is no longer AI.


> do humans _need_ most software?

Yes, if it's not redundant software. The ultimate utility is to a human. Sure, at some point humans stopped writing assembly language and employed a compiler instead, so the abstraction level and interfaces change, but it's all still there to serve humans.

To use your example, do you think humans will want to interact with AI agents using a chat interface only? For most tasks humans use computers for today, that would be very unwieldy. So the UI will migrate from the website to the AI agent interface. It all transforms, becoming more powerful (hopefully!), but won't go away. And just as the advent of compilers led to an increase in the number of programmers in the world, so will AI agents. This is connected with the Jevons paradox as well.


Yeah. The UI will still be there, but it'll be a guy.

A little guy who lives in your phone and who's really good at APIs. (And, by that point, hopefully good at keeping track of things, too!)


I imagine that the gap with current work can largely be closed, but are we really confident that this will hold with the new work that pops up? Increasingly I think we’re lacking imagination as to what work can be in a post AI world. I.e. could an abacus wielder imagine all the post computer jobs?


do you need taste if you can massively parallel A/B test your way to something that is tasteful? say you take your datacenter of geniuses and have a rubin-loop supervising testing different directions. shouldn't that be close enough?


"taste" here is an intractable solution. Just take a look at how architecture has varied throughout the history of mankind, building materials, assembly, shape, flow, all of it boils down to taste. Some of it can be reduced to 'efficiency' -- like the 3 point system for designing kitchens, but even that is a matter of taste.

Find three professional chefs and they will give you three distinct visions for how a kitchen should be organized.

The same goes for any professional field, including software engineering.


Can infinite monkeys produce Shakespeare?


It was the best of times. It was the blurst of times.


That approach leads you to products like Instagram.


Isn't this a temporary situation, though?

Today: Ask AI to "do the thing", then manually review because you don't trust the AI

Tomorrow: Ask AI to "do the thing"

I'm just getting started on my AI journey. It didn't take long before I upgraded from the $17 a month Claude plan to the $100 a month plan, and I can see myself picking the $200 a month plan soon. This is for hobby projects.

At the moment I'm reviewing most of the code for what I'm working on, and I have tests and review those too. But, seeing how good it is (sometimes), I can imagine a future where the AI itself has both the tech chops and the taste, and I can just say "Make me an app to edit photos" and it will spit out a user-friendly clone of Photoshop with good UX.

We already kind of see this with music - it's able to spit out "Bangers". How long until it can spit out hit rom-coms, crime shows, recipes, apps? I don't think the answer is "never". I think the answer is more likely N years, where N is probably a single digit.


No, I don't think it is temporary. As AI becomes more powerful, we'll simply ask it to do more difficult things. There's a level of complexity where "do the thing" is insufficient. We'll never be at a place where AI can infer vast amounts of nuance from simple human requests, which means that humans will always need to be able to describe precisely what they want. This has always been the core skill for software developers, and I just don't see that changing.


Do you believe a junior developer now will never surpass you?

Why couldn’t AI do the same?


It's not a matter of whether it surpasses me. In some respects it already has - I watch Claude Code spitting out long terminal commands that I've never even seen in my 15 year career.

The question is whether AI will ever become good enough to magically infer information where none is provided.

For instance, I've had this startup idea for an itemized physical storage company. We'll never reach a point where I can simply say "Hey AI, create all the software necessary for an itemized physical storage company". It's not because AI won't continue to improve, it's because there's literally not enough detail in that statement to understand what I mean. It's too vague. I'm sure the AI of tomorrow could do a pretty good job in guessing what I mean by it, but the chance of it capturing my vision is literally 0%.


It might have a better vision than you and pursue that vision instead. Why should the AI wait for your impetus when countless founders and CEOs didn’t?


AI has no intrinsic way to align its efforts to solve human problems. In order to solve that problem, you'd need an enormous amount of nearly real-time data feeding into the model. Then the model would need to routinely look for patterns and identify ways to improve human life in some way. It would make today's models look tiny by comparison.

What we're building today isn't even remotely close to that.


> We already kind of see this with music - it's able to spit out "Bangers"

“Bangers” being roughly equivalent to garbage mass marketed radio pop? Or “We are Charlie Kirk” lol


> ... for AI to be used effectively.

I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well, and have some irrational belief that they can get AI to brute force their way to a solution.

For me I don't even use the more powerful models (just Sonnet 4.6) and have yet to have a project not come out fairly successful in a short period of time. This includes graded live coding examples for interviews, so there is at least some objective measurement that these are functional.

Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.


"AI" tools I've got at work (and am mandated to use, complete with usage tracking) aren't a wide-open field of options like what someone experimenting on their own time might have, so I'm stuck with whatever they give me. The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers, are already in-flight with near-term feature completion expectations that leave little time for going back and filling in the stuff LLMs need to operate well (extensive test suites, say), and must not wreck the various databases they need to interact with, most of which exist only as a production instance.

I'm sure I could hack together some simple SaaS products with goals and features I'm defining myself in a weekend with these tools all on my own (no communication/coordination overhead, too!), though. I mean for an awful lot of potential products I could do that with just Rails and some gems and no LLM any time I liked over the last 15+ years or whatever, but now I could do it in TypeScript or Rust or Go etc. with LLMs, for whatever that's worth. At work, with totally different constraints, the results are far less dramatic and I can't even feasibly attempt to apply some of the (reputedly) most-productive patterns of working with these things.

Meanwhile, LLMs are making all the code-adjacent stuff like slide decks, diagrams, and ticket trackers incredibly spammy.

[EDIT] Actually, I think the question "why didn't Rails' extreme productivity boost in greenfield tiny-team or solo projects translate into vastly-more-productive development across all sectors where it might have been relevant, and how will LLMs do significantly better than that?" is one I'd like to see, say, a panel of learned LLM boosters address. Not in a shitty troll sort of way, I mean their exploration of why it might play out differently would actually be interesting to me.


> The projects are brown-field, integrate with obscure industry-specific systems, are heavy with access-control blockers

These are cases where I've seen agentic solutions perform the best. My most successful and high impact projects have been at work, getting multiple "obscure industry-specific systems" talking to each other in ways that unblocks an incredible amount of project work.


> I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well

I've been through a handful of "anyone can do this" epiphanies since the 90s and have come to realize the full statement should be "anyone can do this if they care about the problem space".


If every project you have tackled has come out successful, then you are managing to never tackle a problem that is secretly literally impossible, which is a property of whatever prefilter you are applying to potential problems. Given that your prefilter has no false positives, the main bit of missing information is how many false negatives it has.


> Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.

This feels a bit like a strawman. How do you assess it to be bad software without being an engineer yourself? What constitutes successful for you?

If anything, AI tools have revealed that a lot of people have hubris about building software, with non-engineers believing they're creating successful work without realizing it's a facade of a solution that's a ticking time bomb.


> without being an engineer yourself?

When did I say I'm not a software engineer? I have a software engineering background (I've written reasonably successful books on software), I've just done a lot of other stuff as well that people tend to find more valuable.

> What constitutes successful for you?

The problem I need to solve is solved? I'm not sure what other measure you could have. Honestly, people really misunderstand how to use agents. If your aim is to "build software" you're going to get in trouble; if your aim is to "solve problems" then you're more aligned with where these tools work most effectively.


> graded live coding examples for interviews

Yeah, for those you can just relax and trust the vibes. It's for complex software projects that you need those software engineering chops; otherwise you end up with an intractable mess.


If it's for a complex software project the first question you need to ask is "does this really need to be software at all?"

Honestly this is where most traditional engineers get stuck. They keep attacking the old problem with new tools and being frustrated. I agree that agents are not a great way to build "complex software projects" but I think the problem space that is best solved by a "complex software project" is rapidly shrinking.

I've had multiple vendors try to sell my team a product whose core functionality we can build ourselves in an afternoon. We don't need that functionality to scale to multiple users, serve a variety of needs, or be adaptable to new use cases: we're not planning to build a SaaS company with it, we just need a simple problem solved.

But these comments are a treasure trove of anecdotes proving exactly my point.


I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine. All I can say is: good luck.

The Python ecosystem provides too many nooks and crannies for malware to hide in.
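
For what it's worth, a quick sweep like the sketch below is about the best you can do, and it only finds copies that are installed right now, not versions that were installed and since removed. The package name and roots are just examples:

    import os

    SUSPECT = "requests"  # hypothetical package under investigation
    # Real machines have many more roots: conda envs, pipx, Docker
    # layers, pip's wheel cache, and so on.
    roots = [os.path.expanduser("~"), "/usr/lib", "/opt"]

    for root in roots:
        for dirpath, dirnames, _ in os.walk(root):
            for d in dirnames:
                # Installed copies leave a <name>-<version>.dist-info dir.
                if d.startswith(SUSPECT + "-") and d.endswith(".dist-info"):
                    print(os.path.join(dirpath, d))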


Glad this was one of the objects captured, it's absolutely stunning to see in person: https://www.metmuseum.org/art/collection/search/24671

I wish they had captured one of their Fabergé eggs; those are almost more impressive.


Incredible. Why isn't it in France?


Not sure, but there's also a Van Gogh in that 3D collection, you could ask the same question for that one.


Probably the same reason there are French imperial eagles in British museums.


The provenance according to the Met:

>Henry II, King of France (until d. 1559);

>Carl August, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar (by 1804–d. 1828);

>by descent to Wilhelm Ernst, Grand Duke of Saxe-Weimar-Eisenach, Residenzschloss, Weimar, later Schloss Heinrichau, Lower Silesia, Germany (now Henryków, Poland) (1901–d. 1923);

>his widow, Feodora, Grand Duchess of Saxe-Weimar-Eisenach, Schloss Heinrichau (1923–1929; sold in May, 1929, to Kahlert & Sohn);

>[E. Kahlert & Sohn, Berlin, 1929; sold on December 14, 1929, for $135,000, to Sir Joseph Duveen for Mackay];

>Clarence H. Mackay, New York (1929–d. 1939; his estate, 1939, inv. no. A-17; sold through Jacques Seligmann & Co. on May 15, 1939, to MMA).

Unfortunately, this does not answer "why did it leave France?"

However, the book "Merchants of Art, 1880-1960: Eighty Years of Professional Collecting" (1961) by the rather famous art dealer Germain Seligman offers this missing link:

>Parade armor of King Henri II, embossed, damascened and gilded. Later presented by King Louis XIII to Bernhard von Weimar.


Thanks


The museum helpfully has a "Provenance" tab that gives you the answer to this question. (the answer in this case is market capitalism)


In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.


That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords but in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords, e.g. in Go you can write 'func f() string'; no need to declare that it's public, or static, etc. So combining a less verbose language with 'codegolfing' the variables might be enough.


I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized into byte sequences before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.


Running it through https://platform.openai.com/tokenizer, "mySuperLongVariableName" takes 5 tokens and "a" takes 1. "mediumvarname" is 3, though, and "though" itself is 1.
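
You can reproduce this locally with OpenAI's tiktoken library; exact counts vary by encoding (cl100k_base shown here):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for name in ["a", "mediumvarname", "mySuperLongVariableName"]:
        print(name, "->", len(enc.encode(name)), "tokens")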


You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.


Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.


Maybe not a perfect example but it’s more lightweight than Java at least haha


If by lightweight you mean verbosity, then absolutely not.

In Go, every third line is a noisy `if err` check.


Well LLMs are made to be extremely verbose so it's a good match!


I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.

Claude seems more consistently _concise_ to me, both in web and cli versions. But who knows, after 12 months of stuff it could be me who is hallucinating...


To you maybe, but Go is running a large amount of internet infrastructure today.


How does that relate to Go being a verbose language?


It's not verbose to some of us. It is explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. Drastically more minimal than equivalent code in other languages.


Verbosity is an objective metric.

Code readability is another, correlated metric, but a more subjective one. To me, Go scores pretty low here: code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are desensitized to these blocks, so code reviews are more likely to miss them).

For function signatures, they made it terser - in my subjective opinion - at the expense of readability. There were two very mainstream schools of thought on type signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both camps, while not even having the benefits of the second syntax (e.g. an easier type syntax; subjectively, that : helps the eye "pattern match" these expressions).


Every time I hear complaints about error handling, I wonder if people have next to no try/catch blocks, or if they just do magic to hide that detail away in other languages. Because I still have to do error handling in other languages in roughly the same way. Am I missing something?


Exceptions travel up the stack on their own. Given that most error cases can't be handled immediately and locally (otherwise they would be handled already and not returned as errors), but rather higher up (e.g. a web server deciding to return an error code), exceptions will save you a lot of boilerplate: you only have the throw at the source and the catch at the handler.

Meanwhile, Go has some boilerplate at every single level.

Errors as values can be made ergonomic: there is the FP-heavy monadic solution with `do` notation, or just some macro like Rust's `?`. Go has none of these.
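
A small Python sketch of the contrast, since Python happens to support both styles (the function names are made up for illustration):

    # Go-style: every caller checks and forwards the error by hand.
    def read_config_go_style(path):
        try:
            with open(path) as f:
                return f.read(), None
        except OSError as e:
            return None, e

    def start_server_go_style(path):
        cfg, err = read_config_go_style(path)
        if err is not None:   # boilerplate repeated at every level
            return None, err
        return cfg.upper(), None

    # Exception style: one raise at the source, one catch at the handler.
    def read_config(path):
        with open(path) as f:
            return f.read()   # OSError propagates up on its own

    def handle_request(path):
        try:
            return read_config(path).upper()
        except OSError as e:  # the one level that can actually respond
            return "500: " + str(e)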


Lots of non-go code out there on the Internet if you ever decide you want to take a look.


You’re not missing anything. I’ve worked with many developers that are clueless about error handling; who treat it as a mostly optional side quest. It’s not surprising that folks sees the explicit error handling in Go as a grotesque interruption of the happy path.


That’s a pretty defensive take.

You don’t have to hate Go to agree that Rust’s `?` operator is much nicer when all you want to do is propagate the error.


I think I remember seeing research right here on HN that terse languages don't actually help all that much


I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.


