Hacker News | CharlieDigital's comments

Rings true because now teams end up building a lot of things that may or may not have alignment to customer/business needs.

The slow part has always been figuring out exactly what the customer/business actually needs, not the coding. Now teams are throwing money at tokens without properly solving the "who's buying this?" part, and end up just building excess.

All judgement seems to have gone out the window.


At my last job, within our org, the director had 3 staff engineers building the same AI tool in competition with one another.

At the last all hands other teams announced their own similar AI engineer productivity tools.

I low-key regret now sticking around long enough to get a layoff package.


This rings a bell.

Now that you can just throw tokens at it, it seems like actually thinking about what is useful and productive is no longer a practical skill (it still is; it's just that no one in leadership or product wants to practice that discipline any more).

I don't know what to say about it except that it legitimately feels like some folks have just shut off their inquisitiveness and willingness to investigate and think before acting.

Now it's act first, waste tokens and time, and only then learn that the action was obviously bad from the start because of some real-world human factor that we no longer stop to understand before applying a technical solution.


I suppose you meant 'not' not 'now' yeah?

yeah, my bad. I typo'd on purpose to prove I'm not AI :P :'(

The LLM obliges and writes a lot of useless tests. I have asked devs to delete several tests in the last day alone.

"I don't trust this giant statistical model to generate correct code, so to fix it, I'm going to have this giant statistical model generate more code to confirm that the other code it generated is correct."

I swear I'm living through mass hysteria.


A lot of times the act of specifying test criteria prevents developers from accidentally vibe coding themselves into a bad implementation. You can then read the tests and verify that it does what you want it to. You can read the code!

I’m not saying that it’s all hunky-dory, but you can use AI for straight-up test-driven development to catch edge cases and correct sloppy implementations before your giant chaos machine even codes them.
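A minimal sketch of that test-first flow, with invented names (`slugify` and its rules are hypothetical, not from any real codebase): the human writes the criteria as tests first, then asks the agent to implement against them.

```python
def slugify(title: str) -> str:
    # Implementation the agent would produce against the tests below:
    # lowercase, replace non-alphanumerics with spaces, hyphen-join words.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# Tests written by the human first -- the readable criteria the agent must meet.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
assert slugify("") == ""
```

The point is that the tests are the contract you actually review, rather than only eyeballing the generated implementation.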


Well, yeah, you don't just make it bang out a bunch of useless code without monitoring it.

You instruct it to write the code you want to be written. You still have to know how to develop, it just makes you faster.


You missed the point of the question: why write Bun in Rust when CC itself could be written in Rust, ostensibly for even better perf?

Are you replying to the wrong comment? I clearly quoted which part I was replying to. I didn't attempt to answer the question "why write Bun in Rust when CC itself can be written in Rust."

What I said is that "they know that LLMs are not the right tool for this" is not the answer, as CC is already vibecoded so it'd be very weird to believe you can't vibecode a port of CC.

The actual answer is, of course, that the whole discussion is making a mountain out of a molehill. Bun is not committed to a Rust rewrite, vibed or not.


You might be underestimating the effect that corporate policies and culture have on the product.

Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.

No human testing of software or QA; just unit + integration tests plus giving the AI control over the browser/tooling. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.

If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.

Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg

    "Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer.  Because AI in a good codebase actually does really, really well."
Once bad code starts to compound on itself, it's going to be really hard to break out of it.

I don't disagree with the notion, but what is up with the dev community championing influencers who hold no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?

I have followed a simple rule in my career: if you offer training/courses, I don’t listen to anything you say.

I consider this a hard rule, like ad-blocking. In fact it is exactly that: blocking ads, since each talk is an ad (or an ad in disguise).


I'm not the biggest fan of the influencer community, but I think that it mostly boils down to many learners preferring video content over written material. I've gotten used to reading documentation now, but I remember it being extremely intimidating when I was first learning. It was nice to have someone break stuff down into simple terms for me.

To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.


> but I think that it mostly boils down to many learners preferring video content over written material

99% of the time that's not learning, but productivity porn.


This. You can't learn by viewing, you learn by doing.

That bill is gonna come due at some point for "developers" leaning heavily on agents.


No, look into his actual work history (sorry, being a paid marketer isn't working as a dev). He was only a dev consultant for about two years before pivoting to full-time influencer. Trust me, I know more about these types than any normal human should.

Constraints are underrated.

The most elegant solutions typically arise not out of unbounded degrees of freedom, but from building with a specific constraint in mind.

I think that this goes with point 1: composing the one pager helps define those constraints.


    > The most creative act is this continual weaving of names that reveal the structure of the solution that maps clearly to the problem we are trying to solve.
From Confucius, The Analects, 13.3:

    If names are not rectified, then language will not be in accord with truth.  If language is not in accord with truth, then things cannot be accomplished.  If things cannot be accomplished, then ceremonies and music will not flourish.


The US no longer feels like a place where the rule of law applies.

For whatever you want to fault China with (human rights, personal freedoms, etc.), there is at least the facade of rule of law.

The US is masks-off now, without even a thin veneer that the rule of law applies any more.


    > The value-add that Microsoft brings to Github Copilot is near zero
You are not their target audience.

The value add is the GitHub integration. By far the best.

GH has cloud agents that can be kicked off from VS Code; deeply integrated with GH and very easy to set up. You can apply enterprise policies on model access, MCP white lists, model behavior, etc. from GitHub enterprise and layered down to org and repo (multiple layers of controls for enterprises and teams). It aggregates and collects metrics across the org.

It also has tight integration with Codespaces, which is pretty damn amazing. `gh codespace code` and you get an entire standalone full stack running our whole app at a unique URL, with GH credentials flowing into the Codespace so everything "just works": full preview environments for the application, conveniently integrated into GH, and a better alternative to git worktrees. This is a pretty killer runtime environment for agents because you can fully preview and work on multiple streams at once in totally isolated environments.

If you are a solo engineer, none of this is relevant and probably doesn't make sense (except Codespaces, which is pretty sweet in any case), but for orgs using the GH stack, it is a huge, huge value add because Microsoft is going to have a better understanding of enterprise controls.

If you want to understand the value add of Copilot, I think you need to spend a bit of time digging into the enterprise account featureset in GH, try Codespaces, try Copilot cloud agents. Then it clicks.


I have found this at a different scale in our company: agents keep writing the same private static utility methods over and over without checking whether they already exist in the code.

Sometimes, I'll catch it writing the same logic 2x in the same PR (recent example: conversion of MIME type to extension for images). At our scale, it is still possible to catch this and have the duplicates pulled out or replaced with existing ones.
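A small sketch of the pattern described above, with hypothetical names: two agent-written helpers that both map image MIME types to extensions, already drifting apart in coverage and fallback behavior.

```python
def image_extension_for(mime_type: str) -> str:
    # Helper generated in one module.
    mapping = {"image/png": ".png", "image/jpeg": ".jpg", "image/gif": ".gif"}
    return mapping.get(mime_type, ".bin")

def mime_to_ext(mime: str) -> str:
    # Near-duplicate generated later in another module; note it has
    # already drifted: different jpeg extension, no gif, empty fallback.
    mapping = {"image/png": ".png", "image/jpeg": ".jpeg"}
    return mapping.get(mime, "")
```

Neither is wrong in isolation; the cost shows up when a format rule changes and only one of them gets updated.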

I've been mulling whether microservices make more sense now as isolation boundaries for teams. If a team duplicates a capability internally within that boundary, is it a big deal? Not clear to me.


Let me preface this by saying I have been writing very exacting code to a high standard for most of my career. But with AI-generated code, I’m not sure all the same value props we’re used to with traditional hand-written code still apply.

For example, if AI generates a utility function twice, yes, that is not ideal, but it is also fairly minor in terms of tech debt. I think as long as all behaviors introduced by new code are comprehensively tested, it becomes less significant that there is some level of code duplication.

It also is something that can be periodically caught and cleaned up fairly easily by an agent tasked to look for it as part of review and/or regular sessions to reduce tech debt.

A lot of this is adapting to the new normal for me, and there is some level of discomfort here. But I think: if I were the director of an engineering org and I learned different teams under me had a number of duplicated utility functions (or even competing services in the same niche), would this bother me? Would it be a priority to fix? I’d prefer it weren’t so, but it probably would not rise to the level of needing specific prioritization unless it impacted velocity and/or stability.


My take is similar.

In the majority of cases, I think this is harmless. In C#, for example, we have agents repeatedly generating switch expressions mapping file extensions to MIME types.

This is harmless since there's no business logic.

But we also have some cases where phone number processing gets semi-duplicated. Here it's a bit more nebulous: it looked isolated, but still had some overlapping logic. What if we change vendors in the future and need a different format? We'd have to find all the places it occurs, and there's now no single entry point or specific pattern to search for.

Agents themselves may or may not find all the cases, since they rely on `grep` and don't have a semantic understanding of the code. What if we ask one to refactor and its `grep` misses some pattern?

Still uneasy, but yet to feel the pain on this one.


> For example if AI generates 2x of a utility function that does the same thing, yes that is not an ideal, but is also fairly minor in terms of tech debt. I think as long as all behaviors introduced by new code are comprehensively tested, it becomes less significant that there can be some level of code duplication.

We still run into the same issues this brings about in the first place, AI or no AI. When requirements change, will it update both functions? If it rewrote them because it didn't see they already existed, probably not. And there will likely be slight variations in function/component names, so it wouldn't be a clean grep to make the changes.

It may not impact velocity or stability in the exact moment, but in 6 months or a year it likely will: the classic trope of tech debt.

I have no solution for this; it's definitely a tricky balance and one we've been struggling with in human-written code since the dawn of the field.


    > Scaling the workers sometimes exacerbates the problem because you run into connection limits or polling hammering the DB
Design question here (I'm not familiar enough with this approach in Pg):

Would an alternative be to have a small pool of pollers that "distribute" the records to a separate pool of workers, instead of having the workers poll directly?
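A minimal sketch of that shape, assuming a single poller fanning work out to a small worker pool. An in-memory list stands in for the Postgres table; a real poller would claim rows on its one connection with `SELECT ... FOR UPDATE SKIP LOCKED`. All names here are invented for illustration.

```python
import queue
import threading

pending = list(range(10))      # stand-in for unclaimed rows in the DB
pending_lock = threading.Lock()
work_queue = queue.Queue(maxsize=4)  # bounded: provides backpressure
processed = []
processed_lock = threading.Lock()
NUM_WORKERS = 3

def poller():
    # Single "DB connection": batch-claims rows, hands them to workers.
    while True:
        with pending_lock:
            batch = pending[:3]
            del pending[:3]
        if not batch:
            break
        for row in batch:
            work_queue.put(row)  # blocks if workers are saturated
    for _ in range(NUM_WORKERS):
        work_queue.put(None)     # one shutdown sentinel per worker

def worker():
    # Workers never touch the "DB"; they only consume from the queue.
    while (row := work_queue.get()) is not None:
        with processed_lock:
            processed.append(row)  # stand-in for real work + marking done

workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()
poller()
for w in workers:
    w.join()
```

The appeal of this shape is that DB connections scale with the poller count (here one) rather than the worker count, and the bounded queue gives you backpressure instead of poll-hammering.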

