Hacker News | ghosty141's comments

Exactly my (and my coworkers') experience. AI generally amplifies the skillset, both in the good and the bad.

One fantastic use case for me just recently was writing up a concept for an authentication daemon. With Codex this is like a conversation where I pick from the suggestions, cross-reference them with normal web search and decide on a final draft which I then discuss with colleagues.

This "conversational" planning with integrated web search (aka plan mode) is insanely useful. Also, reviewing already-written code with AI is purely beneficial in my opinion.

In my opinion the main caveat of AI is that you eventually have to be smarter than the tool. For example, if Codex suggests I should use tech stack X, then I must research and fully understand why this is actually good, and still compare it to other solutions. I think this is where the problem lies: some people skip this step, which leads to so many problems, and that's fatal. You MUST be smarter than the AI after your conversation and fully understand and be able to critique what it said.


The power of AI is it rewards due diligence.

The weakness of AI is that it is really easy to fall into lazy habits.

Something about having to talk to a machine like it's a human lulls me into treating it like a human. I want to treat it as a probability engine that collapses to an answer based on its input, but that input explicitly needs to be one that makes it collapse to something a reasonably knowledgeable person would respond with, which more or less means talking to it like it is that kind of person.

I feel like it activates the social part of my brain and then I stop working with it properly. I'm still building the habit, though, only recently started taking the LLMs seriously as a tool.


Not an expert on mobile development, but I doubt an Android app has the low-level access to the Wi-Fi stack needed to do this.

Yeah. The real thing that creates pressure is the people applying that pressure if progress is not made. If people act that way, the meeting is an effective way to do this on a weekly basis instead of letting it languish for months.

If nobody in the meeting actually cares that the feature isn't getting finished, then the meeting's value is rather small.


It's not the source of the pressure per se, but it's the transmission medium. There are other outlets for pressure, like regular demo days, etc.

Ultimately yes the true source of the pressure is coming from the "tribe".


> largely because I had no expectation that it would provide additional benefit..

An interesting thing with ibuprofen is that at the regular dose of 400mg it inhibits pain, but if you take 1600mg it doesn't inhibit much more pain than the 400mg dose; the anti-inflammatory effect, however, does increase significantly. A lot of people don't know that and take too much, thinking it scales linearly.


Some know that you can combine ibuprofen with paracetamol to get extra pain suppression.

And when you want to be gentle, you alternate between them.


What's the problem with SOLID? It's very very rare that I see a case where going against SOLID leads to better design.

SOLID tends to encourage premature abstraction, which is a root of evil that is more relevant today than optimization.

SOLID isn't bad, but like premature optimization, it can easily lead you in the wrong direction. You know how people make fun of enterprise code all the time? That's what you get when you take SOLID too far.

In practice, it tends to lead to a proliferation of interfaces, which is not only bad for performance but also results in code that is hard to follow. When you see a call through an interface, you don't know what code will be run unless you know how the object is initialized.
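A minimal sketch of that last point, with hypothetical names (`Notifier`, `alert`): at the call site, the interface hides which implementation actually runs.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstract interface: the call site below reveals nothing about behavior."""
    @abstractmethod
    def send(self, msg: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, msg: str) -> str:
        return f"email: {msg}"

class SmsNotifier(Notifier):
    def send(self, msg: str) -> str:
        return f"sms: {msg}"

def alert(notifier: Notifier, msg: str) -> str:
    # Reading this line alone, you cannot tell which send() will run;
    # you have to track down where `notifier` was constructed.
    return notifier.send(msg)

print(alert(EmailNotifier(), "disk full"))
print(alert(SmsNotifier(), "disk full"))
```

The `alert` body is identical for both, so the only way to know what a given call does is to find the wiring code.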


A lot of people make the mistake of thinking that if you just follow SOLID then you have good code, but plenty of teams follow it to the letter and still create complete messes.

The problem is that SOLID on its own does nothing for you. It's a set of (vague) rules, but not a full framework for how to design software. I would even argue that SOLID is actively harmful if used on its own.

Things like Clean Architecture and Domain-Driven Design are a lot closer to being true frameworks for software design, and a lot of their basic principles are actually really good (like the core of the application being made up of objects which perform calculations, validations and business rules with no side effects), but the complexity of those architectures is a problem in itself.

And, even aside from that, I think the industry in general reached a point where people decided that principled object-oriented design is just not worth it. Why spend all this effort worrying about the software remaining maintainable for decades, when we could instead just throw together something that works, then IPO, then rewrite the whole thing once we have money?


In a way, SOLID is premature optimization. You are optimizing abstractions before knowing how the code is used in practice. Lots of code will be written and never changed again, but a minority will see changes quite a bit. Concentrate there. Likewise, you don't need to optimize things that aren't in hot code (usually; only experience will tell you that all rules have exceptions, including the exceptions).

> Lots of code will be written and never changed again, but a minority will see changes quite a bit. Concentrate there

I think the most important principle above all is knowing when not to stick to them.

For example if I know a piece of code is just some "dead end" in the application that almost nothing depends on then there is little point optimizing it (in an architectural and performance sense). But if I'm writing a core part of an application that will have lots of ties to the rest, it totally does make sense keeping an eye on SOLID for example.

I think the real error is taking these at face value and not factoring in the rest of your problem domain. It's way too simple to think SOLID = good, else bad.


here's a nice critique of SOLID principles:

https://www.tedinski.com/2019/04/02/solid-critique.html


They start by claiming people don't understand "A module should have only one reason to change." Reading more of that article, it's clear the author doesn't understand much about software engineering and sounds more like a researcher who just graduated from putting together 2+2.

The great thing about the net is also its biggest problem. Anyone can write a blog, and if it looks nice and sounds polished, they can sway a large group. I roll my eyes so hard at folks who reject SOLID principles and design patterns.

I have seen advocates of SOLID and patterns make religious arguments way too often: "I don't like it." That being said, I think there is nothing bad in SOLID, as long as it's treated as a set of principles and not religious dogma. About patterns, I can't say as much positive. They are not bad per se, but I've seen them do a lot of harm. The Gang of Four book says in the preface, I think, something like "this list is neither exhaustive, nor complete, and often inadequate", yet every single person I know who was exposed to the book tries to hammer every problem into one pattern (in the sense of [1]). They also insist on using the pattern name everywhere, like "facade_blabla". IMHO the pattern may be Façade, but putting that through the names of all classes and methods is not good design.

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


> That being said, I think there is nothing bad in SOLID, as long as treated as principles and not religious dogmas

This should be the header of the website. I think the core of all these arguments is people thinking they ARE laws that must be followed no matter what. And in that case, yeah that won't work.


Something, something, wrong abstractions are worse than no abstractions.

SOLID approaches aren't free... beyond that, keeping code together by task/area is another approach. I'm not a fan of premature abstraction, and I definitely prefer that code relating to a feature live close together, as opposed to being organized by type of class or functional domain.

For that matter, I think it's perfectly fine for a web endpoint handler to make and return a simple database query directly without 8 layers of interfaces/classes in between.
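As a sketch of that idea (using Python's stdlib `sqlite3` and a plain handler function as stand-ins for whatever web framework and database you actually use): the handler queries and returns directly, with no repository/service/DTO layers in between.

```python
import sqlite3

def get_users(conn: sqlite3.Connection) -> list:
    """A hypothetical endpoint handler: query the database and return the
    result directly, no intermediate interfaces or mapper classes."""
    rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]

# Demo with an in-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("bob",)])
print(get_users(conn))
```

Whether this stays maintainable depends on how many handlers share the query; the point is only that the layers are not free.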

Beyond that, there are other approaches to software development that go beyond typical OOP practices. Something, something, everything looks like a nail.

The issues that I have with SOLID/CLEAN/ONION is that they tend to lead to inscrutable code bases that take an exponentially long amount of time for anyone to come close to learning and understanding... Let alone the decades of cruft and dead code paths that nobody bothered to clean up along the way.

The longest lived applications I've ever experienced tend to be either the simplest, easiest to replace or the most byzantine complex monstrosities... and I know which I'd rather work on and support. After three decades I tend to prioritize KISS/YAGNI over anything else... not that there aren't times where certain patterns are needed, so much as that there are more times where they aren't.

I've worked on one, singular, one application in three decades where the abstractions that tend to proliferate in SOLID/CLEAN/ONION actually made sense... it was a commercial application deployed to various govt agencies that had to support MS-SQL, Oracle and DB2 backends. Every, other, time I've seen an excess of database and interface abstractions have been instances that would have been better solved in other, less performance impacting ways. If you only have a single concrete implementation of an interface, you probably don't need that interface... You can inherit/override the class directly for testing.
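The subclass-for-testing point can be sketched like this (hypothetical `Clock`/`Greeter` names): instead of extracting an `IClock` interface with one production implementation, override the concrete class directly in the test.

```python
import datetime

class Clock:
    """Single concrete implementation -- no separate interface needed."""
    def now(self) -> datetime.datetime:
        return datetime.datetime.now()

class Greeter:
    def __init__(self, clock=None):
        self.clock = clock or Clock()

    def greet(self) -> str:
        return "good morning" if self.clock.now().hour < 12 else "good afternoon"

# In tests, subclass the concrete class rather than defining a one-off interface.
class FixedClock(Clock):
    def now(self) -> datetime.datetime:
        return datetime.datetime(2024, 1, 1, 9, 0)

print(Greeter(FixedClock()).greet())
```

If a second real implementation ever appears, extracting an interface at that point is a mechanical refactor.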

And don't get me started on keeping unit tests in a completely separate project... .Net actually makes it painful to put your tests with your implementation code. It's one of my few actual critiques about the framework itself, not just how it's used/abused.


This doesn't seem to be a critique of the principles so much as a critique of their phrasing.

Even his "critique" of Demeter is, essentially, that it focuses on an inconsequential aspect of the dysfunction, method chaining, which I consider just one symptom pointing to the larger principle, which (and we apparently both agree on this) is interface design.


It causes excessive abstraction, and more verbose code.

L and I are both pretty reasonable.

But S and D can easily be taken to excess.

And O seems to suggest OO-style polymorphism instead of ADTs.


This is similar to my view. All these "laws" should always be used as guidance, not as actual laws. Same with O: I think it's good advice to design software so that adding features which are orthogonal to other features doesn't require modifying much code.

That's how I view it. You should design your application such that extension involves little modification of existing code, as long as that isn't necessary from a behavioral or architectural standpoint.
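One common way to get that property is a registry: new features plug in without editing the existing dispatch code. A minimal sketch with hypothetical names (`exporter`, `export`):

```python
# Registry of export formats. Adding a format never touches export() below.
EXPORTERS = {}

def exporter(fmt):
    """Decorator that registers a function as the exporter for `fmt`."""
    def register(fn):
        EXPORTERS[fmt] = fn
        return fn
    return register

@exporter("csv")
def to_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

def export(rows, fmt):
    # Closed for modification: dispatch stays the same as formats are added.
    return EXPORTERS[fmt](rows)

# Later, an orthogonal feature is added without modifying any existing code:
@exporter("tsv")
def to_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

print(export([(1, 2), (3, 4)], "tsv"))
```

This is O applied pragmatically: one extension point where variation is actually expected, not interfaces everywhere.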


Of course you can do that & still make a mess. E.g. by deciding that all your behavior will be "configurable" by coding inside strings in a YAML file, and what YAML files you load at runtime determine which features you get. Sure, they might conflict, but that's the fault of whoever wrote that "configuration" YAML. (Replace YAML with XML for a previous era version of this bad idea).

It only applies to the object-oriented programming paradigm.

Negative.

The only part of SOLID that is perhaps OO-only is Liskov Substitution.

L is still a good idea, but without object-inheritance, there's less chance of shooting yourself in the foot.


I go by a philosophy that Liskov Substitution is reeeally about referential transparency. I don't care about parent/child classes, I care about interfaces and implementations, and structural subtyping. Fix that, and it's great.
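Python's `typing.Protocol` is one concrete way to state that view: conformance is structural (anything with the right methods substitutes), no parent/child relationship required. Names here (`Sink`, `log`) are illustrative.

```python
from typing import Protocol

class Sink(Protocol):
    """Structural interface: anything with write(str) -> int conforms,
    no inheritance from Sink required."""
    def write(self, data: str) -> int: ...

class Memory:
    def __init__(self):
        self.buf = []

    def write(self, data: str) -> int:
        self.buf.append(data)
        return len(data)

def log(sink: Sink, msg: str) -> int:
    # Any substitute honoring write()'s contract works here -- Liskov
    # Substitution stated in terms of interfaces, not class hierarchies.
    return sink.write(msg + "\n")

m = Memory()
log(m, "hello")
print(m.buf)
```

`sys.stdout` also satisfies `Sink` structurally, which is exactly the substitutability being argued for.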

That's understating the problem. It mandates OOP.

If you follow SOLID, you'll write OOP only, with ever-present inheritance chains, factories for everything, and no clear relation between parameters and the procedures that use them.


This is only superficially true. Here's a fair discussion that could serve as a counterpoint: https://medium.com/@ignatovich.dm/applying-solid-principles-...

> you can both look as native as the other, doesn't the actual UX matter more than how the implementation was made?

An Electron app that draws all its components to look mostly like the native controls will still not be native, nor will it get the integrations etc. that native apps usually get.

You could get close, but some things, like "ctrl+f" search for example, have native widgets that work and look different from what an Electron app will realistically have. Or, for example, you will never get the same Liquid Glass materials that macOS uses in an Electron app.

So yea, native in my books means using the platform-native (UI) APIs. On Ubuntu, for example, that's GTK; on Windows it's... idk at this point, WinUI?; and on KDE it would be Qt.


You can get all those things in a Rust application drawing with Cairo on macOS, but that isn't "native" according to you regardless, because it's using Cairo instead of AppKit/SwiftUI?

Again, I don't understand the obsession with caring so deeply about the implementation. As long as the end results are the same, why does it matter so much?


My point is that in practice you don't get the same results unless you use the native APIs the platform provides.

Take my Liquid Glass example: you simply won't be able to match the look in an Electron app in practice.

Ofc if the result is the same it doesn't matter how, but in reality it's almost impossible to imitate the look and capabilities, since keeping feature parity would require a Herculean effort.


Right, but you could call native APIs from JavaScript or Java, say; then in your world that's a "native" application because it uses the APIs the platform provides, regardless of how it actually was implemented? Meanwhile, an application could be implemented with Objective-C and/or Swift but not use the Cocoa/AppKit/SwiftUI APIs; then that's not a native application because it doesn't look like one? Like games written with Vulkan/OpenGL aren't "as native" as one using Metal, I'd presume?


> you could call native APIs from JavaScript or Java say, then in your world that's a "native" application because it uses the APIs the platform provides

Yes, this is what we want.

> an application could be implemented with Objective-C and/or Swift but not use Cacoa/AppKit/SwiftUI APIs, then that's not an native application

Correct. The toolkit matters, not the language. Native toolkits have very rich and subtle behavior that cannot be properly emulated. They also have a lot of features (someone mentioned input methods and accessibility) that wrappers or wannabe toolkits often lack. To get somewhat back on topic I notice and appreciate that Xilem mentions accessibility.

> games written with Vulkan/OpenGL aren't "as native"...

Games are usually fullscreen and look nothing like desktop apps anyway so it doesn't matter what API they use.


You can technically get those platform native things by integrating with the native APIs. There's basically a full spectrum from "native" to "custom" rather than it being either-or.


> They're going to try to gradually push laws to make it so that you'll need a government issued signature to do anything. That's when they'll have total power over you because they can simply refuse to issue.

The more this signature is necessary, the harder it becomes to deny issuing it to somebody.

I don't see how this changes much compared to nowadays. You can already require an ID for all kinds of things, and the government already has total control over those. So what changes? China has managed to ruin the lives of the people illegally born under the one-child policy for decades already, all without systems like eIDAS.

You can't protect yourself from authoritarian regimes with tech or good policy, since those will just get ignored. Look at Trump's war with Iran: where did Congress agree to it?

I'm not a fan of these systems either, I also think software should be open and no vendor lock-in should exist. But I don't think this will change much to be honest.


It will matter a lot in the long run. I will outline one concrete way it will matter, which I think is the most critical, but there are other ways it will do damage besides this:

Right now, physical ID is only required for government services, for the most part. But digital signatures can be extended later to gate all services and purchases, both online and physical, including non-government ones. For example, you can't host a website without a gov approved signature for each website.

Under a system like that, you would rarely find out when the gov refuses to issue a signature, or when any kind of injustice happens, really. Websites where people can talk about bad things happening to them will simply be denied a signature to legally operate, so they're given the ultimatum to "voluntarily" censor posts, or be shut down. It becomes impossible to have this very conversation on a public platform with any kind of meaningful reach. And they already have this kind of system in China, since you brought it up. In fact, they have domestic surveillance systems that make the Snowden disclosures look cute.


In his case, I'm pretty sure 20 y/o data is pretty useless nowadays in terms of fingerprinting and usage heuristics.


Oh this is really cool, I did it and I landed on the font I've been using for years now: "Fira Code".


The quote from Bjarne is a bit out of context. It was made after an hour-long talk about the pitfalls and problems of contracts in C++26: https://youtu.be/tzXu5KZGMJk

This should also clarify the complexity issue.

