
I think USB-C is certainly a step in the right direction.

The remaining problem is the lack of CLEAR, easy-to-understand markings on the cable that indicate whether it's intended as a power-delivery cable, a 10 Gbps data cable, a Thunderbolt-capable cable, or any of the many combinations in between. This should not be limited to physical markings on the cable itself but should also take the form of electronic self-identification, so that you could plug in a cable and have the OS tell you exactly what cable you plugged in. Why not? We already have power-delivery protocols; adding cable self-ID would be a trivial addition.

I suspect the vendors of these, and perhaps the designers of the spec too, have deliberately made this confusion an integral part of the standard. It creates churn and leads consumers to buy more cables than they need.


Apparently, you can use it in RPN mode!

RIP. He was an amazing human. I worked for a time at JCVI when it was in Rockville, shortly after he had left Celera Genomics. He led a team that did something considered intractably difficult: sequencing whole genomes. Then he did it again with global ocean sampling, synthetic genomics, and other things. That is not to say he did it single-handedly; Venter was a hybrid of scientific and organizational talent who was able to make this stuff happen by coordinating stuff that's super hard to coordinate.

It's "complex enough" to be notable for it's complexity and thus a good example for considering the character and economics of complex machinery.

It's kind of pointless to fret about whether it's "the most complex" like there's an objective 1-dimensional ranking that even has utility.


When practitioners say "PCR" they don't (usually) just mean amplifying DNA for use as part of the input to another process.

What they usually mean is PCR with chemistry that selectively amplifies some specific sequence of DNA. This chemistry has dyes in it which fluoresce when illuminated at some specific wavelength. The point of all this is to answer a "yes/no" question for the presence of some DNA sequence in the sample. This is done at scale with multiple chemistries looking for different DNA sequences. This is also known as "real-time PCR".

It's sort of like the biological-assay version of the kid's game "20-questions". If you do it right, it's an enormously powerful detection technique for medical purposes. It gives you your "answer" in a reasonable amount of time on your desk while you wait.

That said, there are biological assays that don't need the thermocycling anymore. These newer assays use more sophisticated chemistry that amplifies at a constant temperature (isothermal amplification). In the simplest terms, they're just heaters combined with a fluorometer. It's potentially MUCH faster than real-time PCR.

In any case, the only really serious money-making business for these instruments is in-vitro diagnostics. That requires FDA approval, and that means a ~$10K minimum for the instrument, tens of dollars for the consumables containing the assays, and definitely a pricey service agreement for the instrument (e.g. Bio-Rad instruments).

A distant second money-making business would be research-use-only instruments, but these are not going to be inexpensive little devices.


> When practitioners say "PCR" they don't (usually) just mean amplifying DNA for use as part of the input to another process.

I definitely do.

> What they usually mean is PCR with chemistry that selectively amplifies some specific sequence of DNA. This chemistry has dyes in it which fluoresce when illuminated at some specific wavelength.

This is called qPCR (and qRT-PCR, and RT-PCR, and 'TaqMan assay'... but it's not called PCR, because it's not just PCR). It has uses outside of diagnostics (which is what it seems you're most familiar with).

Either way, the article is not about qPCR.


In normal biology labs real-time PCR is used much less than normal PCR, I'd guess 5% of PCRs across labs are run in real-time machines.

This is well said and a good illustration of why optimality is a fragile concept. High-impact improvements often involve reframing the goal.

That font, and how it's integrated with the math, looks amazing. KaTeX for the math?

Seems like KaTeX, judging from the scripts getting loaded. I love the design too, kinda medieval-chic.

Looks more early modern to me. :)

> I'm not saying that readability can't be a consideration when making documentation. I am saying that if you discard accuracy in the process, you've fucked up quite badly.

You're right to elevate accuracy to a high level of importance, but that is NOT ENOUGH if the thing has poor readability. The audience has to be able to understand the document if the document is to be usable.

There's only a certain amount of effort anyone can deliver in producing a document. But if the author can't deliver readability, they need to follow up the document with a lot of support and/or get some help to make it usable.


I've struggled through some absolutely awful documentation over the years. I'll put up with incredibly broken English and other problems as long as the accuracy is there. Just last week I encountered a pinout diagram that used emojis to indicate which pins related to which data channel. Not a choice I would have made, and I found it made the diagram harder to read. But it was accurate - I wired it up per the diagram and everything worked as intended.

Documentation lacking accuracy is useless. It can be the most readable thing ever produced, but if it describes a different thing than what was intended to be documented, it's trash. Documentation that is hard to read but is accurate still has value.

Regarding "follow up the document with a lot of support" - did you catch the part of the anecdote where the author is having to deal with support requests because of the inaccuracies?


  > The documentation was complete, correct, and relatively terse. Less than a page.
No, that's YOUR IMPRESSION of your own writing.

There are many reasons why others might not find what you wrote sufficient to understand it. Your boss ran it through AI for a reason, and that reason was most likely that the document was not understandable or perhaps confusing.

Did the document have usage examples? Did it explain context and background? Did it use "precise" jargon that not everyone knows? Did you follow up the documentation writing with a meeting with stakeholders/users to see if they had questions?

It sounds like you just "threw it over the wall" like you were done with it and left your boss to figure out how to get others to use it. If you find that you have a "near constant" struggle to communicate, there is a strong possibility that the problem is yours and not everyone else's.


How can you be so critical of a stranger's work given that you haven't even seen it?

"that reason was most likely because" -> Bear in mind you do not actually know the given situation.


None of us knows the exact situation, but the fact that the person said his documentation was "complete, correct, and relatively terse" is a red flag. It seems to me like smug over-confidence.

If the document really was so clear and error-free, then why would the boss try to "fix it"?


Assuming you're correct that the commenter is unaware of their communication deficiencies, then just as much of your confident criticism should be directed at a manager who would silently change a spec sheet for some reason rather than coach the employee on why that was needed.

If it was truly a manager, where the main role of their job is to manage the performance of their employees, then they failed here.


The boss also tried to fix it in the lowest effort manner possible, without even checking the results.

People try to fix things that are perfectly fine all the time.

People often apply nonsensical standards to things.


Who knows why? That's my point: it's not us.

You are making a bunch of claims about a situation you know nothing about.

GP made a claim about the precision of his language that is incompatible with natural language.

This is already known: GP is wildly overconfident in their communication skills.


The OP just told us all what it was about. You don't know any more or less than I do.

I simply am skeptical of their smug take on it.


> There are many reasons why others might not find what you wrote sufficient to understand it. Your boss ran it through AI for a reason, and that reason was most likely that the document was not understandable or perhaps confusing.

It could also be because their manager is less technical. It's not unusual in my life for a PM to try to "rephrase" or restate things I've written in order to make them "easier to understand" in a way that in fact falsifies them or makes them more difficult to understand for the people who will actually have to work on/with it.


PM: "X party needs to know about Y thing"

"Tell them [very specific answer targeted at X party]"

PM: "They are still asking about Y, see their response with the follow up question"

Then I find that, in the original send, the PM had transformed [specific thing] into [something else]. X party followed up with a question that was already answered by [specific thing]. Yes, PM, you might have been confused, but you weren't the target.

This cycle happens very often.


Spherical harmonics are basically a Fourier series. They're a complete orthonormal set of basis functions for functions on the unit sphere, whereas the Fourier series from calc 101 is a complete orthonormal set of basis functions on the unit interval (e.g. [0,1]).

In other words, you can express any reasonable function on the unit sphere as a series of spherical harmonic terms. That makes them ideal for working with differential equations (e.g. Schrödinger's equation for the hydrogen atom, or emission from an arbitrary light source).
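
Concretely, the expansion and its coefficients look like this (standard notation, nothing exotic):

  f(\theta,\phi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} c_{lm} \, Y_l^m(\theta,\phi),
  \qquad c_{lm} = \int_{S^2} f(\theta,\phi) \, \overline{Y_l^m(\theta,\phi)} \, d\Omega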


And the number of terms you need to get a good approximation is related to the frequency content. Low-frequency signals like lighting work well.


This is all so interesting. Are there any particular functions/parameters that are typically used that, say, replicate 3-point light setups?

I guess at a certain point the number of terms becomes so large that it makes sense to just use a cube map?


In the era I'm familiar with (PS3, 360), everyone used the first 9 coefficients. You can read the original Ramamoorthi paper for better theory applied to lighting.

But yes, it's an approximation. If you have a ton of terms it looks like a bitmap, like you said.
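
(The 9 comes from keeping every band up through l = 2; each band l contributes 2l + 1 functions, so keeping order l in full gives (l+1)^2 coefficients:

  \sum_{l=0}^{2} (2l+1) = 1 + 3 + 5 = 9

)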


I am convinced that the vast majority of professionals simply don't bother to remember and, ESPECIALLY WITH GIT, just look stuff up every single time the workflow deviates from their daily usage.

At this point perhaps a million person-years have been sacrificed to the semantically incoherent shit UX of git. I have loathed git from the beginning but there's effectively no other choice.

That said, the OP's commands are useful, I am copying them (because obviously I won't ever memorize them).


> I am convinced that the vast majority of professionals simply don't bother to remember and, ESPECIALLY WITH GIT, just look stuff up every single time the workflow deviates from their daily usage.

I wrote a cheat sheet of common commands in my notes, until they stuck in my head; I haven't needed it now for a decade or more. I also lean heavily on aliases and "self-documenting" things in my .bashrc file. Curious how others handle it. A search every time I need to do something would be too much friction for me to stand.
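
For illustration, the kind of thing I mean (these particular aliases are just examples from my own setup, nothing canonical):

  # ~/.bashrc -- a few git aliases of the kind I lean on
  alias gs='git status -sb'                            # short status
  alias gl='git log --oneline --graph --decorate -20'  # compact recent history
  alias gd='git diff --stat'                           # what changed, by file
  # "self-documenting": list the aliases themselves when I forget them
  alias galiases="grep '^alias g' ~/.bashrc"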


I just use Claude Code as a terminal for git these days. It writes up better commit messages than I would write anyway. No more "git commit -m fix"


That could work if Claude Code made the code changes, but if you made them and only asked Claude to commit them, how does it know "why" you made those changes? Does it have access to your bug tracking system, for example?


> but if you made them and only asked Claude to commit them, how does it know "why" you made those changes?

It's an LLM. It can diff and figure out why I did what I did, in most cases

> Does it have access to your bug tracking system, for example?

You can give it access and tell it to look there


If Claude was used in the creation of the change, there's usually some dialogue for Claude to use.

FWIW I use Claude to help with code changes, then give the diff to Gemini to review/create meaningful commit messages.


I just wrapped these 5 diagnostic commands into a Claude Code skill. Because the post is useful but I'm not sure I can remember these git commands all the time... https://github.com/yujiachen-y/codebase-recon-skill


Indeed. I held off for a while but finally caved because I got sick of seeing commits with `git commit -m .` littered in there. These are personal projects so I'm the only one dev-ing on them, but it's still so nice to have commit messages.


I refuse to have aliases and other custom commands. Either it is useful for everyone, and so I make a change to the upstream project (I have never done this), or it won't exist the next time I change my system, so there is no point. I do have some custom tools I'm working on that haven't been released yet, but the long-term goal is to either delete them or release them to more people who will use them, so I know they'll be there next time I'm on a different system.


> I refuse to have alises and other custom commands.

I am the same way, and have caught much flack for it over the years.

But when I sit down at a foreign system (foreign in the sense that I haven't used it before) because something is broken and my help was requested, I don't have any need to lean on aliases.

I worked with someone once who had a very impressive bashrc, and it was very effective for them... on their workstation. Plop them in front of a production system and they can't even remember how to remount / rw, because they've been using an alias for so long.

This is also why I learned vi, having initially started with emacs 30 years ago, as it was first taught to me. I know vi will be there, and I know how to use it.


You don’t need aliases when you have fzf fuzzy history search with ctrl-r
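
(Setup is one line in your shell rc; the exact incantation depends on your fzf version. For fzf 0.48+ it's something like:

  # ~/.bashrc: wire up ctrl-r fuzzy history search;
  # older fzf versions ship a key-bindings.bash file to source instead
  eval "$(fzf --bash)"

)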


It's a tradeoff for sure. With dig especially, I can't ever remember the normal syntax because I have aliases and scripts for things. I feel the aliases are worth it since I'm on my own machine(s) 99.5% of the time, but it does suck to be handicapped.


Absolutely, and I think aliases are great and should be used. I, personally, worked in a handful of environments that made me realize it was infeasible to lean on aliases and helper scripts. Like bluGill said, if I need it in a real way, I'll try and upstream it.

What I resent is someone telling me how to use a computer. I've got that bit mostly down at this point.


> At this point perhaps a million person-years have been sacrificed to the semantically incoherent shit UX of git. I have loathed git from the beginning but there's effectively no other choice.

Yes! We mostly wouldn't tolerate the complexity and the terrible UX of a tool we use every day, but there's enough Stockholm Syndrome out there that most of us are willing to tolerate it.


Unless you're aware that such powerful commands are something you need once in a blue moon, and then you're grateful that the tool is flexible enough to allow them in the first place.

Git may be sharp and unwieldy, but it's also one of a decreasing number of real tools we still use. The trend of turning tools into toys has consumed the regular-user market and is eating into tech software as well.


Tools, done right, are a joy to use and allow you to be expressive and precise while also saving you labor. Good tools promote mastery and creative inquiry.

Git is NOT that.

Git is something you use to get stuff done, until it becomes an irritating obstacle course of incidental complexity.


Hg is a joy to use compared to git. Sure wish hg had won.


> Sure wish hg had won.

To me, it's more like GitHub won; Git came along for the ride.

Back in the day when companies evaluated Git and Mercurial (Facebook, Google, Microsoft), they decided Mercurial was better. Mozilla used Mercurial for a long time until switching to Git fairly recently.

But once GitHub took off and became the center of gravity for developers, it became the de facto standard.

It also explains why there have been several attempts (Sapling, JJ) to use Mercurial's semantics as a front-end for Git.


Why should there be tolerance? You look it up once, then write a script or an alias if it's part of your workflow. Or make a note if it's worth that. I use magit and I get quick action and contextual help at every step of my interaction with git.


That's why I really like lazygit: I don't need to remember much because all the keymaps are shown in the UI. I like those kinds of UIs, like which-key in Neovim, or Helix, or Doom Emacs.


I just use my IDE integrations for git. I absolutely love the way PyCharm/JetBrains does it, and I'm starting to be OK with how VS Code does. Remembering git commands besides the basics is just pointless. If I need to do something that the GUI doesn't handle, I'll look it up and put it in a script.


>I am convinced that the vast majority of professionals simply don't bother to remember and, ESPECIALLY WITH GIT, just look stuff up every single time the workflow deviates from their daily usage

Partly that, but for me at least, I have a bunch of simple bash scripts and aliases for things I do frequently. Git makes this really easy because you can set aliases for lots of custom commands in the .gitconfig file.
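
For example (these specific aliases are just ones I happen to find handy):

  # ~/.gitconfig
  [alias]
      st = status -sb
      lg = log --oneline --graph --decorate --all
      root = rev-parse --show-toplevel
      # a leading "!" runs an arbitrary shell command, so scripts work too
      publish = "!git push -u origin $(git branch --show-current)"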


I don't even think git's CLI UX is that bad. Didn't git pioneer this sub-command style? Much better than, e.g., ffmpeg. Sure, some aspects are confusing. I still don't understand why `checkout` is used both for changing branches and for clearing uncommitted changes. But overall I think the tool is amazing. I've not observed a bug in git once.


> I still don't understand why `checkout` is both for changing branches and clearing uncommitted changes.

Because `checkout` is for getting the working directory to the state of a specific revision. That means both switching branches (which are just pointers to revisions) and clearing changes (getting back to the starting revision). In both cases, you "check out" the version of the file at a specific commit or HEAD.
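
Both uses side by side (the file name is just an example):

  git checkout main           # point HEAD at branch 'main' and update the working tree
  git checkout -- src/app.c   # restore src/app.c from HEAD, discarding local edits
  # newer git splits these two roles into 'git switch' and 'git restore'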


> Didn't git pioneer this sub-command style?

No, various other tools used it before git, e.g. openssl.


Sure, but it certainly popularized it.


`git switch` can change branches too, if that's easier to grasp :)


I think this is where LLMs shine. I experience the same difficulty with a lot of command-line tools, e.g. find is a mystery to me after all these years. Whatever the syntax is, it just doesn't stick in my memory. Recently I just tell the model what search I want and it gives me the command.
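
The kind of thing I mean (paths and patterns purely illustrative):

  find . -name '*.log' -mtime +7                  # .log files modified more than 7 days ago
  find . -type f -size +100M                      # regular files larger than 100 MB
  find src -name '*.py' -exec grep -l TODO {} +   # .py files under src/ that contain TODO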


The relevant XKCD comic https://xkcd.com/1597/

FWIW I too was once a "memorised a few commands and that was it" type of dev; then I read 3 chapters of the Git book https://git-scm.com/book/en/v2 (well, really two: the first chapter was a "these are things you already know") and wow, did my life with git change.


I've recently been looking into some tools that provide quick or painless help, like pop-up snippets with descriptions and cheat sheets. Got any recommendations?


Navi is good for generating personal cheatsheets:

https://github.com/denisidoro/navi

But for Git, I can't recommend lazygit enough. It's an incredible piece of software:

https://github.com/jesseduffield/lazygit


I've found tldr to be useful

https://github.com/tldr-pages/tldr



Just hand-roll one. I wrote one in Python: use an SQLite db, call out to fzf, and voilà, you have the perfect tool. Codex can probably one-shot it.
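
A rough shell sketch of the same idea (snippets.db and its schema are whatever you set up yourself):

  # pick a saved command with fzf, then run the selection
  cmd=$(sqlite3 snippets.db 'SELECT cmd FROM snippets;' | fzf)
  [ -n "$cmd" ] && eval "$cmd"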

