A politician voting for a bill is legal. Giving money to a politician is legal. But giving money to a politician so he'll vote for a bill is not legal.
"It's a big club and you ain't in it". Obviously the problem is the club is too small, that's why for most of the people it is true that they are not part of it.
"Half the population is stupider than how stupid the average person is". As if somehow there's not a single person exactly on the median. In fact there is probably a huge number of people there, and within a margin of error of it.
How do you figure? I don't have a problem with Carlin, but with people who quote him as a source of wisdom.
Did the commenter who quoted him here in the thread mean to make a joke that I didn't get? I thought he quoted him as a point against the law we are discussing.
> "Selling is legal, and fucking is legal; but selling fucking is not legal."
I don't get it. The literal interpretation is a clear joke, as you say. So what's the point that it is making?
To be clear, I think the law discussed is stupid. I also think the argument that if both parts are legal they should also be legal together is wrong. What am I avoiding?
I am quite acquainted with Carlin. If there's anyone that can have their absurd logic repeated back to them, it would be a comedian. And That Right Soon.
I loved the look of the fonts on DOS after I upgraded from a C64. My favorite was the exclamation mark at 1024x768. It had curves! Pointy at the bottom, right above the dot, and a rounded curve at the top. I've never found a monospace non-bitmapped font that had the same character. (ha!)
I have encountered the exact same kind of frustration, and no amount of prompting seems to prevent it from "randomly" happening.
`the error is on line #145 fix it with XYZ and add a check that no string should ever be blank`
It's the randomness that is frustrating, and the fact that the fix would often be quicker to type in manually drives me crazy. I fear that all the "rules" I add to claude.md are wasting my available tokens, leaving it without enough room to process my request.
Yup, this is why I firmly believe true productivity, as in the tool actually making you faster, is limited by the speed of review.
I think Claude makes me faster, but the struggle is always centered around retaining my own context and reviewing code fully: reviewing fully to make sure it's correct and the way I want it, and retaining my own context to speed up reviews and not get lost.
I firmly believe people who are seeing massive gains are simply ignoring x% of the lines of code. There's an argument to be made for that being acceptable, but currently it's a risk-analysis problem, and not one I subscribe to.
If you're going to crochet the result, I don't think you really want 256 colors. A 16-color palette is probably achievable, if annoying.
Seems like if you print the image, then print a grid on a transparency sheet, you could mark up the sheet with colors until it looks good.
Maybe tracing paper (can you print a grid on tracing paper? Do you want to hand mark a grid on tracing paper?)
I don't use art tools, but you should be able to do something in software too, layer the grid on top, leave it transparent to the image until you pick a color for each square.
Drawing grids over an image is easy. Choosing which colour to drop in is impossibly hard, and that's where the art is. Nearest Neighbour and Average of N Points are some algorithms that can be used, but they don't take the overall style of the image into account. For example, one pixel could cover part of the nose and part of the eye, and averaging them makes a blurry mess.
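A minimal sketch of the "Average of N Points" approach, for a grayscale image stored as rows of bytes (the function name and representation are my own illustration, not from the thread). It shows exactly the failure mode described: every block is flattened to its mean, so a block straddling an eye and the nose becomes a mid-grey smudge.

```rust
// "Average of N Points" downsampling: each output pixel is the mean of one
// block x block patch of the input. It has no notion of edges or style, so
// patches that cross a feature boundary average into a blurry in-between value.
fn block_average(img: &[Vec<u8>], block: usize) -> Vec<Vec<u8>> {
    let rows = img.len() / block;
    let cols = img[0].len() / block;
    let mut out = vec![vec![0u8; cols]; rows];
    for r in 0..rows {
        for c in 0..cols {
            let mut sum: u32 = 0;
            for dr in 0..block {
                for dc in 0..block {
                    sum += img[r * block + dr][c * block + dc] as u32;
                }
            }
            out[r][c] = (sum / (block * block) as u32) as u8;
        }
    }
    out
}
```

Feed it a 2x2 patch that is half black (0) and half white (255) and the single output pixel comes back 127: neither colour, which is the "blurry mess" in miniature.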
Problem is that like vector fonts without hinting, naive automatic "pixelation" of images does a poor job. You have to work with the limitations of the medium, and sometimes it entails drawing something in a very different shape than if you had more resolution and color. There are image gen models that do an okay job at pixel art these days though.
Could you please describe your use case? How do you make use of it?
I use Claude Code, so I understand that paradigm; I don't grok this, though. Is it any different than going to a web page, i.e. gemini.google.com, and typing your query there?
Could this sidebar have been a "search bar" at the top?
Now that I say it out loud, adding them to the 'search providers' isn't a bad idea.
Generally speaking I am against this being shoved at us, but I find it a useful tool in a limited number of areas.
I always have my browser open, so having it just one click away and not interrupting with whatever else the browser is showing feels convenient.
I'm using Linux, so there are no official desktop apps I could use instead. Had there been, perhaps I'd have had a different opinion about the AI sidebar.
I use it to communicate with AI about content I'm reading without having to navigate away from the content and breaking flow. On a 4k screen there's plenty of horizontal space to have the AI sidebar and display a web page.
I've been a fan of all the Rust-based utilities that I've used. I am worried that 20+ (??) years of bug fixes and edge-case improvements can't be accounted for by simply using a newer/better code-base.
A lot of bug fixes/exploits are _CAUSED_ by the C core, but still... Tried & true vs. new hotness?
I do get what you mean, but Rust baked for a decade before it finally took off, and now that it has been repeatedly tried and tested it is eating the world, as some developers suggested it eventually would. I do, however, think this shows a different problem:
If nobody writes unit tests, how do you write them when you port over a project, to ensure your new language doesn't introduce regressions? All rewrites should be preceded by strong, useful unit tests.
Ideally, but if a project wasn't written with tests at the time then finding a working time machine can be a challenge. If you try to add them later you won't capture all the nuance that went into the original program. After all, if the implementation code was expressive enough to capture that nuance, you'd already have your test suite, so to speak. Tests are written to fill in the details that the rest of the code isn't able to express.
Tests are written for various goals: integration testing, preventing regressions, and, in the same vein, protecting mission-critical / business-logic code. If all those nuances are captured by good tests, you arguably have "100%" test coverage; you don't need to test every single line of code ever written to have 100% coverage in my eyes. Then, when you go to translate your project to a new language, you port the tests first and check the new code against them.
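The "port the tests first" idea can be sketched in miniature. Assume a hypothetical `word_count` utility being rewritten in Rust (the function and its legacy edge cases are invented for illustration): the tests pinning down the old tool's observable behavior are ported before, and independently of, the new implementation.

```rust
// New implementation of the hypothetical utility being ported.
fn word_count(input: &str) -> usize {
    input.split_whitespace().count()
}

// Characterization tests ported ahead of the rewrite: they encode the
// legacy tool's contract (including edge cases like empty input and runs
// of whitespace), so the new code is checked against the old behavior.
#[cfg(test)]
mod ported_tests {
    use super::*;

    #[test]
    fn matches_legacy_behavior() {
        assert_eq!(word_count("two words"), 2);
        assert_eq!(word_count(""), 0); // legacy behavior: empty input counts 0
        assert_eq!(word_count("  lots   of   gaps "), 3);
    }
}
```

The point is that the test module is a straight translation of the old suite, not something derived from the new code, so a regression in the port shows up as a failing assertion rather than a silent behavior change.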
I was born in 1990 so I get it! I still say 21 when people ask me how old I am... Aka how old do I need to say I am to be able to drink alcohol LOL I don't drink that often mind you. I just don't really think about my age a whole lot...
Rust has editions for strong stability guarantees, and has had them for nearly a decade, I believe. Besides, tech backing has grown way past the risky point.
FWIW, the GP comment's claim that you're lucky if you can compile 2-year-old code is exaggerated, but so is yours. Rust does not offer "strong stability guarantees". Adding a new method to a standard type or trait can break method inference, and the Rust standard library does that all the time.
In C or C++, this isn't supposed to happen: a conformant implementation claiming to support e.g. C++17 would use ifdefs to gate off new C++20 library functions when compiling in C++17 mode.
> and the Rust standard library does that all the time.
I don't doubt this is true, but do you have an example? I think I haven't run into a build breaking like this in std in like maybe seven/eight years. In my experience breaking changes/experimental apis are typically ensconced in features or gated by editions.
Granted, it'd be nice to be able to enforce ABI stability at the crate level, but managing that is its own can of worms.
I did find that the breakage rfc allows for breaking inference, which tbh seems quite reasonable... inference is opt-in.
Almost every major release of rust stabilizes new library methods. For example, the latest major release (1.93) stabilized Vec::into_raw_parts. This isn’t gated by an edition. So if you had a trait with a method “into_raw_parts” which you had defined on Vec, after updating to 1.93 or later your code will either fail to compile, or start running different code when that method is called.
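The hazard can be sketched with a contrived local type standing in for `Vec` (names here are invented; a single file obviously can't reproduce a cross-release std change). Rust's method resolution prefers inherent methods over trait methods, so when a type gains an inherent method with the same name as your extension-trait method, existing call sites silently switch which code they run:

```rust
// Stand-in for Vec. Pretend the inherent impl below is a method the
// standard library stabilizes in a new release.
struct Wrapper(Vec<u8>);

// Downstream extension trait, written before the "new release".
trait MyExt {
    fn len_twice(&self) -> usize;
}

impl MyExt for Wrapper {
    fn len_twice(&self) -> usize {
        self.0.len() * 2
    }
}

// The "newly stabilized" inherent method. Once it exists, plain method
// calls resolve to it instead of the trait method above.
impl Wrapper {
    fn len_twice(&self) -> usize {
        self.0.len() * 10
    }
}
```

With the inherent impl present, `w.len_twice()` returns 30 rather than the trait's 6; the trait method is still reachable, but only via fully qualified syntax like `MyExt::len_twice(&w)`. That silent switch, with no compiler error at the call site, is the "start running different code" case.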
Sorry, I meant to write “method resolution”, not inference. This isn’t the same issue as type inference (though indeed, stdlib changes can break that too)
> years of bug fixes and edge-case improvements can't be accounted for by simply using a newer/better code-base.
It is in fact partially true: because Rust uses a better type system (in the ML tradition) plus a better resource model (the borrow checker), if you are decently good you eliminate, forever, tons of problems.
It can't solve things that arise from complex interactions, or from failing to port subtle details, like when parsing poor inputs (e.g. HTML), but it is true that changing the language does solve tons of things.
A link to the comment in question would be good. I had a quick go at finding it but couldn't (though I can't say I spent a huge amount of effort on this project).
But I agree that it's useful to highlight this kind of thing. If you disagree with the author, you probably won't then want to give them your time and attention, let alone your money. But, on the other hand, if you agree, maybe you'll feel the other way! Which is a roundabout way of describing my favourite thing about the internet, since I'm old enough to remember what it was like Before: whoever your people are, for good or for ill, you'll be able to find them.
yeah, I am a trans lady who was kinda tempted to give this guy six bucks for this insanely overproduced version of a primitive video game, and now I am not gonna do that. <3
> I'm old enough to remember what it was like Before: whoever your people are, for good or for ill, you'll be able to find them.
it's still like this, you just have to look a little harder, and be more wary of nazi recruiters than you used to.
I meant a link to the blog post including the comment quoted. (I suppose "comment" vs "post" could be a bit ambiguous. Sorry about that.) Anyway, for the record, it's here: https://kodiak64.co.uk/microblog/Parallaxian_Cancelled
Canada should re-enact the Auto Pact [0] (tl;dr: I don't see this in the wiki article, but the real benefit was that for every 3 cars sold in Canada, 1 had to be 'made' in Canada). This was ruled unfair under NAFTA and thus terminated, which also had the effect of incredible auto-industry cutbacks.
BUT, with a new contender (China), we could re-enact it, rebuild our diminished blue-collar manufacturing base, and hasten the rollout of EVs. Which is the real objective here.
But are Chinese EVs attractive to consumers if they are built in Canada with union wages? At that point people will just keep buying Toyotas/Hondas that are also built in Canada.
I'd expect quite a few consumers would still want them. Canada has cheap electricity and expensive gasoline. For those who don't live in some part of Canada so cold that the efficiency of an EV drops massively due to heating an EV can save quite a bit on energy costs.
Around 65-75% of Canadians live in parts of Canada that have winter temperatures similar to those of Norway's major cities, and EVs perform fine in Norway, so they will probably also be fine in Canada.
The US, Japanese, and Korean car companies are putting most of their EV effort, at least in the US and Canada, into more expensive models. They don't have much that is the EV equivalent of a Toyota Corolla or a Honda Civic among non-SUVs, or of a RAV4 or CR-V among SUVs.
Honda for example only has the Prologue, which is built on top of GM's Equinox EV platform and starts at about $15k more than an Equinox EV.
The Chinese EV companies seem more willing to address that segment. Even if they have to pay union wages to build them there will be demand because it will still be cheaper than the EVs that are aimed at a more upscale market the other companies are mostly making.
Or you just set minimum prices for cars, like Europe did with China, so that state support doesn't distort the market. Because guess what: producing a car in a faraway land, shipping it around the world, and paying some 10% tariff is also not that cheap.
The current deal is for 45,000 cars, which they think will all be sold in 90 days or less. There is also mention of BYD building a plant in Canada, with whatever balance of imports and domestic production gets agreed on, so there is room and time for something like the Auto Pact with China.
Nova Scotia here, off grid; really want to build a new, bigger solar PV setup with sodium batteries, designed for the whole house, the shop, and car charging.
Time for that is looking like now!
The time to negotiate that would have been before this announcement. Carney has doomed Canada's auto industry because he is negotiating with his emotions.
The deal allows up to 70,000 cars a year by 2030 to be imported at the reduced tariff. Canadians buy 1.5-2 million cars per year, and roughly a quarter million EVs per year.
If this deal as reported somehow manages to doom the Canadian auto industry, then our auto industry was probably somehow doomed anyways.
I don't see how. Chinese manufacturers aren't going to set up multi-billion-dollar plants without some market presence; that comes after.
Letting in some small number of Chinese EVs so they can test the waters seems sensible all around. If they are popular, then negotiate on local manufacturing to allow a larger market share.
"Selling is legal, and fucking is legal; but selling fucking is not legal."