I hope this is not trolling so I'll bite. It is incredibly natural to represent an object, such as an email, as an Email class in object oriented languages like C++. It'd then have a constructor that accepts a string and constructs the email object from said string, or maybe a parse(string) -> Option<Email> thingy. The type system then ensures the checks are present whenever they have to be, and nowhere else.
Tl;dr: there's nothing extra that functional or OO programming give you here. Both allow you to represent the problem in a properly typed fashion. Why would you represent an email as a string unless you are a) deeply inexperienced or b) have some really good reason to drop all the benefits of a strongly typed language?
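To make the idea concrete, here's a minimal sketch of the same "parse, don't validate" pattern in Python rather than C++ (the `Email` type, the regex, and the function names are all illustrative, not from any particular library):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Deliberately loose pattern, for illustration only; real email validation is harder.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

@dataclass(frozen=True)
class Email:
    local: str
    domain: str

    @classmethod
    def parse(cls, raw: str) -> Optional["Email"]:
        """Validate once at the boundary; return None on failure."""
        if not _EMAIL_RE.match(raw):
            return None
        local, domain = raw.rsplit("@", 1)
        return cls(local=local, domain=domain)

def send_welcome(address: Email) -> str:
    # Anything holding an Email has already been validated by construction,
    # so no re-checking is needed here.
    return f"Sending welcome mail to {address.local}@{address.domain}"
```

The point is the same as in the C++ version: the check lives in exactly one place (`parse`), and everywhere else the type system guarantees it already happened.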
I completely agree with you, but I think sometimes folks carry some piece of data around as a string or int instead of something more concrete like a class or a strongly typed enum, purely out of laziness!
I think the old Lisp tradition of using lists for everything is related to this somehow. On the other hand, in Common Lisp programmers can define custom types that have to fulfill a predicate function. Then, if they declare the types of their functions, most implementations will generate type-checking code unless instructed not to. So in Common Lisp you can use lists for everything but still have type-checking, at some cost to efficiency. :D
Well, in C++ the constructor must return a value of its class type - you can't return an Option<T> from a constructor on T, for example, and since constructors are the canonical way to construct an object, it creates stylistic and idiomatic friction when you start using free functions to create a Maybe<T> instead of constructors.
Slight tangent: the post says that github is crumbling. Can someone get me up to date on what's going on please? Admittedly I'm not following tech drama particularly closely, but I thought I'd have heard if a major thing like github was going down the chute.
So there have been increasing issues from the github side for the past year, and I believe they also just lost a lot of customer/user data, on top of several critical vulnerabilities and bugs in the base service and in Actions.
My POV: Github Actions are inconsistent in billing and security and require a lot of attention to do right. Github has worse uptime than a lot of free online videogame services, even though most of the enterprise and business world leans on it for developers. That has left a lot of users with a terrible experience over the past year, constantly firefighting github issues around availability, security, and billing instead of doing work that makes the company/people money.
Example walkthrough of securing github actions for CI/CD and managing SBOM python dependency/supply chains (giant complexity) [1]; github has remote code execution [2]; uptime by a 3rd-party tracker shows 86% over the past 90 days (first quarter in 2 years where they didn't have at least one month above 90% uptime) [3].
> but I thought I'd have heard if a major thing like github was going down the chute.
Wow, it started going down the chute a really long time ago; can't believe someone missed it. It made big news at the time, back in 2018! This was the turning point: https://news.ycombinator.com/item?id=17221527
This is happening quite a lot actually. People just feed an existing project into their agent harness and have it regenerate more or less the same with a few tweaks and then they publish it.
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
It's actually unbelievable that this would be taught as anything but a cautionary tale of survivorship bias.
The FedEx founder got lucky. The countless others who tried a similar gamble didn't, and unfortunately their stories don't seem to be taught, because "desperate founder gambled the employees' salaries and lost" just doesn't have the same ring.
I see where you are going with this, but IMO this is not a technical problem but a legal problem.
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster. However, you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
I think it will move most critical due diligence to the tools/HR systems themselves.
Encoding more rules, more precise rules, and alerting a human in case it thinks something is off. Like a salary increase of 20% gets flagged automatically. A revenue drop by x% too.
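A toy sketch of that kind of rule layer, using the salary-increase example (the thresholds, field names, and rule functions are all made up for illustration):

```python
from typing import Callable, Optional

# Each rule inspects a proposed change and returns an alert string, or None.
Rule = Callable[[dict], Optional[str]]

def salary_jump_rule(change: dict) -> Optional[str]:
    old, new = change.get("old_salary"), change.get("new_salary")
    if old and new and (new - old) / old >= 0.20:  # 20% threshold, made up
        return f"salary increase of {(new - old) / old:.0%} needs human review"
    return None

def revenue_drop_rule(change: dict) -> Optional[str]:
    old, new = change.get("old_revenue"), change.get("new_revenue")
    if old and new and (old - new) / old >= 0.10:  # threshold made up
        return f"revenue drop of {(old - new) / old:.0%} needs human review"
    return None

def review(change: dict, rules: list) -> list:
    """Collect alerts; an empty list means the agent may proceed."""
    alerts = []
    for rule in rules:
        alert = rule(change)
        if alert is not None:
            alerts.append(alert)
    return alerts
```

The agent proposes the change, the rule layer decides whether a human gets pulled in, and the thresholds live in code that can be audited rather than in the model.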
It could even go so far that the maker of these systems will insure you for their use.
It just needs to be cheaper than all the humans in the loop, and if you train it once, you can copy it unlimited times. That's the scaling effect of software, applied to tasks for which we currently need to train a human again and again.
It could also be agent systems that do this. Like one company building and designing a US healthcare HR agent specialized in SAP HR, and another one a Brazil healthcare HR agent specialized in some other HR software.
Humans are really expensive, and you have to train them regularly, every single one of them.
Parent is pointing at the fact that the relationship between our perception of MS products and their financial success is highly inelastic. The bottom line isn't impervious to bad product decisions, but there can be a large number of user hostile decisions that PMs push through that still increase revenue on the whole even at the cost of user satisfaction, before they move past the optimal point in the payoff curve.
How would this really help python though? This doesn't solve the difficult problem, which is that python objects don't support parallel access by multiple threads/processes, no? Concurrent threads, yes, but only one thread can be operating on a python object at a time (I'm simplifying here for brevity).
There are already means of passing around bulk data with zero copy characteristics in python, but there's a lot of bureaucracy around it. A true solution must work with the GIL (or remove it altogether), no?
I'm not familiar with CPython GC internals, but I think there are mechanisms for Python objects to be safely handed to C/C++ libraries and used there in parallel? Perhaps one could implement a handoff mechanism that uses those same mechanisms? Interesting idea!
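For the "bulk data with zero copy" route mentioned above, the stdlib mechanism is `multiprocessing.shared_memory`: a named block that multiple handles (including ones in other processes) can view through the buffer protocol without copying. A minimal sketch, not the object-handoff mechanism the comment speculates about, just the existing bulk-data path:

```python
from multiprocessing import shared_memory

# Producer: place bulk bytes into a named shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Consumer (could just as well be another process): attach by name and
# read through a zero-copy memoryview.
view = shared_memory.SharedMemory(name=shm.name)
data = bytes(view.buf[:5])  # copies out here only for demonstration

view.close()
shm.close()
shm.unlink()  # free the underlying block
```

This is exactly the "bureaucracy" the comment above describes: the bytes are shared, but turning them back into rich Python objects on each side still happens under each interpreter's own GIL.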
Virtual environments have always been associated with projects in your use case, I guess.
In my use case, they almost never are. Most people in my industry have 1-2 venvs that they use across all their projects, and uv forcing one into each project directory made things quite inconvenient, with unnecessary duplication of the same sets of libraries.
I dislike conda not because of the centralized venvs, but because it's bloated, poorly engineered, slow and inconvenient to use.
At the end of the day, this gives us choice. People can use uv or they can use fyn and have both use cases covered.
> and uv forcing it into a single project directory made it quite inconvenient and unnecessary duplication of the same sets of libraries.
Actually, uv intelligently uses hardlinks or reflinks to avoid file duplication. On the surface, venvs in different projects look like duplicates, but in reality they reference the same files in uv's cache.
BTW, pixi does the same. And `pixi global` allows you to create global environments in central location if you prefer this workflow.
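The mechanism is plain OS-level hardlinking, which you can see for yourself (the paths below are made up to mimic a cache file and a venv entry; this illustrates the filesystem behavior, not uv's own code):

```python
import os
import tempfile

# A "cache" file and a "venv" entry hardlinked to it.
with tempfile.TemporaryDirectory() as d:
    cache_file = os.path.join(d, "cache", "lib.py")
    venv_file = os.path.join(d, "venv", "lib.py")
    os.makedirs(os.path.dirname(cache_file))
    os.makedirs(os.path.dirname(venv_file))
    with open(cache_file, "w") as f:
        f.write("print('shared')")

    os.link(cache_file, venv_file)  # hardlink: same inode, no extra data stored

    same_inode = os.stat(cache_file).st_ino == os.stat(venv_file).st_ino
    link_count = os.stat(cache_file).st_nlink  # 2: one path in cache, one in venv
```

Ten venvs pulling in the same wheel cost roughly one copy of the files plus ten sets of directory entries.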
EDIT: I forgot to mention the elephant in the room. With agentic AI coding you do want all your dependencies under your project root. AI agents run in sandboxes, and I do not want to give them extra permissions to poke around my entire storage. I start an agent in the project root, and all my code and .venv are there. This gives the agent a sense of locality: it only needs to poke around under the project root and nowhere else.
This is actually the feature that initially drew me towards uv. I never have to worry about where venvs live while suffering literally zero downsides. It's blazing fast, uses minimal storage, and version conflicts are virtually impossible.
Do you only work on projects individually? Without project-specific environments I don’t know how you could share code with someone else without frequent breakages.
Just how much effort even went into this? The project is LLM generated, the blog post is LLM generated. It produced something that is really annoying to deal with as a consumer. The last thing I want to talk with when calling a boutique garage is some AI receptionist.