> Google now pulls the rug on Android which is a whole different story because it used to be open. The whole idea of Android was to be open.
This is the narrative for us in developed nations, but most users today are people in developing countries who got a mid-tier smartphone to chat with friends and do their banking, and who hold the same values as Apple users.
My intuitive understanding about double descent is that
1. Older ML models encoded, in their architecture and limited expressivity, a bias toward simplicity, which aided interpolation.
2. Overparameterized models instead rely on regularization to nudge parameters toward simpler, more robust representations, while still memorizing the noise. In this manner, we still achieve out-of-distribution (OOD) generalization. Moreover, the softer nudging and the architecture's fundamental expressivity allow for "data-specific" generalizations and representations that may be impossible to represent in small models.
3. At the critical point between the two regimes, the model is expressive enough to memorize, but not expressive enough to do that and encode general patterns at the same time.
I wonder how this understanding translates to these researchers' models of deep learning.
I don't understand the locus of the arrangement/decision that you find dehumanizing. There are several distinct ways someone might find aspects of such an arrangement, or the change of arrangement, dehumanizing. I shall list them, though I may or may not subscribe to them (for the purpose of this comment I am assuming Filipino call center contractors, though one may substitute any other country whose population knows English and to which jobs are outsourced):
- Is it dehumanizing to Filipinos that Filipinos probably now do their job more efficiently without having to learn an accent that they are not exposed to?
- Is it dehumanizing to Filipinos that they no longer enjoy having their accent heard as an externality of a counterfactual arrangement?
- Is it dehumanizing to the customers that the company does not expect their customers to be cosmopolitan enough to understand a foreign accent with ease?
- Is it dehumanizing to the customers that the customers are now more sensorily shielded from a current-day reality regarding globalized providers of service?
- Is it dehumanizing, not due to this decision itself but the globalized arrangement, to Canadians that they cannot expect to hold such a job and get by in Canada? Or perhaps to Filipinos, that such a job might be low-paying in their own country (or with respect to non-domestic goods that need to be purchased from outside their polity)?
- Is it dehumanizing, regarding not this decision, but the offshoring decision, that such decisions can be made without consent by employees and contractors?
It's mostly the "voice smoothing" part of this technology that I have morality issues with.
This isn't any different from (usually white) teachers telling (usually black) kids to "speak better" simply because they consider the way they speak "wrong." Or, since TELUS is in Canada, like the Residential school system [^0] that their First Peoples were forced to attend, which did the same thing.
I believe that your voice makes you _you_. Taking that away because some people have trouble understanding dialects is literally taking away a foundational piece of one's humanity.
It's also a slippery slope: what's there to stop companies that do this from going straight to making everyone sound like a collection of voice profiles? Such a move would only make it easier to justify gutting customer service departments entirely.
I am not impacted by this issue on either side, but I am in the "dehumanising" camp, so here are my opinions:
> Is it dehumanizing to Filipinos that Filipinos probably now do their job more efficiently without having to learn an accent that they are not exposed to?
It's already demeaning to expect them to "learn an accent", unless their job description is to literally pretend they are from a different culture (e.g. if they were actors). Introducing an "AI" middleman to change their voice is demeaning and dehumanising.
> Is it dehumanizing to Filipinos that they no longer enjoy having their accent heard as an externality of a counterfactual arrangement?
It is dehumanising to any person that their own human voice is no longer heard when performing a job involving human contact.
> Is it dehumanizing to the customers that the company does not expect their customers to be cosmopolitan enough to understand a foreign accent with ease?
Not quite dehumanising, but it is certainly patronising that the company has an opinion as to what voice their customers can or cannot understand. And if the company is hiring customer service agents whose accents are a serious hindrance to understanding, I would argue that those hires are not likely to accurately understand the very customers they are supposed to assist.
> Is it dehumanizing to the customers that the customers are now more sensorily shielded from a current-day reality regarding globalized providers of service?
Not dehumanising, but again patronising, and also disrespectful and borderline dishonest.
I won't get into the final two points, as those are prior to the accent-middleman "AI".
> It's already demeaning to expect them to "learn an accent"
The concept of an accent is broad, but at least part of it must be learned along with the language: speaking a non-native language with a thick accent partly reflects what you have yet to learn.
Without being exhaustive, things that might fall into the "speaks with an accent" concept in this thread:
- Prosody. Prosody can vary per region, but a distinctly alien prosody is a barrier for the listener, who expects a given language within a certain range of prosodies. E.g. as I know French quite well, hearing English with a heavy French accent makes my brain try to parse what's being said as French, which interferes a lot.
- Sound shifts for particular phonemes. While some of these are local to the language in certain registers (idea --> /ide"er"/, three --> /free/), others are clearly issues with the target language's pronunciation (e.g. Japanese speakers having trouble with the /l/ phoneme, or Spanish speakers adding an /e/ before word-initial s-clusters, or confusing /v/ and /b/).
- Connected speech. Where do you end words, how do you omit sounds, etc. Also a massive hindrance to understanding.
- Grammar. Alien grammar is a hindrance to communication. You need to learn that.
> It's already demeaning to expect them to "learn an accent"
Uh, what? Excuse me?
The purpose of spoken language is communication. Accents can frustrate or enhance communication. In this case, conforming to the accent of the client enhances communication, because it is what the client is familiar with.
You do realize that the obligations of service are on the agent, right? It is the agent, as representative of the company providing a service, who is serving the client. If the aim of an agent is to assist a client, then using an accent that is more intelligible to the client is part of serving them.
You might as well claim that - given that language is part of culture - learning to speak another language at all is "pretending" that you're from a different culture. It's a ridiculous take.
> It is dehumanising to any person that their own human voice is no longer heard when performing a job involving human contact.
What does this even mean? What is your "own human voice" here? Accents are learned. They are conventional, even if they have objective properties that allow them to be compared. An agent's job isn't about him; it is about the client. It's not about "being heard" (whatever that means), but being understood by the client within the context of the purpose of the job.
Imagine if diplomats thought the way you do. Diplomats serve and represent their country, just as agents serve and represent their company. It is in the interest of the diplomat, his country, and the other party to communicate as effectively as possible with the other party.
> Not quite dehumanising, but it is certainly patronising that the company has an opinion as to what voice their customers can or cannot understand.
This, too, is nonsensical. Given that companies record calls, it is fair to assume that the company has statistical evidence concerning the accents of their agents and how well they're understood by their clients.
Now, if you want to criticize the use of AI in such cases on independent grounds, maybe you can make a case. I don't think it would be a very strong case, as this is such a trivial matter. But you cannot claim that learning accents is "dehumanizing". Accent is part of language. If you wish to communicate with a people, you need to speak a common language. That generally means learning their language. The better you speak that language, the better you can communicate with them. If you are serving, the burden is on you to speak in a way that can assist understanding. It's that simple.
But accent and pronunciation are different things, and not having a particular accent doesn't mean that you don't speak the language well; what matters most is pronunciation. Sometimes it gets ridiculous, like when Trump had an interpreter for a man who was a native English speaker but had an accent, or when Trump asked an African leader where he learned English when it was his native language.
Coming back to accent being different from pronunciation: in English tests like IELTS or Cambridge, accent is not graded.
> But accent and pronunciation are different things [...]
This isn't true in the way you are thinking. One accent can merge sounds that another accent distinguishes, or pronounce word x the way another accent pronounces word y. What comes to mind immediately: in Indian English accents, the RP/GA fricative "th" is pronounced as an aspirated stop, while the RP/GA aspirated "t" is pronounced retroflex, so naively, "three" can be misheard as "tree".
The working-class accent I use where I'm from (not India) is syllable-timed (stress does not lengthen the duration of a syllable), marks lexical stress with pitch rather than intensity/loudness, and the stress itself frequently falls in very different places compared to RP or GA. For "th" as well, we collapse it into t/d.
All in all, for someone who has heard it for the first time or rarely, it can be extremely disorienting to listen to a very distant foreign accent.
I don't think you need to go that deep. This technology is literally dehumanizing: it's replacing individual human aspects of someone's voice with a computer-generated facsimile.
By that same argument, taken naively, film and video are dehumanizing, but not deplorably so: certainly the intensity of emotion and experience through film is far less present than say immersive theater, but we may be more comfortable with this modality, and also, benefit from the economies of scale.
Similarly, a call center worker may not care about having their accent heard, but wants to get their numbers up without struggling with a customer who isn't familiar with their accent, and enjoys the ease of speaking in their own accent rather than having to use one that distant customers are accustomed to. Likewise, a customer probably just wants their problem fixed, without the effort of getting accustomed to an accent they rarely encounter. This meets your definition of dehumanizing, but, analogous to the former scenario, perhaps not deplorably so.
My position regarding devices is that a device should satisfy only 2 of these 3 properties:
1. Is used as a proof of identity (for banks, govt services, etc.)
2. Is distributed to laypeople who have more pressing concerns in their lives than security.
3. Is an open platform where you can download apps arbitrarily from the Internet that can read your data and exfiltrate them to a malicious actor.
The mainstream today chooses 1&2. Novelty, underpowered devices choose 2&3. Hobbyists have option 3 (and those who like to live dangerously 1&3) with some inconvenience. You can still run GrapheneOS... and the mainstream apps that expect your device to be a proof of your identity won't work... and I find that quite reasonable.
I take issue with the idea that openness and freedom to install arbitrary software cannot occur without strong safety mechanisms. Android/GrapheneOS/iOS have sandboxing and permissions systems that put most desktop OSes to shame. The base platform can control apps' access to every resource, and an app store can put its own caveats and reminders to users for what kind of access is needed for the functions of a given app.
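The platform-mediated access control described above can be sketched as a toy broker, where apps never touch a resource directly and every access is checked against an explicit grant. This is a minimal illustration of the model, not any real Android or iOS API; all names here are hypothetical.

```python
class PermissionDenied(Exception):
    pass

class Platform:
    """Toy resource broker loosely modeled on mobile-OS permission systems."""

    def __init__(self):
        self._grants = {}  # app name -> set of granted permissions

    def grant(self, app, permission):
        # In a real OS this would be a user-facing prompt, not a direct call.
        self._grants.setdefault(app, set()).add(permission)

    def access(self, app, permission, resource):
        # Every resource access is mediated by the platform, not the app.
        if permission not in self._grants.get(app, set()):
            raise PermissionDenied(f"{app} lacks {permission}")
        return resource()

platform = Platform()
platform.grant("chat_app", "CONTACTS")

# Granted permission: access succeeds.
contacts = platform.access("chat_app", "CONTACTS", lambda: ["alice", "bob"])

# Ungranted permission: the sandbox refuses, regardless of the app's code.
try:
    platform.access("chat_app", "MICROPHONE", lambda: b"audio")
    denied = False
except PermissionDenied:
    denied = True
```

The point of the sketch is the second call: the app's capabilities are bounded by the platform's grant table, which is exactly the property that lets an open platform remain safe to distribute arbitrary apps on.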
Sandboxing and permissions provide a different type of security than application signatures. Sandboxing can limit an app's capabilities, but it doesn't change the fact that you can accidentally grant a malicious application permissions.
Application signatures and developer identification bring a different kind of application security: they provide the backing of societal legal systems and legal ramifications for malicious actors.
In the end, you still have the choice to trust the "system" or your own judgment.
This is not really a complex question as much as an analogy demonstrating that allowing third parties to dictate how you live leads to a huge loss of your freedom, with bad consequences for your independence and control. But you are right: I could have said this in my comment above.
It's a set of false choices. Google has complete control over Android, and they could easily implement 1, 2, and 3 if they wanted. It's not as if they couldn't provide the means for certified secure-enclave apps in addition to normal ones.
Companies might bet that it is safer to base their businesses on more fungible, explicated domain knowledge than on knowledge that is siloed in human brains.
In my mind this is it: the colloquial seasons, with vague boundaries that depend on feel, whereas the calendar "seasons" are there just to quarter the year artificially.
It's so funny to me that you compare a decapitation strike with the stated aim of regime change to vandalism; I'd compare the actions taken against Iran in 2025 to vandalism instead.
There are areas of mathematics where the standard proofs are very interesting and require insight (often new statements, definitions, and theorems for their own sake), but the theorem statements themselves are banal. For an extreme example, consider Fermat's Last Theorem.
Note, on the other hand, that proving standard properties of many computer programs is frequently just tedious and should be automated.
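As a small illustration of the kind of "standard property" that is tedious by hand but trivially automatable, here is a sketch in Lean (assuming Lean 4 with Mathlib-style tactics available):

```lean
-- Associativity of list append is true but mechanical to prove by hand;
-- the `simp` tactic discharges it automatically from known lemmas.
example (xs ys zs : List Nat) :
    (xs ++ ys) ++ zs = xs ++ (ys ++ zs) := by
  simp
```

Nothing in such a proof is interesting to read; the value is entirely in it being checked, which is exactly why automation is the right tool.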
Yes, but > 90% of the proof work to be done is not that interesting, insightful stuff. It is rather pattern matching from existing proofs to find what works for the proof you are currently working on.
If you've ever worked on a proof for formal verification, then it's... work... and the nature of the proof probably (most probably) is not going to be something new and interesting for other people to read about; it is just work that you have to do.
I think I have your new build(s) as I can play from the archive, but the contrast is still too low for me. Note that displays are different, so it's likely that things are more indistinct for me than for you. I've just played through the archive, and all my mistakes came from off-by-one errors because the grid lines were indistinct and I misplaced the holes.
Hmm, I think you made the grid lines thicker? I prefer it, but the thickness looks inconsistent (several divisions of the paper do not seem to have thicker grid lines on my display).