My first post to HN, but I want to save you the click because I think the message is more important.
In a system that can't see you, your behavioural patterns become the basis for discrimination against you. That's all a digital system can really see about you. Physical attributes don't mean anything in a digital world; things like your ethnicity, gender, and age aren't relevant to digital systems in the same way they are to bigots. But they can be inferred from your behavioural patterns.
As systems increasingly categorize us based on our behavioural patterns, they force us into different digital classes: some users are ideal, some are problematic. If you use a system in a way that doesn't violate any terms but that the operators didn't intend, you face increasing algorithmic discrimination as you try to navigate it. These are the systems we use in everyday life, your bank account, social media, iCloud and Google Drive. You could be restricted or completely cut off at the whim of an algorithm, stuck without appeal, because you won't even know which rule you broke.
And nobody knows what the rules are except for the ones who wrote 'em.
I've got the weird problem that Meta won't let me create an Instagram account, which is a real head-scratcher because I've never done anything obnoxious on a Meta property, as far as I can tell.
It follows me around to different places: my home network is unusual because I'm behind a load balancer, but I have the same problem at work and at the gas station, on my tablet, my laptop, and more than one desktop computer. It also affects me if I try to use one of several email addresses instead of my Facebook account. The only thing I can think of is that maybe I have to supply a mobile number from a real carrier rather than a Skype number to verify by text, but it doesn't outright say that.
It is not usually a good idea to use skin color in your analogy, because racism still exists. Since not everyone experiences racism equally, you lose a crucial variable in the analysis. For example, while it's true that having a medical issue can be unpleasant/scary/costly, being white with a medical issue is not the same as being black with a medical issue. Any discrimination based on digital behavioral patterns will compound existing discrimination, not replace it.