> Actual language used by the ML researchers: "Intuitive physics learning in a deep-learning model inspired by developmental psychology"
In my opinion, this is still anthropomorphizing the algorithms. "Deep learning" is a poor description of what actually goes on. Someone please correct me if I'm wrong, but all ML does, in essence, is statistical regression. It doesn't "learn" the way a person learns. Neural networks are not actually like brains (as far as we understand how the brain works).
I feel like the whole industry is inundated with aphorisms that are kind of true, but not wholly true. Evolutionary algorithms, neural networks, deep learning, DeepMind - this stuff all reeks of anthropomorphizing fundamentally mathematical processes. I get it: it's a lot easier to get the gist of "the computer is learning/training" than "the computer is refining the weights and biases to try to optimize the output".
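To make the "refining weights and biases" phrasing concrete, here is a toy sketch (my own illustration, not from any of the papers discussed): what gets called "training" is just iteratively nudging a weight and a bias to shrink a squared-error loss, i.e. fitting a regression.

```python
# Fit y = w*x + b to data generated from w=2, b=1, by gradient descent.
# The "learning" here is nothing but iterative least-squares optimization.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw  # "refine the weight"
    b -= lr * gb  # "refine the bias"

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Whether you call that loop "learning" or "curve fitting" is exactly the terminological question being argued here.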
Well, it's not called a person-learning-machine, is it? Why would something have to learn "like a person" to be able to use the word "learn"? The two concepts are not tied to one another - if they were, saying "learn like a person" would be a pleonasm, yet it isn't.
IMHO "learning" is a fine term, it conveys the idea of what is happening effectively and quickly.
Also, we don't know how a person learns anyway, it might very well be a similar process, just way more efficient and complex.
> Evolutionary algorithms
What would you propose calling them instead? You have a generation of agents, each with its own specificities, and from the agents most successful at the task at hand we derive a new generation, slightly modified from their parents.
It seems to me "evolution" is again the most suitable and efficient way of describing what is happening.
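The loop described above really is that short. A toy sketch (my own illustration; the task and numbers are made up): keep the fittest agents, then derive the next generation from them with small random modifications.

```python
import random

random.seed(0)

TARGET = 42.0  # toy task: evolve a number close to TARGET

def fitness(agent: float) -> float:
    return -abs(agent - TARGET)  # higher is better

# A generation of agents, each with its own "specificities".
generation = [random.uniform(-100, 100) for _ in range(20)]

for _ in range(100):
    # The agents most successful at the task become parents...
    parents = sorted(generation, key=fitness, reverse=True)[:5]
    # ...and the new generation is derived from them, slightly modified.
    generation = [p + random.gauss(0, 1.0) for p in parents for _ in range(4)]

best = max(generation, key=fitness)
print(best)  # should land near 42
```

Selection plus heritable variation is the whole mechanism, which is why "evolution" is hard to beat as a one-word description of it.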
While I agree that there is definitely too much anthropomorphizing surrounding AI, I feel you are going way too far in the opposite direction. Not every word that can be composed with natural process/humans should be banned from being used anywhere else.
Who cares? Why should you hold the idea that it would? They are systems moulded after data, and 'learn' seemed a decent label. If it is not, it is because "learning" is _active_, by philological analysis - and happily consistent with the aim of AGI (intelligent entities learn actively).
A computer does not compute like a human would. Yet, no problem.
For that matter, you are using 'person' very loosely - not even as "personal" (a "person" learns according to individual nature, while you are using it as a collective term).
As already expressed - nearby I wrote 'biomimicry' - what you are calling "anthropomorphizing" points in the wrong direction: "evolutionary algorithms" were born from observation of the natural world, and the terms express that. It is not that you saw the algorithm and went "it looks like my uncle Oscar"¹ (this side is active - it "learns").
(¹Those anthropomorphizing Hollywood cultists and all that sculpture...)
> a "person" learns according to individual nature, while you are using it as a collective term
There's a very specific definition for learn:
>> to gain knowledge or understanding of or skill in by study, instruction, or experience[0]
There are a few more, but none of the definitions treat "learn" in a non-collective way. I guess Merriam-Webster's dictionary doesn't like treating people as individuals or something, lol.
Additionally, all the definitions there speak in human contexts. They talk about learning in the sense of being taught, or gaining experience, or gaining knowledge. Sure, a computer kind of does this stuff, but not really. And that falls into the category of attributing human characteristics to an inanimate object.
I probably shouldn't have said that everything in the short list I wrote reeked of anthropomorphizing processes. But the evolutionary algorithm was more in line with what I mentioned immediately before. My whole comment read:
> I feel like the whole industry is inundated with aphorisms that are kind of true, but not wholly true. Evolutionary algorithms, neural networks, deep learning, DeepMind - this stuff all reeks of anthropomorphizing fundamentally mathematical processes.
An evolutionary algorithm definitely falls into the category of kind of true but not wholly true. But it's not anthropomorphic.
> intelligent entities learn actively
Also, this is a very loaded statement. What is an intelligent entity? If you Google "is a computer intelligent", you'll find various papers, articles, and other media claiming that we can't call a computer intelligent, and some claiming that we can consider certain algorithms somewhat intelligent. This is anything but an accepted standard today.
Give us an example of some relevant label that would be "«wholly true»" instead of "«just kind of true»". Because metaphors, and the whole system of language, are based on fuzzy pattern relations.
> none of the definitions treat learn in a non-collective way
You have misunderstood my post. I would prefer that you read it again.
You are complaining about loose use of the language: I noted that you yourself used the term 'person' more than loosely, with a dubious jump. When one says "«like a person learns»", that is supposed to mean "like a specific individual, with his own individual characteristics, will learn" - instead you used it to say "like people in general learn". A "person" is a "definite form", not a general individual representing common features - it is the opposite.
> very loaded statement
Which you are taking out of context. I said that you have to call the moulding of your functions something, and that "learn" seems a very acceptable term, since it is bottom-up instead of top-down, automated instead of encoded: it is developed against data, it "learns". And if the term is disliked, there could be a very good reason: 'learn' was born as a sort of hunting term¹ - it really means something like "investigate" - which is a happy coincidence, because what is largely missing in AI is critical thinking, part of the active process of learning ("learning" is active, as investigation is). And the day John has to check «accepted standard[s]» to see how things are, I will be willing to comply with his sad request for mercy.²
¹Regardless of what Merriam-Webster writes, because from a "dictionary of use" you get a none-the-wiser relative notion, not knowledge - just as at the entry for 'life' you will not find the meaning of life.
²John must be, tautologically, an "active learner". (He will check personally.)
You're right that journalists anthropomorphize much more. But AI researchers also have a long history of choosing terms that are anthropomorphizing or animating. Here the name PLATO - which evokes the image of an ancient philosopher, a human, by cultural tradition considered smart - is used in the original journal article.
Terms like "neural network" and "artificial intelligence" are frequently used by AI engineers and researchers despite the obvious image they evoke. Sometimes they even call their creations "brains". Also note the name DeepMind.
To add to that, it is often EDITORS who come up with the titles, for reasons beyond clarity, like using words that draw attention and fitting a specific space.
ML researchers don't write articles, journalists do.
Actual language used by the ML researchers: "Intuitive physics learning in a deep-learning model inspired by developmental psychology" [1]
[1]: https://www.nature.com/articles/s41562-022-01394-8