there's probably quite a lot we unconsciously pick up from others, even things we think are uniquely "ours"
For example, when I first heard a deaf person laughing or talking, I probably internally noticed how... different the sound was. I'm guessing most hearing-abled people have a similar experience. It's very... unfiltered? It made me wonder how much even my own laugh was sculpted by my environment. If I relax my voice, I notice my voice becoming much more booming and obnoxious than my "normal" speaking voice.
Anecdotally, I've noticed Japanese people are much more likely to have a sort of stifled, raspy, restrained laugh, even when they're in a situation I might expect an American to have a belly laugh.
A lot of cultural values are encoded in language too, and in turn, the languages we speak can affect how we think or interact. Anecdotally, my personality is a bit different depending on what language I speak. I think the concept of what's actually encoded in language is being explored with regards to how/why LLMs "feel" smarter than they really ought to, or seem to show intelligence beyond simple "stochastic parroting"
Somewhat more obviously, most people will "code switch" not only their language, but vocal tone or even demeanor, depending on their current "persona" or their audience. Recently, Paris Hilton entertainingly demonstrated this: https://www.youtube.com/shorts/g9pal1ConNU
And this is more of a half-baked personal speculation based on a scattering of theories and case studies, but the environments we live in, the narratives we expose ourselves to, the people we surround ourselves with all probably very heavily define a lot of our values, beliefs, and even personal preferences, to an extent that would disturb a lot of people. Self-serving biases, post-hoc justifications, and confabulations give us convenient and creative ways to validate our own free will, volition, and independence, but I often wonder how much of our supposedly great human intelligence is an uncomfortably thin veneer on a largely automatic pattern-absorbing sponge.
so many fails in such a short response. Assuming someone's profession based on almost nothing, general stereotyping, armchair mental diagnosis, insult based on 'diagnosis' that's needless and honestly irrelevant. Reminds me of the common backhanded insult on reddit, "you must be a blast at parties"
What kind of pilot would be taken seriously if they mix up such a basic aspect of professional knowledge? Jets and prop planes are very different beasts.
I can't think of any job where you can just casually mix up different classes of objects and not eventually have it result in some significant failure.
"Why did you feed the mules and not the horses?" "Don't be so autistic. I bet you're a blast a parties"
"Why did you give me cheeseburgers? The customer ordered plain burgers." "Don't be so autistic. I bet you're a blast at parties"
Yes, being specific about the things you work with is generally a sign that someone is good at their job, but it generally doesn't have much to do with autism. But I guess being condescending is what makes a good cocktail party companion?
While LLMs and people are questionable in their ability to one-shot answers to complex things, with an LLM you can at least ask questions ad nauseam all the way down the tree, ask for sources, ask it to be self-critical, think step-by-step, etc. From there, you'll at least be armed with more knowledge to ask better questions, whether to an LLM or a person. I think it's also a good exercise in figuring out how to break complex things down into smaller parts, and figuring out what questions to ask -- especially important if it's something where you barely know where to start.
Humans have a tendency to over-value their personal experience and cite their limited knowledge sets, beliefs, and intuitions as fact, and will probably tend to only show you the info that aligns with that.
I guess I'm biased, but for the most part, I don't think the error rates between people and LLMs are significant enough for me to want to deal with the human ego, versus an AI with infinite patience. There are certainly equally intelligent and gracious people, but I don't think they hang out much on Stack Exchange (or much of the popular internet, really)
>This raises the question of whether the emergence of the ability to produce coherent English text only occurs at larger scales (with hundreds of millions of parameters or more) and complex architectures (with many layers of global attention).
>In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.
The point of TinyStories isn't to serve as an example of a sophisticated model, but rather to show that the emergent ability of producing coherent language can happen at smaller scales, and from a synthetic data set, no less. TinyStories is essentially the language model equivalent of a young child, and it's producing coherent language -- it's not producing grammatically correct nonsense like the famous "colorless green ideas sleep furiously" phrase from Chomsky.
>but I haven't came across many synthetic datasets that are of high quality
I'm not really sure what your personal experience has to do with the viability of synthetic data; it's already been proven to be a useful resource. For example, Meta directly stated this upon the release of their Llama 3 model:
>We found that previous generations of Llama are good at identifying high-quality data, so we used Llama 2 to help build the text-quality classifiers that are powering Llama 3. We also leveraged synthetic data to train in areas such as coding, reasoning, and long context. For example, we used synthetic data to create longer documents to train on.
>Wheel of Fortune hostess Vanna White had established herself as a TV personality, and consequently appeared as a spokesperson for advertisers. Samsung produced a television commercial advertising its VCRs, showing a robot wearing a dress and with other similarities to White standing beside a Wheel of Fortune game board. Samsung, in their own internal documents, called this the "Vanna White ad". White sued Samsung for violations of California Civil Code section 3344, California common law right of publicity, and the federal Lanham Act. The United States District Court for the Southern District of California granted summary judgment against White on all counts, and White appealed.
>The Ninth Circuit reversed the District Court, finding that White had a cause of action based on the value of her image, and that Samsung had appropriated this image. Samsung's assertion that this was a parody was found to be unavailing, as the intent of the ad was not to make fun of White's characteristics, but to sell VCRs.
Maybe it depends on which court will handle the case, but OpenAI's core intent isn't parody, but rather to use someone's likeness as a way to make money.
Scarlett voiced Samantha, an AI in the movie "Her"
Considering the movie's 11 years old, it's surprisingly on-point with depictions of AI/human interactions, relations, and societal acceptance. It does get a bit speculative and imaginative at the end though...
But I imagine that movie did/does spark the imagination of many people, and I guess Sam just couldn't let it go.
It's not just that. Originally the AI voice in Her was played by someone else, but Spike Jonze felt strongly that the movie wasn't working and recast the part to Johansson. The movie immediately worked much better and became a sleeper hit. Johansson just has a much better fitting voice and higher skill in voice acting for this kind of role, to the extent that it maybe was a make/break choice for the movie. It isn't a surprise that after having created the exact tech from the movie, OpenAI wanted it to have the same success that Jonze had with his character.
It's funny that just seven days ago I was speculating that they deliberately picked someone whose voice is very close to Scarlett's and was told right here on HN, by someone who works in AI, that the Sky voice doesn't sound anything like Scarlett and it is just a generic female voice:
In economics there's a concept called "induced demand" that also applies to a lot of human behavior in general. Basically, give more resources and capacity to someone, and they'll just fill the space, and not necessarily more efficiently.
AI-assisted answer:
Busy roads, so we build more roads, right? Except no, it just makes more traffic.
Give more time for people to complete a project, and they'll still just end up with a crunch time at the end anyways. "Work expands to fill the time available."
People make more money, then they just spend more money. Buy a bigger house, and it gets filled with more stuff of dubious value (conversely, move into a smaller home and realize how much pointless junk you bought). "Lifestyle inflation" or "consumption smoothing".
Project behind schedule? Throw more money and people at it, right? But does it help much?
A lot of modern software is arguably suffering from major inefficiency bloat, both in file size and hardware requirements.
So it's probably not quite as "obvious" a solution to just build more power -- there have to be some incentives to encourage efficient usage instead of just throwing more resources at a problem, otherwise it encourages a long-term build-up of inefficiency.
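As a toy sketch of the induced-demand dynamic above (numbers purely illustrative, and assuming a demand elasticity of roughly 1.0 with respect to capacity, in line with the "fundamental law of road congestion" finding): each capacity expansion gets absorbed by latent demand, so utilization barely moves.

```python
def usage_after_expansion(old_usage, old_cap, new_cap, elasticity=1.0):
    # Latent demand responds to extra capacity: usage scales with the
    # capacity ratio raised to the demand elasticity.
    return old_usage * (new_cap / old_cap) ** elasticity

cap, usage = 100.0, 95.0  # a congested road, 95% utilized
for _ in range(3):  # three rounds of "just add lanes"
    new_cap = cap * 1.5
    usage = usage_after_expansion(usage, cap, new_cap)
    cap = new_cap
    print(f"capacity {cap:7.1f}  usage {usage:7.1f}  utilization {usage / cap:.0%}")
```

With elasticity at 1.0, utilization stays pinned at 95% no matter how many rounds of expansion you run; only with elasticity well below 1 does extra capacity actually relieve congestion.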
Here's the other side of that, which is just as counter-intuitive:
There's also the matter of the potential inefficiencies of a plant that produces way more than is actually being consumed, in which case it's a very expensive waste -- afaik most power plants can't just dial their output up/down to a large degree. Then there are environmental, social, and civic problems, which I guess are easier to bulldoze over in countries that might give less consideration to their citizens. I'm sure there are plenty of other considerations, which you can probably get a good critique on from your favorite AI service.
This assumes we've already achieved an optimal amount and cost for electricity, or can get there through efficiency gains developed faster than new generating capacity can be built. I'd like to see anyone argue that.
If electricity were more abundant and cheaper we could achieve some incredible things that would drastically improve everyone's lives.
If you're concerned about CO2 then cheap carbon free energy can pull it out of the air or recycle chemical compounds that can. Same for steel and concrete production which are big carbon producers, cheap electricity means no more coal burning to melt steel or produce cement. If it's cheap enough then the price of those commodities would drop making construction less expensive and stainless steel an even better replacement for many plastic products.
Concerned about agricultural pollution, land use, water use, or cost of food? Cheap power means cheap glass, aluminum, lighting, and heating, allowing for very large commercial greenhouses that waste near-zero water and fertilizer. They'd also massively increase yields by removing seasonal limits and supplementing daylight hours, drastically reduce the need for pesticides and herbicides, and could be built almost anywhere.
The largest cost of desalination is energy; drive that cost down, and anywhere with a coastline has an effectively unlimited supply of water for people, industry, and agriculture.
The list goes on. Even things that now seem excessively wasteful, like using embedded nichrome wire, hydronic pipes, or plain radiant heat to melt ice and snow on roads and sidewalks, could have huge benefits by reducing injuries and car accidents and generally improving quality of life for everyone in cold climates. Similarly, AC could be even more widely employed than it is now. No one would have to risk their health due to concerns about the electricity bill.
Everything uses electricity or heat produced by fossil fuels in some way. Manufacturing, obviously, but also construction materials like wood, which must be dried in kilns, or fresh food that is transported in refrigerated trailers and stored in climate-controlled warehouses and supermarket freezers. Everything would be less expensive, and that would mean everyone would be richer for it, both in cash and in the availability of those goods.
Life without or with too little electricity is miserable, cold, exhausting and dangerous. You might feel comfortable with what you have now but there are many people who do not have access to that comfort or even to the basics. We would all be better off with more.
>Occam's razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements
it's a general principle or _recommendation_, not some law of the universe. Some people might argue that "God wills it" is the simplest explanation for many things, but that doesn't mean it's true. The simplicity of an explanation doesn't necessarily have anything to do with its validity.
In addition, the introduction of panpsychism, just like the introduction of God into any argument, brings up a whole other set of questions that need to be answered -- additional complexity, which is the opposite of Occam's Razor.
Emergence out of complex systems is arguably the simpler explanation, because it's something that's already been observed, measured, and studied, like storms emerging from simpler principles of weather systems.
Or take your computer or smartphone -- do you truly understand all the mechanisms by which we go from "shocking rocks" to create series of on/off signals, to things like communicating on the internet, or watching videos? Is computing and mathematics some inherent property of silicon? Nearly every part of a computer, on some fundamental level, is a relatively simple mechanism with an almost useless function on its own. Even for engineers who understand every level of abstraction, it must still be near-miraculous that any of this works, even though these emergent properties are deliberately crafted, well-documented, and understood.
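To make that concrete, here's the classic textbook construction (a standard illustration, not anything specific to this thread) of a half-adder built out of nothing but NAND gates: each gate alone is trivially simple and nearly useless, yet composing a handful of them yields binary addition, a property present in none of the parts.

```python
def nand(a: int, b: int) -> int:
    # The entire "intelligence" of the base component: output 0 only
    # when both inputs are 1. On its own, almost useless.
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # XOR from four NANDs -- the standard construction.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a: int, b: int) -> int:
    # AND is just NAND followed by an inverter (NAND with itself).
    return nand(nand(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple:
    # Adds two bits; "addition" appears nowhere in any single gate.
    return xor(a, b), and_(a, b)  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]} carry={half_adder(a, b)[1]}")
```

Stack enough of these layers -- full adders, ALUs, instruction decoders -- and you get the deliberately engineered emergence the paragraph above describes.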
His ability to not only understand, but to also effectively communicate these concepts, I'd say makes him one of the smarter people out there. And yet, he remarks, "I don't know about you, but it really doesn't feel like this should actually work." There are still things people don't understand about why AI works the way it does, despite the fact that we built and trained them -- feel free to hit up Claude or your favorite resource for examples on emergent properties. LLMs can be passably apt at things they weren't trained for, and exhibit behaviors weirdly similar to people (like confabulation), despite the fact that their exposure to the world is literally only text.
I'm already imagining ways people could twist this into proof of panpsychism. But the point I'm getting to is that the human body is an absurdly, stupidly complex system of 37 trillion cells. The Milky Way is estimated to have 400 billion stars, at most. Like LLMs, we understand some things about our brains... but the complex interaction of many parts is less easy to understand. The purpose and value of feeling and awareness as a function of survival isn't a "tall order" -- it's just difficult for the human brain to grasp so many moving parts simultaneously. For some people, the complexity of the eyeball alone is proof that there must be a god -- the sheer magnitude of billions of years of brute force trial-and-error is difficult to comprehend.
Human intuition: a potentially powerful, but often error-prone, faculty of the human brain.
>Panpsychism requires that the universe updates its state by conscious choice, which we already know happens
[citation overdue]
I think there are at least two levels of logical fallacy here, not to mention avenues of undefined and fuzzy circular logic, but I've already spent too much time on this. I'd say try pasting that into Claude or another "big AI" and see what their critique is.
It isn't just about the simplicity of panpsychism, friend; it's about the big things we have to explain if we rule it out -- things we're so far from being able to explain as to make us look foolish despite all our supposed knowledge. We don't even have a clue how to explain them, despite building atom bombs and space flight and getting close to artificial intelligence. That to me speaks volumes.
Please explain to me how emergence could create new "dimensions" that didn't exist before. Every emergent system we've ever observed creates unexpected complexity __WITHIN THE CONFINES AND STRUCTURE OF THE SYSTEM__. What you're describing is like saying a flock of seagulls moved in such unity that they teleported to the other side of the earth -- it makes zero sense within the framework, and only by ejecting from the framework can you salvage the notion.
I don't perfectly understand all the steps from zero to smartphone, but I have had enough education to have a decent overview, and I can gain that knowledge if I seek it. What will you study to understand consciousness?
The "emergence" you're describing from LLM behavior is a jump in capabilities that occurs due to complexity, but the LLM is just getting better at what it does, it isn't magically developing the ability to levitate researchers due to emergence, which is what "dumb" matter becoming conscious would be like.
The whole "god in the universe" angle is overblown, the root of panpsychism really is this: We and the rest of the stuff in the universe can perceive, feels, has free will, and makes decisions.
I understand it's hard to let go of your humancentric fallacies. The history of science has been brave men having to fight the power to point out the ways in which humans aren't unique or the center of the universe. Particularly if you're Christian, the idea that everything the bible said about man being god's chosen is bullshit must be a bitter pill to swallow.
I think most custom fine-tunes and merges on HuggingFace will do this unless they specifically mention it being censored. Even the lower param models have been surprisingly good, with relatively fast progress being made in the 7b and 11b models.
My "daily driver" is Fimbulvetr v2 11b, surprisingly slapped together by an EMT. Kunoichi 7b seems to be a pretty popular model too. These can be run locally with as little as 8 GB free RAM (preferably VRAM) with an easy install solution like LMStudio or Faraday.
You can generally find a lot of recommendations in places like SillyTavernAI or LocalLLaMa on reddit: