People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.
I.e. the intelligence sits in the weights, and it may likewise sit in the synapses of our brains.
When we call machines simple mimicking entities, we pay no attention to whether we are also simple mimicking entities.
Most other assertions in this topic about what consciousness truly is are stated without evidence and are exceedingly anthropocentric: they set a higher and higher bar for anything that is not human, while offering no justification for what human intelligence really entails.
Is Wikipedia conscious? It's a system operating on a lot of data. Is Google search conscious? It knows everything. Very complicated algorithms. Surely at some scale Google search must become a real live boy? When does it wake up and by what mechanism does that happen?
The frontier models are more complex and operate on more data than Wikipedia, but they are less complex and operate on less data than Google search in its entirety.
And, I'm not anthropocentric at all. I think apes and dolphins and some birds and probably some other critters are conscious. I mean they have a sense of self, and others, they have wants and needs and make decisions based on them.
This is a case where the person making extraordinary claims needs to provide the extraordinary evidence. It's extraordinary to claim that matrix multiplication becomes conscious if only it's got enough numbers. How many numbers do you reckon? Is my phone a living thing because it can run Gemma E4B? It answers questions. It'll write you a poem if you ask. It certainly knows more than some humans. What size makes an LLM come alive?
"What explains the emergent abilities of generative pre-trained transformers at massive scale? Abilities that the smaller GPTs don't possess."
What "emergent" abilities do you mean? In my experience, smaller models behave exactly as I would expect a model with a lot less data and fewer connections between the data to behave. It is a difference of scale and not of kind when comparing Gemma 4 E2B (which runs on literally any modern computing device, including a CPU in a modest laptop or phone) to the current frontier models. Each step up adds more knowledge of how to do more things, plus more working memory and tool capability, but it does not look anything like a line being crossed into sentience, to me. They all still seem like machines. If you compare outputs across each step up in size and capability, which is something I've done, you'll see incremental improvements. You won't see a sudden spark where it becomes a different type of thing; it's just gradually getting more capable.
I think the memory features companies are sticking on these things are detrimental to mental health. They add to the illusion that something else is happening, beyond some equations being calculated with some randomness thrown in. But it's just the model querying the memory database (whatever form that takes) because it's been instructed to do so. The model doesn't want to know anything about who it's talking to. It's just following the system prompt. That doesn't make it your friend. Humans will see a face on the moon; that doesn't make the moon their friend, either.
> What explains the emergent abilities of generative pre-trained transformers at massive-scale?
I don't see why the abilities couldn't be an encoded modelling of enough of the world to produce those abilities. It seems like a simple enough explanation. Less data, less room to build a model of how things work. More data, sufficient room to build a model.
Conway's Game of Life is then not conscious in and of itself, because there's not enough in its encoded data to result in emergent behaviour beyond what we see.
If we expand it to also include a vast amount of data, such as a Turing machine running an LLM, then we can reasonably say we are closer to calling that configuration of it conscious.
It's not the firing-of-neurons mechanism and its relevant complexity or simplicity that make us conscious or not.
It's not the GoL algorithm that would make the machine conscious either.
It's the emergent behaviour of a sufficiently complex system.
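The "simple rules operating over data" point can be made concrete with a minimal Game of Life step in Python. This is a sketch, not anyone's reference implementation; the sparse-set representation of live cells is my own choice. The entire "architecture" is a few lines of rules, while all the interesting behaviour lives in the grid state those rules operate over.

```python
from itertools import product

def neighbours(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One Game of Life generation; `live` is a set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive. That's the whole rule set.
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A horizontal "blinker" oscillates between horizontal and vertical forms.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Everything beyond those two rules — oscillators, gliders, and in principle a Turing machine — is emergent behaviour of the data, not of the code.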
I personally think we'll need a few more feedback loops before you have more human-like intelligence. For example, a flock of LLM agent loops coming to consensus using short-term and long-term memory, and controlling realtime mechanical, visual and audio feedback systems, and potentially many other systems that don't mimic biological systems.
I also think people will still be debating this way beyond the singularity and never conceding special status to intelligence outside the animal kingdom or biological life.
It's quite a push for many people to even concede animals have intelligence.
For the extraordinary claims/evidence, it's also the case that almost any statement about what consciousness is in terms of biological intelligence is an extraordinary claim that goes beyond any evidence. All evidence comes from within the conscious experience of the individual themselves.
We can't know beyond our own senses whether perception exists outside of our own subjective experience. We cannot truly prove we are not a brain in a jar or a simulation. Anything beyond assertions about the present moment and the senses the individual experiences is a pure leap of faith, based on the persistent (or perceived-persistent) illusion of reality.
We really know nothing of our own consciousness, and it is by definition impossible to prove anything outside of it from inside the framework of consciousness.
If we can somehow find a means to break outside of the pure speculation bubble of thoughts and sensations and somehow prove what human experience is, then we may be in a position to make assertions about missing evidence for other forms of intelligence or experience.
But until then definitions of both human and artificial intelligence remain an exercise for the reader.
I don't think "Sitting in an office you sit in every day" or "Sitting in your living room" are the same amount of bandwidth/storage as "Travelling around the moon". I'm sure we have compression algorithms for this stuff and it's somewhat related to novelty.
I'm aware of an association between the perception of time and the number of photons received by the eyes.
These relate both to how much time events appear to take subjectively and to how well they are remembered, or how long they feel in retrospect. As in, there is an actual physiological explanation for "time flies when you're having fun".
There probably is something to also be said for attention too. Increased awareness and attention will undoubtedly use up more 'bandwidth' or 'storage' too.
I saw a fancy HTML table generator that had so many parameters and flags and bells and whistles that it took IIRC hundreds of lines of code to save writing a similar amount of HTML in a handful of different places.
Yes, the initial HTML looked similar in these few places, but the resulting usage of the abstraction did not.
But it took a very long time to read each place a table existed, and quite a bit longer to work out how to get the generator to emit the small amount of HTML you wanted for a new case.
Definitely would have opted for repetition in this particular scenario.
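For contrast, here is roughly the scale of helper that would have covered those few call sites. This is a hypothetical sketch (the function name and signature are made up, not the generator from the anecdote), but it illustrates the trade-off: a dozen transparent lines versus hundreds of lines of flags.

```python
from html import escape

def simple_table(rows, header=None):
    """Render rows (lists of cell strings) as a plain HTML table.

    Deliberately minimal: anything fancier is written as literal HTML
    at the call site, which is the repetition being argued for above.
    """
    parts = ["<table>"]
    if header:
        parts.append("<tr>" + "".join(f"<th>{escape(h)}</th>" for h in header) + "</tr>")
    for row in rows:
        parts.append("<tr>" + "".join(f"<td>{escape(c)}</td>" for c in row) + "</tr>")
    parts.append("</table>")
    return "\n".join(parts)
```

The point isn't that this helper is good; it's that once an abstraction needs more code and more reading time than the duplication it removes, the duplication was cheaper.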
That seems to me like it just shifts the problem one level. Why are Ks and Kikis spiky, and why are Bs and Boubas round? And why is it universal across people with different writing systems and languages?
I think in this example those prep work items _are_ doing the thing.
But then telling people about a new product could also be doing the thing.
There’s definitely something to be said for defining what the thing really is being an important part of doing it, but that can also spiral out of control into not doing the thing.
I think thingness is more of a variable property of the current thing you are doing than a binary is-or-isn't-the-thing.
All we can really do is regularly check how much the thingness of the current thing is aligned with the main thing’s thingness.
I'd say for the purposes of this article anything that is required in order to have done the thing is "doing the thing".
If you need to read something to get the thing done, you are doing the thing. If you already know everything to get started but still read another article, you are procrastinating. If you need to sand this part to do a good job painting it, then you are doing the thing. If you just continue sanding with no benefit, you are no longer doing the thing; you are now just delaying the next step.
Additionally, a lot of people will describe doing completely unrelated things as "(mentally) preparing to do the thing."
I catch myself doing this. I will put off writing a job requisition by spending time on code. I will tell myself, "ugh, I'm just not in the right mental state to write a job req right now. Let me focus on some code until I'm ready." Which never works. I end up getting into a code flow state and that's all I work on for the rest of the day, or until I get interrupted by a meeting.
And then I get back from the meeting and say, "I got interrupted, I should just finish what I started and then I'll write the job reqs." And that never happens. I always pick up yet another coding task instead.
The only way I am ever able to get through admin paperwork is to just admit to myself I hate it but it has to get done, it has to get done right now, no amount of procrastiworking is going to make me stop hating it, so I should just get it over with so it's not sitting like a lead weight in the back of my head. And then when 5pm rolls around, I won't hate myself for letting yet another day go by without having the job reqs written.
Things I do to deal with the mental state procrastination lie:
- Start things. If after actually trying the thing I am truly not in a conducive mental state for the activity I can quit. Mostly evidence for this bad mental state is repeated mistakes at things I can already do. I think starting also weakens procrastination habits because you know you’re going to experience the thing you’re avoiding anyway even if you end up quitting part way through.
- Focus on whether it is a bad mental state for the activity rather than “the right” mental state for the activity. Most mental states will be good enough for most tasks. You don’t need code flow to code, even if you want it and it helps. You just need to not be in a state where you can’t figure things out or you keep introducing bugs.
- Completely reject my feelings about doing the task. If you’re in those feelings the task is a lot harder and the procrastination lies a lot easier to believe. It doesn’t matter in the short term how you feel about tasks you have to do.
- Constantly question the veracity of procrastination's lies. "Is this true?" "If it is true, what can I do about it right now?"
- Reward myself after completing the task if I don’t get any kind of internal satisfaction naturally.
It's a rainbow if we skip coloring 1 and 5 and use grayscale for them instead.
Here's the question. If we can allow one color not to be "colorful" (chromatic), what pitch would that be? It's the tonic (pitch 1).
If we allow two such colors? 5 is a good candidate, it's present in almost all popular scales. (Locrian isn't very popular.)
The remaining 10 colors go in rainbow order by thirds, as you proposed.
So, using two grayscale colors, I've reduced the demand for a distinct-enough palette from 12 chromatic colors to 10 chromatic + 2 grayscale.
And 10, in my experience fighting with different screens and projectors, is about the limit for a palette that stays stable, distinguishable, nameable and memorable.
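A sketch of the scheme above in Python, treating the tonic and the fifth as semitone pitch classes 0 and 7 and giving the remaining 10 classes evenly spaced rainbow hues. The simple chromatic ordering of the hues is an assumption here; the "by thirds" ordering from the parent comment would just permute the same 10 hues.

```python
import colorsys

# Tonic (degree 1) and fifth (degree 5) as semitone pitch classes, in grayscale.
GRAYSCALE = {0: "#ffffff", 7: "#808080"}

def pitch_color(pc):
    """Map a pitch class 0-11 to a hex color: 2 grayscale + 10 rainbow hues."""
    if pc in GRAYSCALE:
        return GRAYSCALE[pc]
    chroma = [p for p in range(12) if p not in GRAYSCALE]
    hue = chroma.index(pc) / len(chroma)   # 10 evenly spaced hues
    r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 0.95)
    return "#%02x%02x%02x" % (int(r * 255), int(g * 255), int(b * 255))
```

With only 10 hues to spread around the color wheel, adjacent hues sit 36° apart instead of 30°, which is exactly the extra breathing room that makes the palette survive a bad projector.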
It's probably just aesthetics. Those colors are more commonly used in illustration and design, so they tend to get labeled. There might be some perception involved in there as well as it's easier for our eyes to pick apart the more pastel colors from each other than the darker colors from each other.
I would expect the more dense part to be the smaller gamut that can be made with paint, since we've been naming those colors for a lot longer than the larger gamut that can be made with a screen. The paint/print gamut looks kinda like the more dense parts of these scatter plots within the larger sRGB cube (though the paint gamut isn't entirely contained within sRGB).
I love the irony in the pitting of the US vs China, Iran & Russia, whilst talking about stoking division.
Don't corollaries to your comments also apply at a higher level globally, or is there something special about considering countries as a grouping vs political parties?
Surely they're all just games we play in our minds and people kind of arbitrarily just agree that countries most definitely exist and this is my in-group, whereas others are enemies.