Interesting! I've been thinking about how you'd best teach sentence patterns for a long time, happy someone is making progress on it.
I'm a bit too beginner to effectively use the app, sadly, but it's quite an interesting flow.
As an aside, a while ago, I sort of prototyped the reverse: Learn vocab to understand a natural language sentence, translate the sentence (https://infi.koljasam.com)
Thanks for your feedback! You can use it as a total beginner too: set your level to HSK 1 and use pinyin. The backend should still give you valuable feedback. I would start learning to form sentences right from the start; it's more important in Mandarin than in, e.g., European languages, imo.
Along the same lines: I say "hi, let's get started" and it picks up my setup; per the initialization protocol, it has the latest state.
This is the main hook in the AGENT.md file:
## 3. Initialization Protocol (Read This First)
Upon being activated in this directory, you MUST:
1. **Read `CURRENT.md`** to understand the current topic under study. The current topic's content and files are under its dedicated directory. **Ignore** all other topic directories.
2. **Read the topic-specific `README.md`**: (e.g., in `/mcp/`) to understand the specific mental models and technical constraints of the current subject.
3. **Read `PROGRESS.md`**: To understand exactly where the last session ended and what the "Current Frontier" is.
4. **Acknowledge State**: Start the session by briefly confirming the current learning milestone you've identified from the files.
Since the agent makes live notes inside PROGRESS.md, no matter where you left off, it is all taken care of every time you start afresh.
Mostly, the agent concludes the session once it gauges that a 15-20 minute sprint's worth of concepts has been covered. Try it. If you want, I can share the full notes, sans my personal info, through a gist.
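For context, the whole setup is just a few markdown files. Here's a hypothetical sketch of the layout and a PROGRESS.md entry (directory names, topic names, and note contents are made up for illustration, not the actual notes):

```
learning/
├── AGENT.md       # global instructions, incl. the initialization protocol above
├── CURRENT.md     # names the active topic (say, "mcp")
├── PROGRESS.md    # live session notes the agent appends to
├── mcp/
│   └── README.md  # mental models + constraints for this topic
└── old-topic/     # ignored while CURRENT.md points elsewhere

# PROGRESS.md (tail)
## Session N
- Covered: <two or three sprint-sized concepts>
- Current Frontier: <where the session stopped>
- Next: <first thing to pick up next session>
```

Because the agent re-reads CURRENT.md and PROGRESS.md at every start, the state lives entirely in the files rather than in any chat history.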
Instead of having that one god-author who has to keep maintaining everything, I think a better option may be to have the whole thing community-maintained. Which opens up the question: how do you open source structured data and its maintenance?
There was some overlap between the webring age and the early search age, but once search became entrenched and useful, webrings faded. Blogrolls survived a little longer, but search displaced them too.
Specifically, once search became the way you found the first page/site to begin with. Before search was the default, you found sites in a bunch of scattershot ways: advertisements, word of mouth, lists in books, or lists on websites that updated periodically (and that you had to have found/heard about in one of the other ways). Then you crawled out from there, because that was the only way to find things. You had to either know the URL or use a link. And not all of the links/sites in the webrings were good.
Once search got good enough, people found the initial site via search and instead of taking their time clicking through a webring which might at any point lead them somewhere dead or useless, it was quicker to go back to the search page to find something else.
Page access went from a chained-together web of back-and-forth links to a two-step process: search -> page.
Can you elaborate on what's in the app right now that goes beyond Zoom? It always takes some effort to get started with a new communication app, and even more to convince others to do so.
From the landing page, this seems very much...zoom.
Variations of this are a common talking point in the self-help world, and while it's a powerful antidote against "I'm sure some day this giant thing will suddenly be easy and I'll just do it", it's not a silver bullet. Here are some counter-considerations:
- Doing anything usually involves prep work. Want to take a step? First put on your shoes (literally or figuratively, depending). If your attempted habit is 70% prep, your brain will somewhat rightfully conclude "this is stupid" fairly quickly.
- "Just do X every day for [long time period]" has an inherent falsification problem: You aren't "allowed" to argue against it until you tried it. Stopped after 2 years because you saw no change (and 5 was recommended)? You are still not allowed to argue against the strategy!
- You can actually make steps so small that they're useless. I once set out to have (at least) one github commit online per day (going for that green tile!). This led to my brain finding hacks like rephrasing one sentence of an old blog post. Doing that for 20 days is way less effective than one single coding session, at 20 times the emotional cost.
- Doing something daily for a long time is extremely hard to achieve, especially if it's not the main thing you're doing. It's rare in the wild. You will find piano virtuosos who play piano daily, but not piano virtuosos who also go to the gym daily.
> Doing anything usually involves prep work. Want to take a step? First put on your shoes (literally or figuratively, depending). If your attempted habit is 70% prep, your brain will somewhat rightfully conclude "this is stupid" fairly quickly.
Note that this is also something that can be weaponized. Recently I've been learning to draw, and I found I kept having great difficulty just starting. To get over that, I made an agreement with myself that at least once every two days, I would grab a pencil and page through my sketchbook. I'd find myself on the first blank page, holding a pencil.
Turns out your brain thinking prep work without actual work is stupid really helps here. Once you've tricked yourself into doing the prep work, you might as well do the work-work.
e.g. for distance running: just make the deal with yourself that putting on your running clothes/shoes/etc. and taking one step outside counts as having run that day. You'll find yourself going for a run anyway once you get outside, because you might as well.
> "Just do X every day for [long time period]" has an inherent falsification problem
Very true, but unfortunately a lot of things worth doing require that sort of investment. When learning to draw, I hated every single second for the first two months or so. And then, like a switch getting flipped, I started having fun.
> You can actually make steps so small that they're useless.
You should take the biggest steps you can actually keep yourself to. Maybe that leads to steps that are sub-optimally small, but taking useless steps is still doing more than taking no steps.
> Doing something daily for a long time is extremely hard to achieve
Oh for real, especially once you factor in force majeure. Hence why I went with "draw at least once every two days". That gives you wiggle room to plan around life events.
Turns out building habits is incredibly hard, and no amount of seeking advice will do it for you. It's a slog and you gotta overcome that yourself one way or another.
I love that in these discussions every piece of art is always high art and some comment on the human condition, never just grunt-work filler, or some crappy display ad.
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
Exactly! Europa Universalis is a work of art, and I couldn't care less if the horse that you can get as one of your rulers is AI-gen or not. The art is in the fact that you can get a horse as your ruler.
I agree, computer graphics and art were sloppified, copied, and corporate way before AI, so pulling a Casablanca ("I'm shocked, shocked to find that AI is going on in here!") is just hypocritical and quite annoying.
That's a fun framing. Let me try using it to define art.
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
Are you telling me that, for example, rock texture used in a wall is "asking questions about what it means to be human"?
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
> Art eludes definition while asking questions about what it means to be human.
All art? Those CDs full of clip art from the '90s? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
Is anyone else detecting a phase shift in LLM criticism?
Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hate for AI gen content is now hitting mainstream contexts, normie contexts. Feels like my grandma may soon have an opinion on this.
No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating
You’re reading it wrong: rather, AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
What you derogatorily call normies are the rest of the world caring about their business until one day some tech wiz came around to say “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
> AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
There's been a whole lot of that targeting normie contexts too, for quite a long time. In fact, the hate in normie contexts is a direct response to it, because the hype there is a lot of particularly clumsy grifting plus the nontechnical PR of the big AI vendors (categories that overlap quite a bit, especially in Sam Altman's case). And the hate in normie contexts shows basically zero understanding of what AI even is beyond what could be gleaned from that hype, plus some critical pieces on broad (e.g., total water and energy use, RAM prices) and localized (e.g., fossil fuel power plants in poor neighborhoods directly tied to data center demand) economic and environmental impacts.
> What you derogatorily call normies
I am not using “normie” derogatorily, I am using it to contrast to tech contexts.
The most typical reactions I see outside of techie and arty spaces where people are most polarised about it are:
- Annoyance at stupid AI features being pushed on them
- Playing around with them like a toy (especially image generation)
- Using them for work (usually writing tasks), with effectiveness ranging from pretty helpful to actively harmful depending on how much of a clue they have in the first place
Discussion or angst about the morality of training or threats to jobs doesn't really enter much into it. I think this apathy is also reflected in how this has not seemingly affected the sales of this game at all in the months that it has been reported on in the video game press. I also think this is informed by how most people using them can fairly plainly see they aren't really a complete replacement for what they actually do.
They don't use "normie" derogatorily, they just use it as a proxy for "non-tech people".
> “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
Nope, most are just annoyed by AI slop bombarding them at every corner, AI scams making the news for claiming another poor grandma, and the AI tech industry making things expensive. Most people's jobs are not under direct threat right now, unless you work in tech or art.
LLMs have had a couple of years by now to show their usefulness, and while hype can drive things for a while, it's now getting to the point where hype alone can't. They need to provide a tangible result for people.
If that tangible result doesn't occur, then people will begin to criticize everything. Rightfully so.
I.e., the future of LLMs is now wobbly. That doesn't necessarily mean a phase shift in opinion, but wobbly is a prerequisite for a phase shift.
(Personal opinion at the moment: LLMs need a couple of miracles in the same vein as the discovery/invention of transformers. Otherwise, they won't be able to break through the current fault barrier, which sits too low at the moment for anything useful.)
It is fascinating. It's showing of course that AI has gone mainstream.
There was a time that I remember when you could gripe at a party about banner ads showing up on the internet and have a lot of blank stares. Or ask someone for their email address and get a quizzical look.
I pointed my dad to ChatGPT a few days ago and instructed him on how to upload/create an AI image. He was delighted to later show me his AI "American Gothic" version of a photo of him and his current wife. This was all new to him.
I think the pushback is going to be short-lived, though, the way other pushbacks were short-lived. (Self-checkout kiosks in grocery stores were initially a hard sell, as an example.)
How many American Gothic AI fake photos do you think he'll make? Sounds like a novelty experience to me. I also loved my first day in Apple's Vision Pro. It was mind-blowing. On the 4th day I returned it. Novelty wears off, no matter how cool it might seem initially.
Oh, not disagreeing with you. A strange thing has happened in the past where what was novel also became commonplace. Not in all cases, of course (and I personally also believe VR is one of those things that will never become commonplace).
Just like feminism when it was starting: back then, millions of women believed it was silly for them to vote, and those who believed otherwise had to get loud to win more to their side. That's one example; similar things have happened with hundreds of other things we now take for granted, so novelty's value as a measure of judgment is very low on its own.
We've observed this with AI-gen ads (or "creatives", as ad people call them).
They work really well, EXCEPT if there is a comment option next to the ad - if people see others calling the art "AI crap", the click rate drops drastically :)
If I was vegan and found out after the fact that a meal I enjoyed contained animal products, that doesn't mean I'm some hypocrite for consuming it at the time. Whether or not I enjoyed it at the time, it still breaches an ethical standard I have, so abstaining from it from then on would be the expected outcome.
The same works the other way, and actually a lot better IMO.
Let's imagine a scenario with two identical restaurants with the exact same quality of food.
One sells their dish as a fully vegan option, but doesn't tell the customers.
Hardline "oorah, meat only for me" dude walks in and eats the dish, loves it.
If he goes to the other restaurant and is told beforehand that "sir, this dish is fully vegan" - do you think he'd enjoy it as much?
Prejudices steer people's opinions, a lot. Just like people stop enjoying movies and games due to some weird online witch-hunt that might later on turn out to be either a complete willful misunderstanding of the whole premise (Ghost in the Shell) or a targeted hate campaign (Marvels and many many other movies starring a prominent feminist woman).
I think that's a hint that people already dislike AI ads on principle but it's good enough now to fool them, and the comment section provides transparency.
Just as they were told to like them in the first place. A lot of this is driven that way because most of the public only has a surface-level understanding of the issues.
Look at how easy it is to make the argument in the other direction:
> People were told by large companies to like LLMs and so they did, then told other people themselves.
Those add nothing to the discussion. Treat others like human beings. Every other person on the planet has an inner life as rich as yours and the same ability to think for themselves (and inability to perceive their own bias) that you do.
LLM hate for use in art has been pretty mainstream from the start. The difference in criticism between use in code generation and use in art generation is palpable. I don't think anyone took kindly to the discourse of movie producers buying actor likeness rights and having perpetually young versions of old actors for all future movies.
Programmers criticized the code output. Artists and art enjoyers criticized cutting out the artist.
It's the usual "I don't like it, I'm against it, but it's okay if I use it" thing. People understand the advantage it gives one person over another, so they will still use it here and there. You'll have some people who are vehemently against it, but it will be the same as people who are categorically against having smartphones, or who avoid using any Meta products because of tracking, etc.
It's because the amount of AI slop bombarding people from every side increased and created a knee-jerk reaction to anything AI, even when it's actually of the "remove the boring part of work" kind.
The issue with "removing the boring part of work" is that which part of the work is "boring" is subjective. There are going to be plenty of people who don't think that what they do is the "boring stuff that should be automated away." Whether this is genuine enjoyment of what they do or just an attempt to protect their career, both are valid feelings to have.
The art bubble is generally considered more "normie" than the tech bubble, and they've been strongly anti-AI-art since before even the introduction of the original GitHub Copilot.
It feels like a similar trend to the one that NFTs followed: huge initial hype, stoked up by tech bros and swallowed by a general public lacking a deep understanding, tempered over time as that public learns more of the problematic aspects that detractors publicise.
I don't feel NFTs ever really had much interest among the general public - average reaction just being "I don't get it, that sounds pointless".
Whereas AI seemed to have a pretty good run for around a decade, with lots of positive press around breakthroughs and genuine interest if you showed someone AI Dungeon, DALL-E 2, etc., before it split into a polarized topic.
NFTs have far fewer downsides than LLMs and GenAI, since the main downside was just wasted electricity. I didn't have to worry about someone cloning my voice and begging my mom on the phone for money.
If you look at daytime TV in the UK, there are a lot of ads targeting the elderly talking about funeral cover and life assurance and so on.
I for one cannot wait for a future where grandparents get targeted ads showing their grandchildren, urging them to buy some product or service so their loved ones have something to remember them by...
If you have two modes of spending your time, one being work that you only do because you are paid for it, and the other being feeding into an addiction, the conversations you should be having are not about where to use AI.
Happy to talk and exchange notes, if you want :)