
I’ve been building a small ‘agent’ using Copilot at work, partly as a learning exercise and partly to test it in a small use case.

My personal opinion is that AI and agents are being misrepresented… The amount of setup, guidance and testing required to create a smarter version of a form is insane.

At the moment my small test is:

- Compressed instructions (to fit within the 8k limit)
- 9 different types of policies to guide the agent (JSON)
- 3 actual documents outlining domain knowledge (JSON)
- 8 Topics (hint harvesting, guard rails, and the pieces of information prepared as adaptive cards for the user)
- 3 Tools (to allow for connectors)
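
To give a feel for the shape of it, here's the kind of sanity check I run over those JSON assets before wiring them into the agent. The folder layout and required keys are my own convention, not anything Copilot Studio mandates:

    import json
    from pathlib import Path

    # Hypothetical layout: policies/ and knowledge/ hold the JSON files listed above.
    EXPECTED = {"policies": 9, "knowledge": 3}
    REQUIRED_KEYS = {"name", "description", "rules"}  # invented schema, not Copilot's

    for folder, count in EXPECTED.items():
        files = sorted(Path(folder).glob("*.json"))
        assert len(files) == count, f"{folder}: expected {count} files, found {len(files)}"
        for f in files:
            doc = json.loads(f.read_text(encoding="utf-8"))
            missing = REQUIRED_KEYS - doc.keys()
            assert not missing, f"{f.name} is missing keys: {missing}"

    print("All agent JSON assets look structurally sound.")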

The whole thing is as robust as I can make it but it still feels like a house of cards and I expect some random hiccup will cause a failure.


I wonder if that's why she had caught it so easily; not many people visit the UK for its sunny climate.


The UK is sometimes warmer in winter than other European countries further south because of the Gulf Stream.


I hope this assertion ages well.



Yeah, but the water temperature at this time of year is still pretty cold.


I've been tinkering with some models and I'm currently progressing through a few personal projects with Gemma in Antigravity. I'm not an engineer, but I have a very good technical understanding and I'm competent enough to build something by myself.

I've been going through my personal projects feature by feature. So far I've had good success, and as I'm doing it step by step I'm checking what's being created. 90% of the time it's correct, and when bugs occur I can work through them, identify the issue, and then explain it to the agent to fix.

I don't think you could ever just set an agent off to create something by itself, unless you have a very detailed, comprehensive technical document for it to follow, outlining the big picture and every detail within. Even then I think the context window wouldn't be enough and it would start tripping up.

The projects I've tried to date:

- A Love2D game (success)
- Buildroot Linux for an SBC with the above game embedded (success, but with several issues related to the framebuffer, other drivers, etc. Fixing this took about an hour of my time and burnt through all of the available thinking-model tokens in two sessions.)
- A few offline web projects (ongoing, success when going feature by feature)
- A microcontroller project (ongoing)


With a web-based project you'll need to know how to set up a server and all of that jazz too; I don't know if an agent can help you with this.


Coding agents can absolutely help you with:

- Setting up an Ubuntu server
- Configuring everything a web server needs
- Deploying to the server through GitHub or other platforms
- Maintaining the server, improving security, and so on
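
For example, the deploy step an agent writes for you often boils down to something like this sketch (the app directory and service name are placeholders, not a prescription):

    import subprocess

    APP_DIR = "/srv/myapp"     # hypothetical checkout of your GitHub repo
    SERVICE = "myapp.service"  # hypothetical systemd unit for the web app

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # stop on the first failing step

    run("git", "-C", APP_DIR, "pull", "--ff-only")  # fetch the latest code
    run("sudo", "systemctl", "restart", SERVICE)    # restart the web service
    run("systemctl", "is-active", SERVICE)          # confirm it came back up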

Coding agents are amazing for mature, well-understood technologies.

BUT BUT BUT, if you have zero understanding of web technology, there’s still a chance you’ll fall into a “forever failure” loop.

It’s a matter of probability.


Very nice. I had a quick look at the data source and I wonder if the data since 2020 is more sensitive / of better quality? There's a clear trend of the oceans getting warmer, but recently it seems like more and more heat is being retained.

"CRW's first-generation global monitoring products were operational at NOAA until April 30, 2020, when they were officially retired, and succeeded by CRW's next-generation operational daily monitoring products."


As someone else said, the temperature of the oceans rose significantly more after the low-sulphur regulation went into effect. See https://www.imo.org/en/mediacentre/hottopics/pages/sulphur-2... for the regulation.


It does. I've been tinkering with Copilot Studio agents and you can hit an 8k character limit quickly. By taking your instructions and asking Copilot to compress the information down, while ensuring it is still human readable, you can cut them back to about 5k characters. The information is denser but functionally the same, and the agent is just as consistent as before.
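
A trivial check I keep around while iterating. The 8k figure is the limit I keep hitting in Copilot Studio; the file name is just an example:

    from pathlib import Path

    LIMIT = 8_000   # Copilot Studio instruction character limit
    TARGET = 5_000  # roughly where the compressed version lands

    text = Path("instructions.txt").read_text(encoding="utf-8")  # example file
    n = len(text)
    print(f"{n} characters ({n / LIMIT:.0%} of the {LIMIT} limit)")
    if n > LIMIT:
        print(f"Over budget: ask Copilot to compress toward ~{TARGET} characters.")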


You can guarantee that this is a data-protection and anti-scraping measure. I used to work with some recruiters, and there are several third-party recruiter tools that scrape LinkedIn data into the third party's database under the guise of supporting recruiters. I would rather this than have my LinkedIn data siphoned off to other parts of the internet for god knows what purpose.


So if the exact same action is done while paying a fee to LinkedIn, it's then OK?


LinkedIn is in a different category than your standard website or social site due to the amount of PII. I'm not saying it's right to scan your environment, but as someone with a LinkedIn account I would rather they tried to protect my data than be lazy about it. You have more to worry about from Adobe's online tracking than from LinkedIn checking your installed extensions for scrapers.


Anthropic launched in the UK recently (Feb, I think) so I expect it’s a consequence of that.


Yes, it could be possible.


This has to be a joke. April fool.


I’ve been saying the same thing to people I work with for the past few months. When everything is labelled as Copilot it creates such confused ideas when someone says they have created something with Copilot… or created a Copilot agent. It always invokes a game of 20 questions to work out what was actually created, and with which ‘version’ of Copilot.

MS really needs to distinguish between them all.


Yes. Most people have close to zero understanding of tech, and are instead very heavily influenced by what they hear and see online, on social media, etc. They won’t think too deeply about the details and therefore will make false assumptions.

