The dual-watermark theory makes a lot of sense for defensive engineering. You always assume your outer layer will be broken, so you keep a second layer that isn't publicly testable. Same as defence in depth anywhere else. I'm curious - as new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that?
> I'm curious - as new models are being built constantly and they're naturally non-deterministic, do you think it's possible for end users to prove that?
How is the model relevant? The models are proprietary and you never see any of its outputs that haven't been watermarked.
In app builders using LLMs you would expect proper prompt-injection defences to be in place - but surprise surprise, that's usually not the case. AI tools tend to ship fast and security is always an afterthought.
I see this pattern constantly in my day job (I work in cyber for a FTSE 100 bank). I keep seeing tools that prioritise developer experience over actual input validation, and teams that then act surprised when someone exploits it.
I've also been building a drop-in solution for this exact issue outside of work. Happy to see this stuff (in the best way possible) as it's affirmation that what I'm doing is valuable.
This is interesting timing. I'm currently using EFS with Fargate for persistent storage and the NFS performance has been the biggest pain point - WAL mode SQLite on EFS works but deploys cause downtime because you can't run two containers writing to the same database file simultaneously.
Curious whether S3 mount points handle concurrent access any better than EFS or if it's the same underlying constraint. S3's consistency model improved massively with strong read-after-write consistency a few years back, but object storage still doesn't give you the file-locking semantics SQLite relies on.
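For context, here's roughly what my connection setup looks like - a minimal sketch (the path is made up; in my deploy it points at the EFS mount). WAL lets readers proceed alongside a single writer, but it still serialises writers, which is exactly why two containers writing the same file conflict during deploys:

```python
import os
import sqlite3
import tempfile

# Hypothetical location - in production this would be a path on the EFS mount.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

conn = sqlite3.connect(db_path, timeout=5.0)

# WAL mode: readers don't block the (single) writer, but there is
# still only ever one writer at a time across all processes.
conn.execute("PRAGMA journal_mode=WAL;")

# Wait up to 5s for a lock instead of failing immediately with
# "database is locked" when the other container holds the write lock.
conn.execute("PRAGMA busy_timeout=5000;")

conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES ('status', 'ok')")
conn.commit()
```

The `busy_timeout` softens the overlap window during a rolling deploy, but it doesn't remove the fundamental single-writer constraint.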
S3 if you're already in the AWS ecosystem. R2 if egress costs are killing you.
I run multi-region on AWS and S3 is deeply integrated with everything - IAM, CloudFront, ECS, Lambda. Switching to R2 would save on egress but I'd lose the tight integration with the rest of my stack. That tradeoff isn't worth it unless bandwidth is a significant line item.
R2's zero egress pricing is genuinely compelling for anything serving large files publicly - media, assets, user uploads. If your use case is "store stuff and serve it to users," R2 wins on cost. If your use case is "store stuff and process it with other AWS services," S3 wins on friction.
Yes! The "Paid plan only" rule is officially old news for Durable Objects - Cloudflare added free-tier support last year.
You can now use Durable Objects on the Workers Free plan, provided you use the SQLite storage backend. The legacy Key-Value backend still requires the $5/mo Paid plan.
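For anyone looking for the incantation, a minimal wrangler.toml sketch (the class and binding names here are made up) - the key bit is opting the class into the SQLite backend via `new_sqlite_classes` in the migration, which is what makes it eligible for the Free plan:

```toml
[[durable_objects.bindings]]
name = "COUNTER"          # how the Worker references it via env.COUNTER
class_name = "Counter"    # your exported Durable Object class

[[migrations]]
tag = "v1"
new_sqlite_classes = ["Counter"]  # SQLite backend = Free-plan eligible
```

If you'd used `new_classes` instead, you'd get the legacy key-value backend and be back on the Paid plan requirement.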