It could be an API endpoint on Anthropic's servers, the same way Let's Encrypt performs its challenge validation from its own servers. If you can't control the DNS records, you can't verify via DNS, no matter what you tell the local `certbot`.
AI being what it is, at this point you might be able to ask the model for a token, publish it on a page under `/.well-known/` as requested, and let the model fetch it; that might actually just work without being officially built in.
I suggest that because I know for sure the models can hit the web; I don't know whether they can look up DNS TXT records, as I've never tried. If they can, that might also just work, right now.
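For concreteness, here is a minimal sketch of that HTTP-01-style flow. All names are my own invention (there is no official Anthropic verification endpoint that I know of): the verifier issues a random token, the site owner publishes it under `/.well-known/`, and the verifier fetches it back and compares.

```python
import secrets

def issue_token() -> str:
    # Random, unguessable token the site must serve back verbatim.
    return secrets.token_urlsafe(32)

def verify(expected: str, fetch) -> bool:
    # `fetch` stands in for an HTTPS GET of a hypothetical path like
    # https://example.com/.well-known/ai-verification, performed from the
    # verifier's own servers (not the client's machine, for the MITM reason
    # discussed below).
    served = fetch()
    # Constant-time compare, as for any secret.
    return secrets.compare_digest(expected, served)

token = issue_token()
# Stub "fetch" returning what the site serves; a real verifier would use an
# actual HTTP client here.
assert verify(token, lambda: token)
assert not verify(token, lambda: "wrong-token")
```

The DNS-01 variant is the same shape, except the token is read out of a TXT record instead of a URL.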
A smart AI would realise that I can MITM its web access such that it sees a .well-known token that isn't actually there. I assume the model doesn't have CA certificates embedded in it, and relies on its harness for that.
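That's the crux: the CA bundle and certificate checks live in the HTTP stack the harness uses, not in the model weights. Using Python's `ssl` module as a stand-in (I don't know what the real harness uses), the defaults enforce validation, and whoever controls the harness can simply switch it off before the model ever sees the bytes:

```python
import ssl

# Default client context: chain validation and hostname checking both on.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# A hostile harness can disable both (illustrative only; the model has no
# way to detect this from inside the conversation).
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```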
In this context we are talking explicitly about cloud-hosted AIs. If you control it locally you have a lot of options to force it to do things.
MITM'ing the cloud AI on the modern internet is non-trivial, and probably harder and less reliable than just talking your way around the guardrails anyhow.
> In this context we are talking explicitly about cloud-hosted AIs.
Looking upthread, we seem to be talking about Claude. Claude is cloud-hosted inference, but the harness is local if you're using Claude Code, and it can be MITM'd there.
I think even Claude Web can run arbitrary Linux commands at this point.
I tried using it to answer some questions about a book, but the indexer broke. It figured out what file type the RAG database was and grepped it for me.