Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
We Are Still Unable to Secure LLMs from Malicious Inputs
(
schneier.com
)
5 points
by
danaris
8 months ago
|
hide
|
past
|
favorite
|
1 comment
m-hodges
8 months ago
|
next
[–]
I think¹ we will always be unable to secure LLMs from malicious inputs unless we drastically limit the types of inputs LLMs can work with.
¹
https://matthodges.com/posts/2025-08-26-music-to-break-model...
Consider applying for YC's Summer 2026 batch! Applications are open till May 4
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search:
¹ https://matthodges.com/posts/2025-08-26-music-to-break-model...