You either
- Expose the entire language, and then there's not much you can do to prevent something harmful.
- Expose a subset of the language, perhaps by parsing for harmful patterns or by somehow making potentially harmful functions unreachable.
- Expose a more restricted language (a scripting language like Lua) that you control and whose full capabilities you know.
The only difference I can think of is that it's easier to write flexible and dynamically modifiable software in Lisp than in most mainstream languages.
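The middle option above (parsing for harmful patterns) can be sketched in Python with the standard `ast` module. The whitelist below is a hypothetical minimal policy that permits arithmetic only; a real sandbox would need a much more careful audit:

```python
import ast

# Hypothetical minimal whitelist: only arithmetic on literals is allowed.
SAFE_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
              ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def safe_eval(source):
    """Evaluate an arithmetic expression after rejecting harmful patterns."""
    tree = ast.parse(source, mode="eval")
    # Walk every node; anything outside the whitelist is refused up front.
    for node in ast.walk(tree):
        if not isinstance(node, SAFE_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<sandbox>", "eval"))

print(safe_eval("2 * (3 + 4)"))   # prints 14
# safe_eval("__import__('os')")   # raises ValueError: a Call node is not whitelisted
```

The fragility of this approach is exactly the point of the list above: the safety of option two depends on the whitelist anticipating every harmful pattern, whereas option three makes harmful operations inexpressible in the first place.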
All access to anything important (network, files, other commands, etc.) has to happen through opaque handles called capabilities. Think of them as object references that you can't forge.
When you call code, you pass it a set of capabilities that are available. That is all it can ever access.
But the whole language and environment needs to be defined from the bottom up to enable this.
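A minimal sketch of that capability-passing idea, with hypothetical names (`ReadCap`, `untrusted_report`). Python can't make handles truly unforgeable, which is why, as noted above, the language and environment have to be built for this from the bottom up:

```python
class ReadCap:
    """Opaque read-only handle to one file; the path is kept private."""
    def __init__(self, path):
        self._path = path  # no public accessor: holders can only call read()

    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_report(caps):
    # This code can only use the capabilities it was handed.
    # No ambient authority: no open(), no sockets, nothing else is reachable
    # (in a real capability system the language would enforce that).
    return sum(len(cap.read()) for cap in caps)

# The caller decides exactly which capabilities to grant:
# total = untrusted_report([ReadCap("/tmp/log.txt")])
```

The key property is that authority flows only through explicit argument passing: if the caller doesn't hand over a capability for a resource, the callee has no way to name that resource at all.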
The end result has been years of practical research going into an actually safe, actually sandboxed, actually secure cross-platform sandbox: https://developer.chrome.com/native-client