Sandboxing evaluated JavaScript code
I would like to run "somewhat untrusted" JavaScript code in a PythonMonkey sandbox with restrictions on what it is allowed to do. For example, the code should not be able to:
- Access the `this.python` object
- Make network requests via `XMLHttpRequest` or `fetch`
- Do anything with the `bootstrap` object, such as making local file requests
Is this something that could possibly be done? I assume such options would go in the `evalOpts` argument of `eval` (documentation of the `evalOpts` argument would be helpful to have).
Currently I am using this crude approach, which just deletes the objects from `globalThis`, but I do not know whether it is secure in any way:

```python
import pythonmonkey as pm

for key in ['python', 'bootstrap', 'pmEval', 'XMLHttpRequestEventTarget',
            'XMLHttpRequestUpload', 'XMLHttpRequest']:
    del pm.globalThis[key]
```
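A slightly more defensive variant of the same idea is sketched below; it also checks from inside the engine that the bindings are actually gone. This is a minimal sketch assuming only `pm.eval` and `pm.globalThis`; the `BLOCKLIST` name and the `typeof` verification are illustrative, not a PythonMonkey API.

```python
import pythonmonkey as pm

# Illustrative blocklist; extend it with anything else you treat as a capability.
BLOCKLIST = [
    'python', 'bootstrap', 'pmEval',
    'XMLHttpRequestEventTarget', 'XMLHttpRequestUpload', 'XMLHttpRequest',
]

for key in BLOCKLIST:
    del pm.globalThis[key]

# Verify from inside the engine that each binding is really unreachable.
# typeof tolerates unresolved names, so this never throws a ReferenceError.
for key in BLOCKLIST:
    assert pm.eval(f"typeof {key}") == "undefined", f"{key} is still reachable"
```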
Hi @mashdragon, this is a really cool feature request, but it's probably outside the scope of PythonMonkey.
Interestingly, our main product at Distributive is a distributed compute platform that uses edge computing to execute JavaScript / WebAssembly in parallel. Anyone can contribute their compute to our network by running a worker (https://dcp.work/).
In order to do this, we have to evaluate JavaScript code within a sandbox (https://github.com/Distributed-Compute-Labs/dcp-client/tree/release/libexec/sandbox); you might find that code interesting or relevant.
This might be solved by https://github.com/Distributive-Network/PythonMonkey/issues/208.
I won't stake my life on it, but removing everything from the global object is PROBABLY okay for what you're after, as that is how we "give" capabilities to JS in the first place.
Where this might prove vulnerable is in attacks on the Python engine itself, reached either by walking the prototype chain of supplied methods (e.g. `setTimeout`, `console.log`) or through Python type wrappers; a hypothetical sketch of the prototype-chain route follows below. Securing Python is completely out of scope for us.
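To make the prototype-chain route concrete, here is a hypothetical probe (illustrative only, not a known PythonMonkey exploit; the `probe` name and payload are mine): any function handed into the sandbox carries a prototype chain that leads back to the `Function` constructor, which can compile new code in the same realm even after the named globals are deleted.

```python
import pythonmonkey as pm

# Hypothetical probe: a callback supplied to sandboxed code still exposes
# the Function constructor through its own prototype chain.
probe = pm.eval("""
(function (suppliedFn) {
  const F = suppliedFn.constructor;        // walks up to Function
  return F('return typeof globalThis')();  // compiles and runs fresh code
})
""")

# No named global was needed to get here, only a reference we handed over.
print(probe(lambda: None))  # expected to print "object"
```

The takeaway is that deleting globals removes names, not reachability: anything the untrusted code can still obtain a reference to, it can walk.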