Session watcher should not rely on pid
Is your feature request related to a problem? Please describe.
I use an HPC compute cluster (Linux) and have the vscode-server running on the "head" node, while the R process runs on a "compute" node. These are actually different computers that access the same filesystem, but they have separate process tables, i.e., the pid of a process on the compute node can't be monitored from the head node. This cluster setup is fairly common in the research community, so this issue doesn't just affect me.
Describe the solution you'd like
As the nodes on the cluster use the same filesystem, the current temp file solution should work, but there needs to be some mechanism other than the pid to monitor the status of the process, i.e., the session should not be cleaned up just because the pid isn't visible to process.kill(pid, 0).
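For illustration only, here is a minimal sketch (in R, not part of vscode-R) of one pid-free liveness signal that would survive separate process tables: the R session periodically touches a heartbeat file on the shared filesystem, and a watcher could treat a stale modification time, rather than a dead pid, as "session gone". The file path, the interval, and the use of the later package are all assumptions for the example.

```r
# Hypothetical heartbeat: touch a file on the shared filesystem at a fixed
# interval so a watcher on another node can check its mtime instead of the pid.
# Assumes ~/.vscode-R exists and the 'later' package is installed.
heartbeat <- function(path, interval = 10) {
  file.create(path)  # creates/truncates the file, updating its mtime
  later::later(function() heartbeat(path, interval), delay = interval)
}

heartbeat(path.expand("~/.vscode-R/heartbeat"))
```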
Describe alternatives you've considered
Unfortunately, I can't just ssh directly into a compute node and run vscode-server there; I've definitely tried.
Additional context
I launch the R session on a compute node using a custom script that I load via the r.rterm.linux setting. It almost works, except for the pid problem above: .vsc.attach() works perfectly, but then the vscode-R process can't see the pid, so it immediately cleans up the session.
Thanks for reporting! #1321 introduces a mechanism to detach from a non-existent process. It will definitely be useful if we make the session watcher aware of the scenario where the process is not directly accessible from the server running the session watcher.
After some attempts, I think that the way the extension currently works makes my particular use case impossible. I found out that the nodes don't share tmp directories, so that's a non-starter.
If the nodes share a file system which supports file watchers, then it might be possible to make the session watcher work by making `~/.vscode-R` a symlink to a user folder in the shared file system on all cluster nodes.
Please note that this approach is problematic, since typically (e.g., on NFS on Linux) the OS gets no notification when a remote file is changed (i.e., no inotify event on Linux).
Hope that a solution to #1359 will fix it.
This issue is stale because it has been open for 365 days with no activity.
Any update?
I tried to look into this too. So far, I have managed to view plots and HTML plots.
To view the plots, I needed to (see the consolidated snippet after this list):
- Create port forwarding to the compute instance
- Set up `httpgd` using the command: `hgd(host = "0.0.0.0", port = PORT, silent = TRUE)`
- Get the link using `gsub("http://.*:", "http://127.0.0.1:", hgd_url())`
- To view the plots, open VS Code's built-in Simple Browser: press "Ctrl + Shift + P" and run ">Simple Browser: Show"
To view the HTML plots, I also had the issue with the non-shared /tmp.
So my workaround was to change R's temporary directory by setting the following in my .Renviron:
TMPDIR=/path/to/shared/dir
TMP=/path/to/shared/dir
TEMP=/path/to/shared/dir
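To confirm that a new R session actually picks up the shared temporary directory, a quick check (nothing extension-specific; the shared path is just the placeholder from above) is:

```r
# Both should point under /path/to/shared/dir after restarting R
Sys.getenv(c("TMPDIR", "TMP", "TEMP"))
tempdir()
```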
Now, whenever I would create a plot using e.g. plotly, the ~/.vscode-R/request.log file would get updated with a link to the newly generated file. I currently view them with Microsoft Live Preview.
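As an example of a plot that goes through this path, any htmlwidget should do; the snippet below uses plotly with made-up data, and, as described above, viewing such a widget is what causes request.log to be updated.

```r
# Printing an htmlwidget typically writes its self-contained HTML under
# tempdir(), which now lives on the shared filesystem, and a link to the
# generated file then shows up in ~/.vscode-R/request.log.
library(plotly)
plot_ly(x = 1:10, y = rnorm(10), type = "scatter", mode = "lines")
```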
I have, however, not figured out how to properly view the created .json files for lists and data frames.
And ideally it would be nice to use the web-viewer/html-viewer/json-viewer that this extension already uses.
So would it be possible to add a feature that would:
- Check the file `~/.vscode-R/request.log` and use the already existing functions/commands to open the request?
- Enable the functionality to use a specific port, to enable the port forwarding?

This would enable users to add a custom keybinding to check the request log manually.
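Until something like this exists in the extension, a manual check of the latest entry from the R console could look like the sketch below; the log path is the one mentioned above, and the snippet only prints the raw line rather than opening it in a viewer.

```r
# Print the most recent entry of the session watcher's request log, if any.
req_log <- path.expand("~/.vscode-R/request.log")
if (file.exists(req_log)) {
  cat(tail(readLines(req_log), 1), sep = "\n")
}
```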