Kernel: 4.19.0-17-amd64
Install gcc: sudo apt-get install gcc
gcc version: gcc (Debian 8.3.0-6) 8.3.0
libc version: ldd (Debian GLIBC 2.28-10) 2.28
Install headers: sudo apt-get install linux-headers-$(uname -r)
Get and install CUDA (downloads ~1 GB), then reboot: sudo shutdown -r now
Adjust paths: sudo make altinstall
More info here. Requirements are met since we are running Debian 10.
Uninstall potentially conflicting versions of docker:
sudo apt-get remove docker docker-engine docker.io containerd runc
Prepare to update repositories:
sudo apt-get update
In my case, errors of the type
Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.10' to '10.11'
cropped up, and apparently the cure is to allow release info change:
sudo apt-get --allow-releaseinfo-change update
I also had to remove/comment out an old docker release source in /etc/apt/sources.list for sudo apt-get update to return no errors or warnings.
Install dependencies:
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
(all were already satisfied)
Add the docker repository (make sure to use the debian one!):
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Neat. The docker repository is in an aptly named file docker.list under /etc/apt/sources.list.d
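For reference, on Debian 10 (buster) on amd64 the resulting docker.list should contain a single line like the following (the architecture and codename are filled in by the dpkg and lsb_release substitutions above):

```text
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian buster stable
```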
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Check that it is running:
sudo docker run hello-world
Success.
First create a virtual environment:
virtualenv ~/venv/nnUNet -p python3
Activate the virtual environment:
. ~/venv/nnUNet/bin/activate
Install all the required packages - they are listed in requirements.txt, found here:
pip install -r ~/software/src/venv/nnUNet/requirements.txt
In order for all the packages to match, we have to install these separately:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
Then also install nnUNet:
pip install nnunet
And you are done! You have created the nnUNet virtual environment!
To deactivate the virtual environment: deactivate
Copy the tgz from slicer3d.org. The link works in a browser, but wget doesn't rename the downloaded file, so rename it manually afterwards.
wget https://slicer-packages.kitware.com/api/v1/item/60add706ae4540bf6a89bf98/download
mv download Slicer-4.11.20210226-linux-amd64.tar.gz
From the webpage, the revision number of the download was 29738. Typically, Slicer will first complain about missing libraries.
sudo apt-get install libglu1-mesa libpulse-mainloop-glib0
But later, an error like the following may appear:
Could not load the Qt platform plugin "xcb" in "" even though it was found
This is in fact the same error as above, except that the missing libraries are a bit harder to identify. Following a suggestion, re-run with the QT_DEBUG_PLUGINS flag set:
export QT_DEBUG_PLUGINS=1
~/software/install/Slicer/Slicer
In my case, libxcb-icccm is missing:
Cannot load library /home/studen/software/install/Slicer-4.11.20210226-linux-amd64/lib/QtPlugins/platforms/libqxcb.so:
(libxcb-icccm.so.4: cannot open shared object file: No such file or directory)
Install the missing libraries:
sudo apt-get install libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 libxcb-xkb1 libxkbcommon-x11-0
To run Slicer, an X11 incarnation is required. By default, processing PCs have no X11 interface, and an alternative is required.
sudo apt-get install xpra
XPRA has many interesting characteristics which make it excellent for use as a dummy X server. See the documentation for more. Usage examples:
xpra start :210
~studen/software/install/Slicer/Slicer
Now use the attach method below to connect to this session.
xpra start ssh://vangogh/210 --start-child=/home/studen/software/install/Slicer/Slicer
Detach with Ctrl-C or via the system tray menu.
xpra attach ssh://vangogh/210
This works for sessions started remotely or from your laptop.
Mode    | SSH
Server  | username@172.16.21.37:22
Display | 210
Command | /home/studen/software/install/Slicer/Slicer
See [screenshot][image].
I used 210 to be consistent with the *nx setup, but the display should be unique. No password is needed - rubens is configured to use ssh keys. Disconnect is hidden in the system tray - right-clicking the XPRA system tray icon reveals it. Disconnecting leaves the process running on vangogh, and the same session can later be joined via the Connect button - make sure you remember your display id.
For developers. A remote username must be configured, and Slicer should be started in that remote unit to isolate it from other users. The remote unit must contain user credentials to access LabKey. Such credentials must be copied (once?) from the local machine. This must be done using tools like WinSCP, scp from a command window or similar, which require basic knowledge of *nix that might exceed the user's interest. Other data (images, etc.) are then loaded via LabKey. Ideal workflow: the user starts xpra using the instructions above and gets an operating Slicer window which is already connected to LabKey in the background. Solution: the user directory comes with a preloaded zip file, which is then imported via LabKey.
[image]: 'Screenshot at 2021-11-15 09-25-56.png'
LabKey was modified to initiate pipeline jobs on the local machine using the trigger script machinery. The process was split between two pieces of software:
Processing PCs are kept distinct from the database PCs. LabKey requires such processing PCs to run an equivalent LabKey software, which makes the infrastructure overwhelming. Some thoughts:
- analysisModule formats the call, but sends it to a socket rather than executing it directly. Since combining sockets over multiple programming languages may be cumbersome, it is probably best to still start a shell command, but there should be a flag that tells analysisInterface to start the job remotely.
- analysisInterface; this is identical to the current system call performed by analysisModule, except in python, which might enable some shortcuts, i.e. starting from python. The nice thing about shells is that they run as new threads. However, the previous item already has analysisInterface running a socket client, so it might be best for analysisInterface to use sockets directly.
- analysisInterface runs on the processing PC and manages the (local) logs and execution. The status of the initiating job is updated via the embedded labkey/python interface and no additional sockets are needed. The log file can be transmitted at the end of the job, although running updates might be of interest; those may be handled by analysisInterface using smart uploading strategies that append rather than transmit full files.
- analysisInterface to make the socket itself as transparent as possible. Which begs the question of how processes initiated by different users could be aware of each other. But wait - the user running the socket will be the single user that executes the code, hence a plain json database is fine. Speaking of databases - it might as well use the originating database, which will have to be modified into a queue anyhow, eliminating the need for local json or other overhead.
- analysisInterface gets a submit job request via the socket. It checks back with the server whether it has any jobs running. Here we could apply filters that would allow multiple non-interfering jobs to run simultaneously, but prevent interfering jobs from being started. The python instance waits in a low budget loop and checks whether its turn has come. To preserve order, all jobs issued previously must reach a conclusive state (DONE/FAILED) and no QUEUED job should be ahead in the queue. Then the loop completes and the shell command is issued; the loop switches to waiting for completion, part of which could be a log update. Once the job is completed, the status should be changed - now critical, since further jobs might await that flag.
Check listening ports: ss -tulpn | grep LISTEN
iptables -I INPUT -p tcp -s X.X.X.X/32 --dport 8765 -j ACCEPT
iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 8765 -j DROP
sudo iptables -D INPUT -m conntrack --ctstate INVALID -j DROP
analysisInterface should hold a mapping of server-configuration maps. Does websockets report the caller id? It does, and it can be used: websocket.remote_address[0]
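As a sanity check of the caller-id idea, here is a stdlib-only sketch. It uses plain asyncio TCP rather than the websockets package, but the principle is the same as websocket.remote_address[0]: the server reads the peer address from the accepted connection.

```python
import asyncio

async def handler(reader, writer):
    # Server side: peername plays the role of
    # websocket.remote_address in the websockets package
    caller_ip = writer.get_extra_info("peername")[0]
    writer.write(caller_ip.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    seen_by_server = (await reader.read(100)).decode()
    writer.close()
    server.close()
    await server.wait_closed()
    return seen_by_server

print(asyncio.run(main()))  # 127.0.0.1
```

The returned address is what the server would use to look up the caller's configuration map.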
Processor side:
- Edit serviceScripts/env.sh to change IPSERVER and IPCLIENT.
- Check labkey/setup.json for proper paths and venv. Particularly, if softwareSrc is set in paths.
- Run $HOME/software/src/websocket/serviceScripts/start.sh
- Run sudo $HOME/software/src/websocket/serviceScripts/open_port.sh
Client (labkey) side:
- pip3 install websockets
- Run $HOME/software/src/websocket/send.py AA.BB.CC.DD:TEST:X as the tomcat8 user, where AA.BB.CC.DD is the ip address of the server or its name, if set by DNS.
- Check iptables!
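The request string passed to send.py appears to be colon-separated. A hypothetical parser for it (the field names server/script/argument are my assumption, not taken from the actual send.py):

```python
def parse_request(request: str):
    """Split 'AA.BB.CC.DD:TEST:X' into its three colon-separated fields.
    Field meanings (server address, script, argument) are assumed."""
    server, script, argument = request.split(":", 2)
    return {"server": server, "script": script, "argument": argument}

print(parse_request("127.0.0.1:TEST:X"))
```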
To generate SSH key for Van Gogh access, do
ssh-keygen
This will generate a pair of private/public ssh keys that your program will use to access a remote resource. To create an account, the system administrator will require the public part of the key, while the private part stays with you. The public key is stored in .ssh/id_rsa.pub; please attach its content to an email and send it to the administrator. The content is a single line starting with ssh-rsa, followed by a long string of random symbols and the name of the key, typically a username and the name of the PC.
# cat ~/.ssh/id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDSxY3O+S1EKJ/Dye0GxcW8mdM7ulmZmD+uQ/iG9UElTu
8szDiqKOCA+moLEgOWkwTZL3mUbfVBhwEo0ThP+IKFX2J9NmwVEQvUTH2gCtSWoyA4JeZ4xBhh
hHc2GVVmEo85a5ZmBAnD3rqHLO5ElIV84sqHgace3kxEHos0CgqZgUSVHjuAS529VZyr4AKlIY
liMdmJ6vR9Fn+W0aDaBvkTjhP/QcIobI3VmUguxRqcTZfsl5+qwrQRf/ayho3Tqytxv3R2laQb
lDUn858nElMLmatV5MQ7a9FloPNr+VyTOnQN7QYxrglA+nLn+waUGKP/ue9setPYXNXdOconfFfx
studen@DESKTOP-MJ4P6MG
This will work in a Linux terminal or in a program like MobaXterm, used to access remote PCs from Windows.
Author: Martin Horvat, Luciano Rivetti, Andrej Studen, October 2023
apt-get install nodejs npm
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
python3 -m pip install jupyterlab notebook
mkdir -p /etc/jupyterhub/systemd
cd /etc/jupyterhub
jupyterhub --generate-config
Edit /etc/jupyterhub/systemd/jupyterhub.service
to read
----
[Unit]
Description=JupyterHub
After=syslog.target network.target
[Service]
User=root
WorkingDirectory=/etc/jupyterhub
ExecStart=jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
[Install]
WantedBy=multi-user.target
Perform:
ln -s /etc/jupyterhub/systemd/jupyterhub.service /etc/systemd/system/jupyterhub.service
systemctl enable jupyterhub.service
systemctl start jupyterhub.service
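The apache rules below proxy to port 8000, which is jupyterhub's default. If the hub should listen elsewhere, set it explicitly in the generated config; a minimal jupyterhub_config.py fragment (the value shown is the default, included here only for clarity):

```python
# /etc/jupyterhub/jupyterhub_config.py (fragment)
c.JupyterHub.bind_url = "http://:8000"
```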
Standard setup, including certbot to generate the certificate. Add instructions from the documentation; the only apache module missing was proxy_wstunnel.
a2enmod ssl rewrite proxy headers proxy_http proxy_wstunnel
Also, add
RewriteEngine On
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://vangogh.fmf.uni-lj.si:8000/$1 [P,L]
to the apache site configuration file under <VirtualHost>, and under <Location> add
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
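Putting the pieces together, the site configuration might look roughly as follows. This is a sketch, not the actual file: the certificate paths are whatever certbot generated, and the ProxyPass/ProxyPreserveHost lines are my assumption based on the proxy_http module enabled above.

```apache
<VirtualHost *:443>
    ServerName vangogh.fmf.uni-lj.si
    SSLEngine on
    # Certificate paths as written by certbot (adjust to your setup)
    SSLCertificateFile /etc/letsencrypt/live/vangogh.fmf.uni-lj.si/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/vangogh.fmf.uni-lj.si/privkey.pem

    # Websocket upgrade requests go straight to the hub
    RewriteEngine On
    RewriteCond %{HTTP:Connection} Upgrade [NC]
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteRule /(.*) ws://vangogh.fmf.uni-lj.si:8000/$1 [P,L]

    # Everything else is plain HTTP proxying to jupyterhub
    ProxyPreserveHost On
    ProxyPass        / http://vangogh.fmf.uni-lj.si:8000/
    ProxyPassReverse / http://vangogh.fmf.uni-lj.si:8000/

    <Location "/">
        RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
    </Location>
</VirtualHost>
```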