Table of Contents

guest
2025-04-30
Van Gogh installation
   Docker
   Installing nnUNet
   Slicer
   Remote Pipeline
   SSH Key generation
   Jupyter-Hub

Van Gogh installation


Disk partition layout:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/nvme0n1p1 during installation
UUID=c65d2374-1fba-4bf9-a028-9852c189a71e / ext4 errors=remount-ro 0 1
# /data was on /dev/nvme0n1p4 during installation
UUID=ac4f0e46-b874-4d00-8c90-120ba241f8fd /data ext4 defaults 0 2
# /data1 was on /dev/sdb1 during installation
UUID=8566257a-3500-4ab5-a12f-6768baea74f3 /data1 ext4 defaults 0 2
# /home was on /dev/nvme0n1p3 during installation
UUID=21f313b2-eb50-474e-b86f-fc9f784099c5 /home ext4 defaults 0 2
# swap was on /dev/nvme0n1p2 during installation
UUID=cef07542-2d79-4d02-9eaa-865a39ed6e7c none swap sw 0 0
Mounted filesystems (df -h):
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 9.4M 13G 1% /run
/dev/nvme0n1p1 59G 1.3G 55G 3% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/nvme0n1p3 234G 61M 222G 1% /home
/dev/nvme0n1p4 564G 73M 536G 1% /data
/dev/sda1 5.5T 89M 5.2T 1% /data1
tmpfs 13G 0 13G 0% /run/user/1000
Prevent password and root login:
  • Install sudo (apt-get install sudo)
  • Add user to sudo list (usermod -G sudo user)
  • In /etc/passwd, update root's shell to /usr/sbin/nologin
Restrict remote login to SSH keys:
  • Copy the public key from the remote account to the server (cat id_rsa.pub on the remote machine; append the key to .ssh/authorized_keys on the server - or use the ssh-copy-id sketch after this list)
  • Edit /etc/ssh/sshd_config
    ChallengeResponseAuthentication no
    #in newer sshd, this changed to
    KbdInteractiveAuthentication no
    PasswordAuthentication no
    #UsePAM no
    PermitRootLogin no
    #PermitRootLogin prohibit-password
  • Restart sshd: sudo systemctl restart ssh
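
Instead of pasting the key by hand, ssh-copy-id (part of the stock OpenSSH client tools) appends it to .ssh/authorized_keys on the server; run it from the remote account while password login still works. The user/host below are placeholders:

# run on the remote machine, before password login is disabled
ssh-copy-id user@vangogh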
Kernel version:

4.19.0-17-amd64

Install gcc:

sudo apt-get install gcc

Gcc version:

gcc (Debian 8.3.0-6) 8.3.0

LIBC version:

ldd (Debian GLIBC 2.28-10) 2.28

Install headers:

sudo apt-get install linux-headers-$(uname -r)

Get and install CUDA (downloads ~ 1 GB):
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/debian10/x86_64/ /"
sudo apt-get install gnupg
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/debian10/x86_64/7fa2af80.pub
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda
To get the NVIDIA driver up and running in place of the nouveau driver, reboot:

sudo shutdown -r now
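
After the reboot, nvidia-smi (shipped with the driver packages) is a quick sanity check that the NVIDIA kernel module loaded and both GPUs are visible:

nvidia-smi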

Adjust paths:
CUDA=/usr/local/cuda-11.4
export PATH=${CUDA}/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${CUDA}/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
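These exports only live for the current shell. A minimal sketch of making them permanent, assuming a per-user bash setup:

cat >> ~/.profile <<'EOF'
CUDA=/usr/local/cuda-11.4
export PATH=${CUDA}/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${CUDA}/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF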
Check driver:
studen@vangogh:~$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 470.42.01 Tue Jun 15 21:26:37 UTC 2021
GCC version: gcc version 8.3.0 (Debian 8.3.0-6)
Check compiler:
studen@vangogh:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
Compile examples in NVIDIA_CUDA-11.4_Samples:
cuda-install-samples-11.4.sh .
cd NVIDIA_CUDA-11.4_Samples
make
Check deviceQuery:
studen@vangogh:~/NVIDIA_CUDA-11.4_Samples$ bin/x86_64/linux/release/deviceQuery
bin/x86_64/linux/release/deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "NVIDIA RTX A5000"
CUDA Driver Version / Runtime Version 11.4 / 11.4
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24256 MBytes (25434259456 bytes)
(064) Multiprocessors, (128) CUDA Cores/MP: 8192 CUDA Cores
GPU Max Clock rate: 1695 MHz (1.70 GHz)
Memory Clock rate: 8001 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "NVIDIA RTX A5000"
CUDA Driver Version / Runtime Version 11.4 / 11.4
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24256 MBytes (25434652672 bytes)
(064) Multiprocessors, (128) CUDA Cores/MP: 8192 CUDA Cores
GPU Max Clock rate: 1695 MHz (1.70 GHz)
Memory Clock rate: 8001 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 33 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from NVIDIA RTX A5000 (GPU0) -> NVIDIA RTX A5000 (GPU1) : Yes
> Peer access from NVIDIA RTX A5000 (GPU1) -> NVIDIA RTX A5000 (GPU0) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 2
Result = PASS
Bandwidth test:
studen@vangogh:~/NVIDIA_CUDA-11.4_Samples$ bin/x86_64/linux/release/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: NVIDIA RTX A5000
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 26.2

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 27.1

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 649.0

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Get CUDA 10.0 (not part of the core installation, will be dropped):
wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64
mv cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64 cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-10-0-local-10.0.130-410.48/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda-libraries-10-0
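
A quick check that the 10.0 runtime libraries actually landed:

dpkg -l | grep cuda-libraries-10-0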
Use python3.6 (deepMedic stuff, will be dropped):
sudo apt install wget build-essential libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev libgdbm-compat-dev liblzma-dev
wget https://www.python.org/ftp/python/3.6.14/Python-3.6.14.tgz
tar xvzf Python-3.6.14.tgz
cd Python-3.6.14
./configure --enable-optimizations
make -j 4
Tests that failed or were not run:
26 test_asyncore skipped
53 test_cmd_line failed
90 test_curses skipped (resource denied)
101 test_devpoll skipped
149 test_gdb skipped
170 test_httplib:ConnectionResetError: [Errno 104] Connection reset by peer
173 test_imaplib:ConnectionResetError: [Errno 104] Connection reset by peer
192 test_kqueue skipped
216 test_msilib skipped
219 test_multiprocessing_fork skipped
220 test_multiprocessing_forkserver skipped
221 test_multiprocessing_main_handling skipped
222 test_multiprocessing_spawn skipped
234 test_ossaudiodev skipped (resource denied)
298 test_smtpnet skipped (resource denied)
301 test_socketserver skipped (resource denied)
307 test_startfile skipped
320 test_subprocess skipped
345 test_timeout skipped (resource denied)
346 test_tix skipped (resource denied)
347 test_tk skipped (resource denied)
353 test_ttk_guionly skipped (resource denied)
test_urllib2_localnet: [Errno 104] Connection reset by peer
test_urllib2net skipped (resource denied)
test_urllibnet skipped (resource denied)
test_winconsoleio skipped
test_warnings failed
test_winreg skipped
test_winsound skipped (resource denied)
400 test_xmlrpc_net skipped (resource denied)
404 test_zipfile64 skipped (resource denied)

sudo make altinstall
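
make altinstall (rather than make install) is deliberate: it installs only the versioned binary (python3.6) and leaves the system python3 untouched. Verify the new interpreter afterwards:

python3.6 --version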




Docker


Installation of Docker

More info here. Requirements are met since we are running Debian 10.

Uninstall potentially conflicting version of docker:

sudo apt-get remove docker docker-engine docker.io containerd runc

Prepare to update repositories:

sudo apt-get update

In my case, errors of the type

Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.10' to '10.11'

cropped up, and apparently the cure is to allow release info change:

sudo apt-get --allow-releaseinfo-change update

I also had to remove/comment out an old docker release source in /etc/apt/sources.list for sudo apt-get update to return without errors or warnings.

Install dependencies:

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

(all were already satisfied)

Add the docker repository (debian!):

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt
/sources.list.d/docker.list > /dev/null

Neat. The docker repository lands in an aptly named file, docker.list, under /etc/apt/sources.list.d.

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Check that it is running:

sudo docker run hello-world

Success.
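
Optionally, to run docker without prefixing sudo, the standard post-install step is adding the user to the docker group (note this is effectively root-equivalent; log out and back in for the group to take effect):

sudo usermod -aG docker $USER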




Installing nnUNet


First create a virtual environment:
virtualenv ~/venv/nnUNet -p python3

Activate the virtual environment:
. ~/venv/nnUNet/bin/activate

Install all the required packages - they are listed in requirements.txt, found here:
pip install -r ~/software/src/venv/nnUNet/requirements.txt

In order for all the packages to match, we have to install these separately:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

Then also install nnUNet:
pip install nnunet
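
nnU-Net v1 (the nnunet package installed above) also expects three environment variables pointing at its data folders before anything can be trained or predicted; the paths below are placeholders, adjust them to the local disk layout:

export nnUNet_raw_data_base=/data/nnUNet_raw_data_base
export nnUNet_preprocessed=/data/nnUNet_preprocessed
export RESULTS_FOLDER=/data/nnUNet_trained_models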

And you are done! You have created the nnUNet virtual environment!
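
A quick check that the CUDA build of torch actually sees the GPUs (run inside the activated environment):

python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"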

To deactivate virtual environment: deactivate




Slicer


Slicer on headless server

Copy the tgz from slicer.org. The link works in a browser, but the file doesn't get renamed with wget, so rename it manually afterwards.

wget https://slicer-packages.kitware.com/api/v1/item/60add706ae4540bf6a89bf98/download
mv download Slicer-4.11.20210226-linux-amd64.tar.gz

According to the web page, the revision number of the download was 29738. Typically, Slicer will first complain about missing libraries.

sudo apt-get install libglu1-mesa libpulse-mainloop-glib0

But later, an error like the following may be seen

Could not load the Qt platform plugin "xcb" in "" even though it was found

This is in fact the same error as above, except that the missing libraries are a bit harder to decipher. Following a suggestion, re-run with the QT_DEBUG_PLUGINS flag set:

export QT_DEBUG_PLUGINS=1
~/software/install/Slicer/Slicer

In my case, libxcb-icccm is missing:

Cannot load library /home/studen/software/install/Slicer-4.11.20210226-linux-amd64/lib/QtPlugins/platforms/libqxcb.so: 
  (libxcb-icccm.so.4: cannot open shared object file: No such file or directory)

Install the missing libraries:

sudo apt-get install libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 \
  libxcb-render-util0 libxcb-xinerama0 libxcb-xkb1 libxkbcommon-x11-0

XPRA, an X emulator

To run Slicer, some incarnation of X11 is required. By default, the processing PCs have no X11 interface, so an alternative is required.

sudo apt-get install xpra

XPRA has many interesting characteristics which make it excellent as a dummy X server. See more in the documentation. Usage examples:

  • Start slicer remotely. Connect via terminal to vangogh and start xpra first:
xpra start :210
~studen/software/install/Slicer/Slicer

Now use the attach method below to connect to this session.

  • Linux: start a new session from your laptop: xpra start ssh://vangogh/210 --start-child=/home/studen/software/install/Slicer/Slicer. Detach with Ctrl-C or using the system tray menu.
  • Linux: connect to running session from your laptop: xpra attach ssh://vangogh/210. This works for sessions started remotely or from your laptop.
  • Windows from laptop: Download the installer and install. Run XPRA, select Connect and fill in the fields as shown below; username is your vangogh username:
Mode: SSH
Server: username@172.16.21.37:22
Display: 210
Command: /home/studen/software/install/Slicer/Slicer

See [screenshot][image].

I used 210 to be consistent with the *nx setup, but the display number should be unique. No password is needed - rubens is configured to use ssh keys. Disconnect is hidden in the system tray - right-clicking the XPRA system tray icon reveals it. Disconnecting leaves the process running on vangogh, and the same session can later be joined via the Connect button - make sure you remember your display id.
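
To see which display numbers are already taken on the server before picking one:

xpra list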

Notes on running xpra on Windows (developers)

For developers. A remote username must be configured, and Slicer should be started under that remote account to isolate it from other users. The remote account must contain user credentials to access LabKey. Such credentials must be copied (once?) from the local machine. This must be done using tools like WinSCP, scp from a command window or similar, which require some knowledge of *nix that might exceed the user's interest. Other data (images, etc.) are then loaded via LabKey. Ideal workflow: the user starts xpra using the instructions above and gets a working Slicer window that is already connected to LabKey in the background. Solution: the user directory comes with a preloaded zip file, which is then imported via LabKey.

[image]: 'Screenshot at 2021-11-15 09-25-56.png'




Remote Pipeline


Notes on using LabKey as a socket client to initiate analysis on remote processing PC

Local job execution managed by LabKey

LabKey was modified to initiate pipeline jobs on the local machine using the trigger script machinery. The process was split between two pieces of software:

  • LABKEY/analysisModule. The analysis module runs on the LabKey server, is written in JavaScript, and combines data from the instruction list (Runs) with the json configuration from Analysis/@files/configurationFiles to initiate a python script that runs as user tomcat8 on the LabKey server. The actual python scripts are part of project specific code and are expected to be in standard software directories (e.g. /home/nixUser/software/src). The task of the module is to format a shell command that combines items from the line, and to execute it.
  • LABKEY/analysisInterface. This is a set of overhead python routines that will delegate execution to project specific scripts and will manage program execution flags (PROCESSING, FAILED, DONE) and log files that are shown together with job execution.
  • The native pipeline execution was abandoned due to lack of flexibility.

Remote job execution model

Processing PCs are kept distinct from the database PCs. LabKey requires such processing PCs to run equivalent LabKey software, which makes the infrastructure overwhelming. Some thoughts:

  • As a replacement, a python socket model is suggested. As before, analysisModule formats the call, but sends it to a socket rather than executing it directly. Since combining sockets over multiple programming languages may be cumbersome, it is probably best to still start a shell command, but with a flag that tells analysisInterface to start the job remotely.
  • The remote socket starts a python job involving analysisInterface; this is identical to the current system call performed by analysisModule, except in python, which might enable some shortcuts, i.e. starting from python directly. The nice thing about shells is that they run as new threads. However, the previous item already has analysisInterface running a socket client, so it might be best for analysisInterface to use sockets directly.
  • analysisInterface runs on the processing PC and manages the (local) logs and execution. The status of the initiating job is updated via the embedded labkey/python interface, so no additional sockets are needed. The log file can be transmitted at the end of the job, although running updates might be of interest; those may be handled by analysisInterface using smart uploading strategies that append rather than transmit full files.
  • Due to the asynchronicity of the submissions, a queue could be implemented at the processing PC site, probably by analysisInterface, to keep the socket itself as transparent as possible. Which begs the question of how processes initiated by different users could be aware of each other. But wait - the user running the socket will be the single user that executes the code, hence a plain json database is fine. Speaking of databases - it might as well use the originating database, which will have to be modified into a queue anyhow, eliminating the need for a local json or other overhead.
  • This makes the remote pipeline fully transparent: the end-user incurs no additional overhead by using remote as opposed to local hosts.
  • Let's recapitulate: analysisInterface gets a submit-job request via the socket. It checks back with the server whether it has any jobs running. Here we could apply filters that would allow multiple non-interfering jobs to run simultaneously, but prevent interfering jobs from being started. The python instance waits in a low-budget loop and checks whether its turn has come. To preserve order, all jobs issued previously must reach a conclusive state (DONE/FAILED) and no QUEUED job should be ahead in the queue. Then the loop completes and the shell command is issued; the loop switches to waiting for completion, during which a potential log update could happen. Once the job is completed, the status should be changed - now critical, since further jobs might await that flag.

Network considerations

  • List ports: ss -tulpn | grep LISTEN
  • Open the port (accept a trusted host, drop everyone else):
    iptables -I INPUT -p tcp -s X.X.X.X/32 --dport 8765 -j ACCEPT
    iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 8765 -j DROP
  • Remove an iptables rule: sudo iptables -D INPUT -m conntrack --ctstate INVALID -j DROP
  • The message should contain the calling server and the jobId. analysisInterface should hold a mapping of server-configuration maps. Does websockets report the caller id? It does, and it can be used: websocket.remote_address[0]

Server setup

Processor side:

  • Clone websocket. Edit serviceScripts/env.sh to change IPSERVER and IPCLIENT
  • Clone analysisInterface
  • Check .labkey/setup.json for proper paths and venv. Particularly, whether softwareSrc is set in paths.
  • Start server: $HOME/software/src/websocket/serviceScripts/start.sh
  • Enable port: sudo $HOME/software/src/websocket/serviceScripts/open_port.sh

Client (labkey) side:

  • Check from the labkey pc by cloning websocket (above), installing websockets (pip3 install websockets) and running: $HOME/software/src/websocket/send.py AA.BB.CC.DD:TEST:X, where AA.BB.CC.DD is the ip address of the server, or its name if set by DNS.
  • Install websockets for the tomcat8 user.

Debug

Check iptables!
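
Listing the INPUT chain with line numbers makes it easy to spot (and delete by number) a rule that silently drops the socket traffic:

sudo iptables -L INPUT -n --line-numbers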




SSH Key generation


SSH Key generation

To generate SSH key for Van Gogh access, do

ssh-keygen 

This will generate a pair of private/public ssh keys that your program will use to access a remote resource. To create an account, the system administrator will require the public part of the key, while the private part stays with you. The public key is stored in .ssh/id_rsa.pub; please attach its content to an email and send it to the administrator. The content is a single line starting with ssh-rsa, followed by a long string of random-looking symbols and the name of the key, typically the username and the name of the PC.

# cat ~/.ssh/id_rsa.pub
ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDSxY3O+S1EKJ/Dye0GxcW8mdM7ulmZmD+uQ/iG9UElTu
8szDiqKOCA+moLEgOWkwTZL3mUbfVBhwEo0ThP+IKFX2J9NmwVEQvUTH2gCtSWoyA4JeZ4xBhh
hHc2GVVmEo85a5ZmBAnD3rqHLO5ElIV84sqHgace3kxEHos0CgqZgUSVHjuAS529VZyr4AKlIY
liMdmJ6vR9Fn+W0aDaBvkTjhP/QcIobI3VmUguxRqcTZfsl5+qwrQRf/ayho3Tqytxv3R2laQb
lDUn858nElMLmatV5MQ7a9FloPNr+VyTOnQN7QYxrglA+nLn+waUGKP/ue9setPYXNXdOconfFfx 
studen@DESKTOP-MJ4P6MG

This works in a Linux terminal or in a program like MobaXterm, used to access remote PCs from Windows.
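
If the client tools are recent, an Ed25519 key is a common, shorter alternative; the workflow is identical, only the file names change:

ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub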




Jupyter-Hub


Jupyter-hub

Authors: Martin Horvat, Luciano Rivetti, Andrej Studen, October 2023

Install

apt-get install nodejs npm
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
python3 -m pip install jupyterlab notebook

Config

A. Generating configuration:

mkdir -p /etc/jupyterhub/systemd
cd /etc/jupyterhub
jupyterhub --generate-config

B. Generating the systemd service:

Edit /etc/jupyterhub/systemd/jupyterhub.service to read

[Unit]
Description=JupyterHub
After=syslog.target network.target

[Service]
User=root
WorkingDirectory=/etc/jupyterhub
ExecStart=jupyterhub -f /etc/jupyterhub/jupyterhub_config.py

[Install]
WantedBy=multi-user.target

Perform:

ln -s /etc/jupyterhub/systemd/jupyterhub.service /etc/systemd/system/jupyterhub.service
systemctl enable jupyterhub.service
systemctl start jupyterhub.service
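
If systemd does not pick up the linked unit, reload the daemon and check that the hub actually came up:

systemctl daemon-reload
systemctl status jupyterhub.service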

Reverse-proxy

Standard setup, including certbot to generate the certificate. Follow the instructions from the documentation; the only Apache module missing was proxy_wstunnel.

a2enmod ssl rewrite proxy headers proxy_http proxy_wstunnel

Also, add

RewriteEngine On
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://vangogh.fmf.uni-lj.si:8000/$1 [P,L]

to the Apache site configuration file under <VirtualHost>; and under <Location>, add

RequestHeader     set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}

Test

http://jupyter-vangogh.fmf.uni-lj.si
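
A quick headless check that the reverse proxy answers (expect a redirect towards the hub login page):

curl -I http://jupyter-vangogh.fmf.uni-lj.si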