Background

Big thanks to Microsoft and the cdr/code-server project, which make a modern web-based IDE (Integrated Development Environment) possible. Last year, when I discovered the code-server project, I was really excited. I had built a desktop in mid-2019 to pursue my machine learning hobby, with 32 GB of memory and 1.5 TB of SSD storage (512 GB + 1 TB). Since machine learning is only a hobby, I didn’t run any crazy training jobs, just a handful of study projects. So I started to think about how I could put this machine to use rather than letting it sit there gathering dust. I came up with the idea of turning it into a web-based development machine so that I can code from anywhere with an internet connection. Happily, the code-server project, together with Microsoft’s vscode, makes that a reality.

Tools I used to build the system

Here are all the tools I used to build the system:

  1. Docker: a container platform.
  2. Nginx: a load balancer and reverse proxy.
  3. Code-Server: a vscode-based web IDE server.
  4. JupyterLab: a nicer version of the Jupyter notebook. It supports creating terminals, which makes it more convenient for development.
  5. nginx-sso: an SSO (single sign-on) solution for Nginx. We will talk about it later; I originally used it because of my iPad, but found more benefits afterwards.
  6. FreeOTP: a free one-time password app which supports TOTP (Time-Based One-Time Password) and HOTP (HMAC-Based One-Time Password). You can use the FreeOTP QR Bar code Generator to generate a QR code that helps you set up the token in the phone app. FreeOTP supports both iOS and Android.
  7. ddclient: a dynamic DNS client that updates your dynamic DNS service with your latest IP. I’m using a regular internet provider, which means I don’t get a static IP; addresses are assigned dynamically, so I have no way to point my domain at a fixed IP on the DNS server.
  8. Let’s Encrypt: a free CA (certificate authority) which issues SSL certificates to encrypt the communication between the web browser and the server.

My expectation for the whole development system

As I mentioned above, I’m an ML hobbyist, so I would like to have my Jupyter Notebook alongside the new IDE (yes, you can use vscode to write notebooks as well). So I’d like the whole system to support both at the same time. Here are my detailed expectations:

  1. Be able to support multiple web-based development systems at the same time.
  2. Be able to authenticate requests. Since it is on the internet, I don’t want everybody to have access to my personal resources.
  3. Be able to keep the different IDEs isolated so that one doesn’t impact another.
  4. Be as secure as possible: use HTTPS, use passwords, etc.
  5. Be accessible from the internet so that I can reach it from anywhere, even while commuting.
  6. Be able to manage all IDEs under one domain, so that I can set up an internet domain easily.

My Considerations

First, I needed to decide how to build the system. I could install all the software directly on the desktop host, but that would make the environment hard to control and maintain: you end up spending lots of time dealing with conflicts, and you have to expose your physical host to the internet as well. So I turned to a containerization solution: Docker. Although I had very little experience with Docker, I thought it would be a good fit for my isolation requirement, and it has plenty of images to choose from (building your own is also very easy, since you can base it on existing images). Its performance on Linux is also excellent. So I chose Docker as the base system for managing the functions I want to add to the whole setup.

Second, I want the system to be accessible under one single domain, so I considered using a reverse proxy, which is where Nginx comes into the picture. I’m a big fan of keeping components separate and putting them behind a VIP or reverse proxy, so naturally I put this piece in place at the very beginning. I really like Nginx because it has so many plugins and there is plenty of information about it all over the internet. And since it supports plugins, you can write one yourself if you need to.

How all those pieces work together

To explain how the system works, the easiest way is to use a diagram. Let’s simulate a request from the browser to see how it goes through the different components of this remote development system.

graph LR;
  subgraph Docker-Apps
    D3[Code-Server];
    D4[JupyterLab];
  end
  subgraph Nginx
    D --> D1[Nginx];
    D1 --> D2{isAuthorized};
    D2 --> |Y| D3;
    D2 --> |Y| D4;
    D2 --> |N| D1;
  end
  subgraph internet
    A[Web Browser] --> C[Home Router];
    C -->|Port Forwarding| D[Docker];
  end

System Setup

In this chapter, I will explain the whole setup in detail. My desktop runs Ubuntu 18.04; for other OSes the steps should be similar, if not exactly the same. The reason for choosing Linux is that Docker’s performance on Linux is amazing, way faster than on macOS.

Docker installation

On Ubuntu, installing Docker is quite simple.

sudo apt install docker.io docker-compose
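
Once it is installed, a quick sanity check that the daemon is running is to launch the hello-world test image:

sudo docker run hello-world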

If you’re going to use a GPU inside Docker on Ubuntu 18.04, as of today you need to install nvidia-docker. Here are the commands grabbed from the nvidia-docker project:

# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker

For other platforms, please refer to the nvidia-docker project’s page to find the commands.

*Please note that docker-compose doesn’t support nvidia devices. So for the docker containers that will use the GPU, you have to start them manually, for example as shown below.
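
For example, a GPU-enabled container can be started manually with the --gpus flag (available since Docker 19.03); the CUDA image tag below is just an illustration:

$ docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi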

Prepare the docker-compose file and additional folders

Now let’s put all the necessary configuration together in one folder, so that it will be easier to manage all the resources needed by the different docker containers. Let’s create a folder called docker-ide under your home folder (or any folder you choose).

$ mkdir ~/docker-ide
$ touch ~/docker-ide/docker-compose.yml

And put the following content into the newly created docker-compose.yml:

version: '3.7'
services:

Code-server

For code-server the configuration is easy, since we will run it behind nginx and nginx will take care of the authentication and SSL parts, so for code-server itself we can disable both. One thing I found useful is to map /home/coder/.local and /home/coder/project to folders on the host. /home/coder/.local contains the configuration data vscode relies on, such as extensions; if you map it to a real folder on the host, it persists and you don’t need to worry about restarting your container. /home/coder/project is the default folder for your own files and projects, so it definitely should not be lost.

  1. So let’s create folders for those two:
    $ cd ~/docker-ide 
    $ mkdir -p code-server/local 
    $ mkdir code-server/project
    
  2. Let’s add the configuration for code-server to our docker-compose.yml located at ~/docker-ide/docker-compose.yml, right under the services: section:
version: '3.7'
services:
  code-server:
    image: codercom/code-server
    ports: 
    - 8080:8080 # Should be removed once you setup nginx. It is for testing the compose file only.
    volumes:
    - ${BASE_DIR}/code-server/local:/home/coder/.local
    - ${BASE_DIR}/code-server/project:/home/coder/project
    command: ['--auth', 'none']

Now you can try your setup by running the following command:

$ BASE_DIR=~/docker-ide docker-compose -f docker-compose.yml up
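
If the container comes up cleanly, a quick sanity check from another terminal (assuming nothing else is already using port 8080) is:

$ curl -I http://localhost:8080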

JupyterLab

The configuration for JupyterLab is almost identical to the code-server one, so I’ll skip it for now and add it properly later; a rough sketch is included below for reference.
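
For reference, here is a minimal sketch of what that service could look like, assuming the jupyter/base-notebook image and a /jupyter/ path prefix; adjust the image, volumes, and options to your needs:

  jupyterlab:
    image: jupyter/base-notebook # assumption: any jupyter/docker-stacks image that ships JupyterLab works
    ports:
    - 8888:8888 # Should be removed once you setup nginx. It is for testing only.
    volumes:
    - ${BASE_DIR}/jupyterlab/work:/home/jovyan/work
    # Disable the token since nginx + nginx-sso handle authentication,
    # and serve under /jupyter/ so it sits behind the reverse proxy.
    command: ['start.sh', 'jupyter', 'lab', '--NotebookApp.token=', '--NotebookApp.base_url=/jupyter/']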

Nginx-sso

For nginx-sso it’s a bit more complex: it is an SSO facade, and you still need an authentication provider behind it. Depending on what kind of provider you want to integrate with, the configuration can differ. Here, since I’ll be the only user of this environment, I’ll use simple username and password authentication, which should be good enough for a development host.

  1. Set up the corresponding folders. For nginx-sso this is simple: you only need one folder for its config and one for plugins (if you’re planning to use providers other than simple).
    $ mkdir -p ~/docker-ide/nginx-sso/data
    $ mkdir -p ~/docker-ide/nginx-sso/plugins
    
  2. Configure nginx-sso. nginx-sso reads a config file called config.yaml; the full configuration reference can be found at config.yaml. Put your config.yaml under the folder nginx-sso/data. Here is a sample configuration I’m using:
login:
  title: "My Home IDE Login"
  default_method: "simple"
  default_redirect: "http://your.website/"
  hide_mfa_field: true # Set to false if you are going to use MFA(multiple factor authentication)
  names:
    simple: "Username / Password"

cookie:
  domain: "your.site.com" # Change to your domain
  # Make sure you change it, it is the cookie's encryption key
  authentication_key: "Ff1uWJcLouKu9kwxgbnKcU3ps47gps72sxEz79TGHFCpJNCPtiZAFDisM4MWbstH"
  expire: 3600        # Optional, default: 3600
  prefix: "nginx-sso" # Optional, default: nginx-sso
  secure: true        # Optional, default: false

# Optional, default: 127.0.0.1:8082
listen:
  addr: "0.0.0.0" # nginx will be deployed in another container, so 127.0.0.1 won't be accessible. 
  port: 8082

audit_log:
  targets:
    - fd://stdout
    - file:///var/log/nginx-sso/audit.jsonl
  events: ['access_denied', 'login_success', 'login_failure', 'logout', 'validate']
  headers: ['x-origin-uri']
  trusted_ip_headers: ["X-Forwarded-For", "RemoteAddr", "X-Real-IP"]

acl:
  rule_sets:
    # Allow for any uri for coder and admins.
    - rules:
      - field: "x-origin-uri"
      allow: ["coder", "@admins"]

plugins:
  directory: /plugins/

providers:
  # Authentication against embedded user database
  # Supports: Users, Groups, MFA
  simple:
    enable_basic_auth: false

    # Unique username mapped to bcrypt hashed password
    users:
      # The password is coder-test!!!
      coder: "$2y$12$aDy6kIufCO/zvWWzC.74TO1ZHmuwZWgoS5edTAbX7EPYuYH0imReK"

    # Groupname to users mapping
    groups:
      admins: ["coder"]
    
  3. Add nginx-sso to docker-compose.yml:
  nginx-sso:
    image: luzifer/nginx-sso
    ports:
    - 8082:8082 # Remove this after you setup nginx.
    volumes: 
    - ${BASE_DIR}/nginx-sso/data:/data
    - ${BASE_DIR}/nginx-sso/plugins:/plugins

Run BASE_DIR=/your/base-dir/ docker-compose -f docker-compose.yml up. You should be able to access the login page at http://localhost:8082/login. After a successful login, it will redirect you to the default redirect you set in config.yaml.
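
The users entry maps a username to a bcrypt-hashed password. If you want to generate a hash for your own password, one way (assuming the apache2-utils package is installed) is:

$ htpasswd -bnBC 12 "" 'your-password-here' | tr -d ':\n'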

FreeOTP (optional)

If you want to add another layer of security, enable MFA so that an attacker has to guess both a static password and a dynamic code (or, you could say, another secret, because the token is generated from the secret we configure below). Go to FreeOTP’s QR Code Gen website and generate a secret (you will find a very long string in an input box, along with a Random button to its right). Either generate another one (by clicking Random) or copy the one in the input box, and paste it into the secret field below:

  simple:
    enable_basic_auth: false

    # Unique username mapped to bcrypt hashed password
    users:
      # The password is coder-test!!!
      coder: "$2y$12$aDy6kIufCO/zvWWzC.74TO1ZHmuwZWgoS5edTAbX7EPYuYH0imReK"

    # Groupname to users mapping
    groups:
      admins: ["coder"]
    # MFA configs: Username to configs mapping
    mfa:
      coder:
        - provider: totp
          attributes:
            secret: MZXW6YTBOIFA  # required
            period: 30            # optional, defaults to 30 (Google Authenticator)
            skew: 1               # optional, defaults to 1 (Google Authenticator)
            digits: 8             # optional, defaults to 6 (Google Authenticator)
            algorithm: sha1       # optional (sha1, sha256, sha512), defaults to sha1 (Google Authenticator)
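
To double-check that your phone app and the server agree on the secret, you can generate the current code on the host (assuming the oathtool package is installed and the defaults above) and compare it with what the app shows:

$ oathtool --totp -b -d 8 MZXW6YTBOIFA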

Let’s Encrypt

For Let’s Encrypt, please follow the instructions on the Let’s Encrypt site and select the option that fits your situation.
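
One common approach for a home server like this is certbot in standalone mode; it needs port 80 reachable from the internet while it runs, and the domain below is a placeholder:

$ sudo apt install certbot
$ sudo certbot certonly --standalone -d your.site.com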

Nginx

For Nginx, there are several things that need to be configured:

  1. Add Nginx to the docker-compose.yml
  2. Configure the nginx to make the Code-Server work
  3. Configure the nginx with SSO authentication
  4. Configure the nginx with SSL

    Create a folder to save the configurations:

    $ mkdir -p ~/docker-ide/nginx
    $ touch ~/docker-ide/nginx/nginx.conf  
    

Add Nginx to docker

Add the following to docker-compose.yml:

  nginx:
    image: nginx
    depends_on: 
      - nginx-sso
      - code-server
    ports:
      - 8443:443
    volumes:
      - ${BASE_DIR}/nginx:/etc/nginx
    restart: always
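
Note that the nginx.conf below references certificates under /etc/letsencrypt, which only exists on the host. If you use Let’s Encrypt, you will likely also need to mount that path into the container, for example:

    volumes:
      - ${BASE_DIR}/nginx:/etc/nginx
      - /etc/letsencrypt:/etc/letsencrypt:ro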

Configure the Code-Server

The detailed configuration guide can be found at nginx.conf. The following configures code-server under a relative path, your.site.com/coder/, so that you can mount other servers under different paths. In my case, I have code-server at /coder/ and the Jupyter Notebook at /jupyter/. For this article, I will demonstrate adding code-server; the Jupyter Notebook setup is similar. Since code-server uses websockets, two additional headers should be added:

proxy_set_header   Upgrade $http_upgrade;
proxy_set_header   Connection upgrade;

The full configuration for nginx.conf is as follows:

worker_processes auto;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen               443 ssl;
        server_name          your.site.com;
        # Use local resolver which can resolve host name
        resolver             127.0.0.11 valid=10s;

        location /coder/ {
            rewrite            ^/coder/(.*) /$1  break;
            proxy_pass         http://code-server:8080$uri$is_args$args;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection upgrade;
        }
    }
}

And remove the following ports mapping from the code-server section in docker-compose.yml:

ports:
  - 8080:8080

Now you can test with BASE_DIR=/your/base-dir/ docker-compose -f docker-compose.yml up. You should be able to access code-server at https://localhost:8443/coder/.

Configure the SSO

The SSO configuration is a bit more involved: you have to configure four locations, /login, /logout, /sso-auth, and the 401 error redirect. All of these go inside the server section of nginx.conf.

  1. /login: this shows the login form so that you can enter your credentials, and the nginx-sso server will set the authentication cookies.
         # Define where to send the user to login and specify how to get back
         location /login {
             auth_request off;
             proxy_pass http://nginx-sso:8082/login?go=https://your.site.com/;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header Host $http_host;
             proxy_http_version 1.1;
             proxy_redirect off;
             proxy_buffering off;
         }
    
  2. /logout: this clears the auth cookies.
         location /logout {
             # Clear the auth cookies; change the go= target to wherever users should land after logout
             proxy_pass http://nginx-sso:8082/logout?go=http://blog.lindenliu.com;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header Host $http_host;
             proxy_http_version 1.1;
             proxy_redirect off;
             proxy_buffering off;
         }
    
  3. /sso-auth: this is the endpoint nginx calls to validate each request.
         location /sso-auth {
             # Do not allow requests from outside
             internal;
             # Access /auth endpoint to query login state
             proxy_pass http://nginx-sso:8082/auth;
             # Do not forward the request body (nginx-sso does not care about it)
             proxy_pass_request_body off;
             proxy_set_header Content-Length "";
             # Set custom information for ACL matching: Each one is available as
             # a field for matching: X-Host = x-host, ...
             proxy_set_header X-Origin-URI $request_uri;
             proxy_set_header X-Host $http_host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto $scheme;
             proxy_set_header X-Application "kibana";
         }
    
  4. The 401 error page:
         # Define where to send the user to login and specify how to get back
         location @error401 {
             auth_request off;
             proxy_pass http://nginx-sso:8082/login?go=$scheme://$http_host$request_uri;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header Host $http_host;
             proxy_http_version 1.1;
             proxy_redirect off;
             proxy_buffering off;
         }
    
  5. Require authentication for the server by default. These directives should be added right under server_name, inside the server block.
     server {
         listen               443 ssl;
         server_name          your.site.com;
         # Redirect the user to the login page when they are not logged in
         error_page           401 = @error401;
         auth_request /sso-auth;
    
         # Automatically renew SSO cookie on request
         auth_request_set $cookie $upstream_http_set_cookie;
         add_header Set-Cookie $cookie;
         ...
     }
    
  6. And remove the ports config from the nginx-sso section of docker-compose.yml.
    Remove the following:
    ports:
      - 8082:8082 # Remove this after you setup nginx.
    

Configure SSL

Add the SSL certificate to nginx.conf under the server_name section:

    server {
        listen               443 ssl;
        server_name          your.site.com;
        # Put your certificate path
        ssl_certificate      /etc/letsencrypt/live/your.site.com/fullchain.pem;
        ssl_certificate_key  /etc/letsencrypt/live/your.site.com/privkey.pem;
        ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers          HIGH:!aNULL:!MD5;
    ...
    }

:warning: If you decide not to use an SSL cert, remove ssl from "listen 443 ssl" in nginx.conf and set secure: false under the cookie section in nginx-sso/data/config.yaml. Be very careful here!

Now you should be able to test your environment with:

$ BASE_DIR=/your/base-dir/ docker-compose -f docker-compose.yml up

ddclient

The dynamic DNS client is for people whose host doesn’t have a static public IP address. For example, a host inside your home network usually won’t get a static IP on your router or modem. A dynamic DNS client binds the domain you own to your latest public IP address, so that you can reach the host over the internet using that domain.

  1. Create a folder to save ddclient.conf:
$ mkdir -p ~/docker-ide/ddclient 
  2. Add your dynamic DNS config to ddclient.conf. I use Google Domains as an example:
verbose=yes
use=web, web=checkip.dyndns.org/, web-skip='IP Address'
ssl=yes
protocol=googledomains
login=your-user-name-from-google-domains
password=the-password
your.site.com
  3. Add ddclient to docker-compose.yml:
   ddclient:
    image: linuxserver/ddclient
    volumes:
      - ${BASE_DIR}/ddclient:/config
    restart: always  

Make it auto start

Once you finish everything, your host becomes a web-based developer machine with vscode as the IDE, accessible at https://your.site.com/coder/. But what if the host restarts? I would not like to manually restart the whole stack, so I run the following command to bring it up in the background (as daemon services):

$ BASE_DIR=/your/base-dir/ docker-compose -f docker-compose.yml up -d

The -d flag tells docker-compose to run the stack in the background rather than as a one-time foreground process. Combined with a restart: always policy on the services (and the Docker daemon starting at boot), the containers will come back up whenever the host restarts, as sketched below.
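
In the compose file above only nginx and ddclient declare restart: always; if you want every container to survive a reboot, add the same policy to the other services as well, for example:

  code-server:
    image: codercom/code-server
    restart: always
    # ...the rest of the code-server config stays the same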

Have fun with your remote IDE and work anywhere the internet is available.

Template available on github.com

If you don’t want to build everything from scratch, you can download the template from my IDE-in-docker project on github.com and change the configurations accordingly.