Docker on Windows tips & tricks

Just some tips & tricks on Docker on Windows I documented for myself.

Location of Docker daemon logs

The Docker daemon logs can be found in the following location.

C:\ProgramData\Docker

By default C:\ProgramData is hidden in Explorer, you need to make it visible in the options (File -> Change folder and search options, select tab View, select the option ‘Show hidden files, folders and drives’ under ‘Hidden files and folders’).



Docker commands tips & tricks

Just some Docker commands tips & tricks I documented for myself.

Misc

Start interactive shell in Alpine image

Alpine uses the ash shell instead of the bash shell. Note that overriding the entrypoint this way also clears the default CMD of the image.

$ docker run -it --entrypoint=/bin/ash image-id

Connect to a running container

What if you started a container in the background and you want to see its stdout and stderr output? Connect to the running container.

$ docker attach container-id

Connect to a running container with an interactive shell

So what if you want to connect to a running container and inspect its contents? Just use exec to start an interactive shell 🙂

$ docker exec -it container-id /bin/ash

Inspect image, volume or running container

It can be handy to inspect the settings of images, volumes or running containers. To do this use the following commands.

docker image inspect image-id
docker volume inspect volume-id
docker container inspect container-id

Build

Traditionally, the Dockerfile is called Dockerfile and located in the root of the context directory. You can use the -f flag with docker build to point to a Dockerfile anywhere in your file system.

IMPORTANT when pointing to a Dockerfile not located in the context directory, you must still add the build context at the end of the command; the period (.) means the current directory.

$ docker build -t my-label -f /path/to/a/Dockerfile .

Volumes

Volumes make it possible to persist data between container restarts and share data between containers.

Create a volume.

$ docker volume create logs

List files in a volume.

$ docker run -it --rm -v logs:/logs alpine ls -l /logs

Display the contents of a file in a volume.

$ docker run -it --rm -v logs:/logs alpine cat /logs/access.log

Interactive access to the files in a volume.

$ docker container run -ti -v logs:/logs alpine sh -c 'cd /logs; exec "${SHELL:-sh}"'
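The ${SHELL:-sh} part is plain POSIX parameter expansion: it expands to the value of SHELL if that variable is set, and falls back to sh otherwise, which is handy in Alpine where bash is absent. A quick local check of the fallback:

```shell
# ${VAR:-default} substitutes a default when VAR is unset or empty.
unset SHELL
echo "${SHELL:-sh}"   # prints: sh
```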

Use tail with the -f option to view changes to the contents of a file in real time.

$ docker run -it --rm -v logs:/logs alpine tail -f -n 25 /logs/access.log

WTF!?! Docker asks for Azure AD credentials when sharing C drive

On my Docker quest I was surprised once again. I wanted to share my drive with my Docker container. I was presented with a login dialog that asked for my Azure AD credentials. Azure AD credentials? Seriously? Do I have Azure AD credentials?

UPDATE 9 November 2018

It turns out you do not have to use an Azure AD account for Shared Drives at all. You can just create a local account (e.g. DockerHost) using points 2 to 4 of the instructions below.

The confusing part is that the dialog displayed the username as AzureAD\MyName. This didn’t ring a bell. After fiddling around I figured out that I could use my Office 365 credentials, but the username needed to be changed to my email address in the following format AzureAD\[email protected]. Go figure.

After logging in nothing happened and the share was unchecked again. WTF?!?!

After googling around I found the following post Sharing your C drive with Docker for Windows when using Azure Active Directory by Tom Chantler, which gave me enough information to fix the problem.

I had to make the following changes to the solution he describes in his post:

  1. I do not have Azure AD credentials, so I used my Office 365 credentials, which also use Azure AD. So instead of the username being AzureAD\MyName, I had to change it to AzureAD\[email protected].
  2. I’m running Windows 10 Professional. It initially would not let me create a local user whose username was not an email address. In the first dialog I needed to select ‘I don’t have this person’s sign-in information’ and in the next step ‘Add user without Microsoft account’. Then I could create the user with the same username (MyName) without the AzureAD\ prefix.
  3. Change the account type to Administrator.
  4. Go to the Docker Settings and select Shared Drives and use the local user account created in the previous step to authenticate. It should work now.


After that the C drive was shared. Whooha!

Docker: incorrect username or password

I’m reacquainting myself with Docker. My first steps were dodgy. I immediately ran into problems testing whether Docker was installed correctly on my Windows 10 machine, when I wanted to execute the hello-world image.

docker run hello-world

Which threw the following error.

Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/library/hello-world/manifests/latest: unauthorized: incorrect username or password.
See 'docker run --help'.

It turned out the problem was that I was a little too eager when I signed in to the Docker Hub in the Docker Settings.


After logging out with the following command it worked.

docker logout

Go figure…

Encoding HTML tags in WordPress

Grrrr, very frustrating. When you add pieces of HTML code to pages and insert code tags in the HTML view, the tags have disappeared when you switch back. To get around this you need to encode the tags using HTML encoding.

The &lt; and &gt; are character entity references for the < and > characters in HTML. It is not possible to use a literal less than (<) or greater than (>) sign in HTML, because the browser will mix them up with tags. To use these characters you can use entity names (&lt; and &gt;) or entity numbers (&#60; and &#62;).

Just do a search & replace to change < to &lt; and > to &gt;.
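If the snippets live in files, the search & replace can be scripted; a minimal sketch with sed. Note that & has to be encoded first, otherwise the & inside the freshly inserted &lt; and &gt; would be encoded again:

```shell
# Encode &, < and > as HTML entities; & must be replaced first.
# In a sed replacement, \& stands for a literal ampersand.
printf '<code>a & b</code>' | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'
# prints: &lt;code&gt;a &amp; b&lt;/code&gt;
```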

Setting up a reverse tunnel to a local machine

Today I needed to access the Google Calendar API. Google Calendar uses OAuth 2.0 for authentication. OAuth 2.0 works by redirecting the user to the OAuth server to be authenticated. After authentication the user is redirected back to the website. This means that it does not work with localhost, as localhost is not accessible from the internet, which is a bit of a bummer if you are developing on a local machine. As I do not want to open my development machine up to the outside world I needed a different approach. I decided to set up a secure tunnel between my machine and a server that is accessible from the internet. I did this using SSH and Apache Server.

TL;DR

A reverse tunnel forwards network traffic from one computer to another. In this case we are going to forward traffic that reaches a server from the internet to a local machine.

What are we going to do?

  • Setup a secure virtual host in Apache Server using reverse proxy
  • Create an SSL/TLS certificate using Let’s Encrypt
  • Setup a reverse SSH tunnel on the client

Prerequisites

This post expects the following prerequisites:

Server

  • A server that is accessible from the internet
  • Apache Server up and running
  • SSH Server up and running with password login disabled (certificate login)

Client

  • SSH client up and running that can connect to the server using a certificate

DNS and domain

  • A domain or subdomain that points to the server

Setup a secure virtual host in Apache Server using reverse proxy

Part of the magic starts with the server accepting a secure connection from the internet and forwarding this connection to the local machine through the SSH tunnel. We will use Apache to handle the connection from the internet. Setup is pretty straightforward using the reverse proxy module of Apache.

Create a virtual host

Create a virtual host file.

$ sudo nano /etc/apache2/sites-available/google-api.mydomain.com.conf

And add the following contents to it.

<VirtualHost *:80>
    ServerName google-api.mydomain.com
    ProxyPass / http://127.0.0.1:8888/
    ProxyPassReverse / http://127.0.0.1:8888/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

The interesting lines here are ProxyPass and ProxyPassReverse. They map the context root ‘/’ to the back-end server. This is just a fancy way of saying that https://google-api.mydomain.com is forwarded to http://127.0.0.1:8888. In our case the back-end server is 127.0.0.1:8888, which is port 8888 of the server. In this case it will be the SSH Server forwarding port 8888 to the connected SSH client running on the local machine. More on that in a minute.

What do ProxyPass and ProxyPassReverse actually do, and why do you need them both?

ProxyPass forwards a request from the Apache Server to another server. For example, it forwards https://my-domain.com/my-page to http://localhost:8080/my-page.

The response from the back-end server can contain redirect headers that point to the back-end’s location (http://localhost:8080). If this response is sent back to the browser as is, it will not work, as the browser would try to access that location on the user’s local machine.

This is where ProxyPassReverse comes in. ProxyPassReverse rewrites the URL in HTTP response headers such as Location and Content-Location. In other words it replaces http://localhost:8080 with https://my-domain.com, so redirects point back to the Apache Server, which in turn forwards the requests to the other server. Note that it does not touch links inside the HTML body itself; rewriting those requires an extra module such as mod_proxy_html.
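Concretely, with the virtual host above, a redirect issued by the back-end would be rewritten like this (the /login path is just a hypothetical example):

```
# Response header as sent by the back-end on the local machine:
Location: http://127.0.0.1:8888/login

# The same header after ProxyPassReverse, as seen by the browser:
Location: https://google-api.mydomain.com/login
```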

Add a SSL/TLS certificate

This virtual host uses the unsecured HTTP port. Not very good. Let’s change it to the secure HTTPS port. For this to work we need to set up a certificate. We will use the free SSL/TLS certificate service of Let’s Encrypt.

Install Let’s Encrypt Certbot with the following commands.

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache

Now generate and install the certificate.

$ sudo certbot --apache -d google-api.mydomain.com

Certbot will ask if you want to redirect HTTP to HTTPS. Choose YES!

Test the virtual host

Now we need to enable the Apache virtual host.

$ sudo a2ensite google-api.mydomain.com

Check if the configuration is valid.

$ sudo apache2ctl configtest

If it is valid then we can restart Apache to activate the virtual host.

$ sudo systemctl reload apache2

Cool, now the server is accessible to the outside world over the internet. Time to set up the SSH tunnel.

Setup a reverse SSH tunnel on the client

Execute the following command on the local machine from a command shell to connect to the SSH Server and forward network traffic from server port 8888 to client port 8080.

Spoiler alert: the following command might not work (see next section).

ssh -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com

The options mean the following:

  • -R: tells the tunnel to answer on the remote side and forward to the client. In other words, the reverse direction: from the server to the client.
  • -l: specifies the user to login as on the remote machine.
  • -v: Verbose output.

Force SSH to use a specific private key

The previous command will only work if there is a config file in the user’s .ssh directory that tells SSH which private key to use for the connection. If it does not exist you will get a login error.

Let’s create the config file.

$ notepad ~/.ssh/config

Add the following contents to the config file.

Host google-api.mydomain.com
    IdentityFile ~/.ssh/your-private.ssh.key

The host must match the server name you use to connect to the server. In this case google-api.mydomain.com.

Alternatively you can use the -i option to pass the private key as a command line argument.

$ ssh -i ~/.ssh/your-private.ssh.key -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com
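If you use the tunnel a lot, the same config file can also carry the user name and the port forward; Host, User, IdentityFile and RemoteForward are all standard ssh_config options. A sketch using the placeholders from above:

```
Host google-api.mydomain.com
    User nidkil
    IdentityFile ~/.ssh/your-private.ssh.key
    RemoteForward 8888 localhost:8080
```

With this in place the whole command shrinks to ssh google-api.mydomain.com.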

Test

Okay, now all the pieces are in place it is time to test whether it works. Open the following URL in your browser and if all goes well it should display the web page.

Note: make sure the local web or application server is running on the local machine.

https://google-api.mydomain.com

How cool is that?

Setup certificates for automatic renewal

The Let’s Encrypt certificates are only valid for 90 days. They need to be renewed before this period ends. This can be done with the Certbot tool and Crontab. Open up the Crontab file.

$ sudo crontab -e

Add the following line to the Crontab file.

0 2 * * 0 certbot renew && systemctl restart apache2

This is the syntax Crontab uses to specify minute, hour, day of month, month and day of week, followed by the command to be run at that interval.

So this means that the Certbot renewal will run every Sunday morning at 2 o’clock.
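Broken down field by field, the entry reads like this (a commented sketch of standard crontab syntax, not something extra to paste in):

```
# ┌───────── minute (0)
# │ ┌─────── hour (2, i.e. 02:00)
# │ │ ┌───── day of month (* = any)
# │ │ │ ┌─── month (* = any)
# │ │ │ │ ┌─ day of week (0 = Sunday)
  0 2 * * 0  certbot renew && systemctl restart apache2
```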

As we are adding the Certbot renewal to the root’s Crontab, there is no need to prefix the commands with sudo. It would also not work, as sudo requires you to enter the password manually.

Debug SSH Server

When you run into problems with SSH, it can help to view the SSH logs to figure out what the problem is. To check the SSH log file use the following command.

$ sudo tail -f /var/log/auth.log

The server logs are your best friend when troubleshooting. It may be necessary to turn up the log level temporarily to get more information.

Important: don’t forget to set it back to normal after things are fixed, to avoid privacy problems or excessive use of disk space.

Open the SSH config file.

$ sudo nano /etc/ssh/sshd_config

Look for the following line.

LogLevel INFO

And change it to: VERBOSE.

LogLevel VERBOSE

And restart the SSH Server to activate the change.

$ sudo service ssh restart
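The edit can also be scripted with sed. A hedged sketch, demonstrated on a throwaway copy instead of the real /etc/ssh/sshd_config; if you point it at the real file, keep a backup first:

```shell
# Work on a temporary sshd_config-style file.
cfg=$(mktemp)
printf 'Port 22\nLogLevel INFO\n' > "$cfg"

# Flip the log level in place (GNU sed -i).
sed -i 's/^LogLevel INFO$/LogLevel VERBOSE/' "$cfg"

grep '^LogLevel' "$cfg"   # prints: LogLevel VERBOSE
rm -f "$cfg"
```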

Debug SSH client

To debug on the client side all you need to do is run the client with the -v option, which will show verbose output. Change this to -vvv to get even more verbose debugging information.

I hope you enjoyed this and found it useful. Happy hacking!

Installing a module from a git repo with npm

I recently needed to add internationalization (i18n) functionality to an existing GitHub repository. I made the changes and put in a pull request. That pull request has not been processed yet. In the meantime I wanted to use my updated version. After some googling around I found out it is possible to install modules directly from GitHub (git) with npm.

Such a cool feature. It means I can use my updated version until the main repository is updated. It is really easy, just change the line in the package.json file if you already have the module imported.

  ...
  "dependencies": {
    "@babel/polyfill": "^7.0.0-rc.1",
    "@feathersjs/feathers": "^3.2.3",
    "@feathersjs/rest-client": "^1.4.5",
    "axios": "^0.18.0",
    "vue": "^2.5.17",
    "vue-analytics": "^5.16.0",
    "vue-i18n": "^8.2.1",
    "vue-meta": "^1.5.5",
    "vue-recaptcha": "^1.1.1",
    "vue-router": "^3.0.1",
    "vue2-flip-countdown": "https://github.com/nidkil/vue2-flip-countdown",
    "vuelidate": "^0.7.4",
    "vuetify": "^1.3.1"
  },
  ...

And then run the npm install command. The module will be replaced with the git version.

If you have not installed the module yet just execute the following command.

npm install --save <git repo>

Example:

npm install --save https://github.com/nidkil/vue2-flip-countdown
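npm also understands the github:user/repo shorthand and a #committish suffix (a branch, tag or commit hash), which lets you pin the dependency so a later push to the fork cannot break your build. A sketch in package.json form (the #my-branch suffix is a placeholder):

```
"vue2-flip-countdown": "github:nidkil/vue2-flip-countdown#my-branch"
```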

Is this cool or what?