Docker commands tips & tricks

Just some Docker commands tips & tricks I documented for myself.

Misc

Start interactive shell in Alpine image

Alpine uses the ash shell instead of bash. As a side note, overriding the entrypoint like this also clears the image's default CMD.

$ docker run -it --entrypoint=/bin/ash image-id

Connect to a running container

What if you started a container in the background and you want to see its stdout and stderr output? Attach to the running container.

$ docker attach container-id
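To detach again without stopping the container, use the detach key sequence Ctrl-p followed by Ctrl-q (this works for containers started with -it); a plain Ctrl-C may stop the attached process instead.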

Connect to a running container with an interactive shell

So what if you want to connect to a running container and inspect its contents? Use docker exec to start an interactive shell inside it 🙂

$ docker exec -it container-id /bin/ash

Inspect image, volume or running container

It can be handy to inspect the settings of images, volumes or running containers. To do this use the following commands.

docker image inspect image-id
docker volume inspect volume-id
docker container inspect container-id
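The output is a large JSON document. If you only need a single field, the --format flag with a Go template can pull it out, for example the state of a container:

docker container inspect --format '{{.State.Status}}' container-id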

Build

Traditionally, the Dockerfile is called Dockerfile and is located in the root of the context directory. You can use the -f flag with docker build to point to a Dockerfile anywhere in your file system.

IMPORTANT: the period (.) at the end of the command specifies the build context (here the current directory). It is always required, also when the Dockerfile is not located in the context directory.

$ docker build -t my-label -f /path/to/a/Dockerfile .

Volumes

Volumes make it possible to persist data between container restarts and share data between containers.

Create a volume.

$ docker volume create logs
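The volume starts out empty. To give the listing and tail examples below something to show, you can write a file into it from a throwaway container (the file name access.log is just an example):

$ docker run --rm -v logs:/logs alpine sh -c 'echo hello > /logs/access.log'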

List files in a volume.

$ docker run -it --rm -v logs:/logs alpine ls -l /logs

Display the contents of a file in a volume.

$ docker run -it --rm -v logs:/logs alpine cat /logs/access.log

Interactive access to the files in a volume.

$ docker container run -ti -v logs:/logs alpine sh -c 'cd /logs; exec "${SHELL:-sh}"'

Using tail with the -f option to view changes to the contents of a file in real time.

$ docker run -it --rm -v logs:/logs alpine tail -f -n 25 /logs/access.log

WTF!?! Docker asks for Azure AD credentials when sharing C drive

On my Docker quest I was surprised once again. I wanted to share my drive with my Docker container. I was presented with a login dialog that asked for my Azure AD credentials. Azure AD credentials? Seriously? Do I have Azure AD credentials?

UPDATE 9 November 2018

It turns out you do not have to use an Azure AD account for Shared Drives at all. You can just create a local account (i.e. DockerHost) using the instructions below (points 2 to 4).

The confusing part is that the dialog displayed the username as AzureAD\MyName. This didn’t ring a bell. After fiddling around I figured out that I could use my Office 365 credentials, but the username needed to be changed to my email address in the following format AzureAD\[email protected]. Go figure.

After logging in nothing happened and the share was unchecked again. WTF?!?!

After googling around I found the following post Sharing your C drive with Docker for Windows when using Azure Active Directory by Tom Chantler, which gave me enough information to fix the problem.

I had to make the following changes to the solution he describes in his post:

  1. I do not have Azure AD credentials, so I used my Office 365 credentials, which also use Azure AD. So instead of the username being AzureAD\MyName, I had to change it to AzureAD\[email protected].
  2. I’m running Windows 10 Professional. It initially would not let me create a local user whose username was not an email address. In the first dialog I needed to select ‘I don’t have this person’s sign-in information’ and in the next step ‘Add user without Microsoft account’. Then I could create the user with the same username (MyName) without the AzureAD\ prefix.
  3. Change the account type to Administrator.
  4. Go to the Docker Settings and select Shared Drives and use the local user account created in the previous step to authenticate. It should work now.

 

After that the C drive was shared. Whooha!

Docker: incorrect username or password

I’m reacquainting myself with Docker. My first steps were dodgy. I immediately ran into problems testing if Docker was installed correctly on my Windows 10 machine, when I wanted to execute the hello-world image.

docker run hello-world

Which threw the following error.

Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/library/hello-world/manifests/latest: unauthorized: incorrect username or password.
See 'docker run --help'.

It turned out the problem was that I was a little too eager when I signed into Docker Hub in the Docker Settings.


After logging out with the following command it worked.

docker logout

Go figure…

Encoding HTML tags in WordPress

Grrrr, very frustrating. When you add pieces of HTML code to pages, wrapped in code tags in the HTML view, the tags have disappeared when you switch views and back. To get around this you need to encode the tags using HTML encoding.

&gt; and &lt; are character entity references for the > and < characters in HTML. It is not possible to use the less than (<) or greater than (>) signs literally in HTML, because the browser will confuse them with tags. To use these characters you can use entity names (&lt; and &gt;) or entity numbers (&#60; and &#62;).

Just do a search & replace to change < to &lt; and > to &gt;.
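For example, a line with a literal code tag would be entered like this in the HTML view (the snippet itself is made up purely for illustration):

Before: <code>docker run -it alpine</code>
After:  &lt;code&gt;docker run -it alpine&lt;/code&gt;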

Setting up a reverse tunnel to a local machine

Today I needed to access the Google Calendar API. Google Calendar uses OAuth 2.0 for authentication. OAuth 2.0 works by redirecting the user to the OAuth server to be authenticated. After authentication the user is redirected back to the website. This does not work with localhost, as localhost is not accessible from the internet, which is a bit of a bummer if you are developing on a local machine. As I do not want to open my development machine up to the outside world I needed a different approach. I decided to set up a secure tunnel between my machine and a server that is accessible from the internet. I did this using SSH and Apache Server.

TL;DR

A reverse tunnel forwards network traffic from one computer to another. In this case we are going to forward traffic that arrives at an internet-facing server to a local machine.

What are we going to do?

  • Set up a secure virtual host in Apache Server using a reverse proxy
  • Create an SSL/TLS certificate using Let’s Encrypt
  • Set up a reverse SSH tunnel on the client

Prerequisites

This post expects the following prerequisites:

Server

  • A server that is accessible from the internet
  • Apache Server up and running
  • SSH Server up and running with password login disabled (certificate login)

Client

  • SSH client up and running that can connect to the server using a certificate

DNS and domain

  • A domain or subdomain that points to the server

Set up a secure virtual host in Apache Server using a reverse proxy

Part of the magic starts with the server accepting a secure connection from the internet and forwarding this connection to the local machine through the SSH tunnel. We will use Apache to handle the connection from the internet. Setup is pretty straightforward using the reverse proxy module of Apache.

Create a virtual host

Create a virtual host file.

$ sudo nano /etc/apache2/sites-available/google-api.mydomain.com.conf

And add the following contents to it.

<VirtualHost *:80>
    ServerName google-api.mydomain.com
    ProxyPass / http://127.0.0.1:8888/
    ProxyPassReverse / http://127.0.0.1:8888/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
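Note: the ProxyPass directives only work if Apache's proxy modules are loaded. If they are not enabled yet, something like the following should take care of it:

$ sudo a2enmod proxy proxy_http
$ sudo systemctl restart apache2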

The interesting lines here are ProxyPass and ProxyPassReverse. They map the context root ‘/’ to the back-end server. This is just a fancy way of saying that https://google-api.mydomain.com is forwarded to http://127.0.0.1:8888. In our case the back-end is 127.0.0.1:8888, which is port 8888 on the server itself. That port will be the server end of the SSH tunnel, which forwards the traffic to port 8080 on the local machine running the SSH client. More on that in a minute.

What do ProxyPass and ProxyPassReverse actually do and why do you need them both?

ProxyPass forwards a request from the Apache Server to another server. For example it forwards https://mydomain.com/my-page to http://localhost:8080/my-page.

The response from the back-end server can contain redirects and headers that point to the back-end's own location (http://localhost:8080). If such a response is sent back to the browser as is, it will not work, because the browser will try to access http://localhost:8080 on the user's local machine.

This is where ProxyPassReverse comes in. ProxyPassReverse rewrites the URLs in the HTTP response headers (for example the Location header of a redirect) so that they point to the Apache Server. In other words it replaces http://localhost:8080 with https://mydomain.com. The browser then sends its requests back to the Apache Server, which forwards them to the back-end.
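For example, if the back-end answers with a redirect, the Location header is rewritten on the way out (the /login path is made up purely for illustration):

Response from the back-end: Location: http://localhost:8080/login
Sent to the browser:        Location: https://mydomain.com/login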

Add an SSL/TLS certificate

This virtual host uses the unsecured HTTP port. Not very good. Let's change it to the secure HTTPS port. For this to work we need to set up a certificate. We will use the free SSL/TLS certificate service of Let’s Encrypt.

Install Let’s Encrypt Certbot with the following commands.

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache

Now generate and install the certificate.

$ sudo certbot --apache -d google-api.mydomain.com

Certbot will ask if you want to redirect HTTP to HTTPS. Choose YES!

Test the virtual host

Now we need to enable the Apache virtual host.

$ sudo a2ensite google-api.mydomain.com

Check if the configuration is valid.

$ sudo apache2ctl configtest

If it is valid then we can reload Apache to activate the virtual host.

$ sudo systemctl reload apache2

Cool, now the server is accessible to the outside world over the internet. Time to set up the SSH tunnel.

Set up a reverse SSH tunnel on the client

Execute the following command on the local machine from a command shell to connect to the SSH Server and forward network traffic from server port 8888 to client port 8080.

Spoiler alert: the following command might not work (see next section).

ssh -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com

The options mean the following:

  • -R: sets up remote port forwarding; the tunnel listens on the remote (server) side and forwards connections to the client. Here port 8888 on the server is forwarded to port 8080 on the local machine.
  • -l: specifies the user to log in as on the remote machine.
  • -v: verbose output.
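If you only want the tunnel and no interactive shell on the server, the -N option keeps the connection open without executing a remote command:

$ ssh -N -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com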

Force SSH to use a specific private key

The previous command will only work if there is a config file in the user's .ssh directory that tells SSH which private key to use for the connection. If it does not exist you will get a login error.

Let's create the config file.

$ notepad ~/.ssh/config

Add the following contents to the config file.

Host google-api.mydomain.com
    IdentityFile ~/.ssh/your-private.ssh.key

The host must match the server name you use to connect to the server. In this case google-api.mydomain.com.

Alternatively you can use the -i option to pass the private key as a command-line argument.

$ ssh -i ~/.ssh/your-private.ssh.key -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com

Test

Okay, now all the pieces are in place it is time to test if it works. Open the following URL in your browser and if all goes well it should display the web page.

Note: make sure the local web or application server is running on the local machine.

https://google-api.mydomain.com
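If nothing is listening on port 8080 on the local machine yet, a throwaway server is enough for a first test (assuming Python is available on the machine):

$ python3 -m http.server 8080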

How cool is that?

Set up certificates for automatic renewal

The Let’s Encrypt certificates are only valid for 90 days. They need to be renewed before this period ends. This can be done with the Certbot tool and Crontab. Open up the Crontab file.

$ sudo crontab -e

Add the following line to the Crontab file.

0 2 * * 0 certbot renew && systemctl restart apache2

This is the standard Crontab syntax: five fields specifying the minute, hour, day of month, month and day of week, followed by the command to run on that schedule.


So this entry runs the Certbot renewal every week at 2 o'clock on Sunday morning.

As we are adding the Certbot renewal to root's Crontab there is no need to add sudo to the commands. It would also not work, as sudo requires you to enter the password manually, which a cron job cannot do.
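Before relying on the cron job it is worth checking that renewal actually works; Certbot has a dry-run mode for exactly this:

$ sudo certbot renew --dry-run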

Debug SSH Server

When you run into problems with SSH it can help to view the SSH logs to figure out what the problem is. To check the SSH log file use the following command.

$ sudo tail -f /var/log/auth.log

The server logs are your best friend when troubleshooting. It may be necessary to turn up the log level temporarily to get more information.

Important: Don’t forget to set it back to normal after things are fixed to avoid privacy problems or excessive use of disk space.

Open the SSH config file.

$ sudo nano /etc/ssh/sshd_config

Look for the following line.

LogLevel INFO

And change it to: VERBOSE.

LogLevel VERBOSE

And restart the SSH Server to activate the change.

$ sudo service ssh restart

Debug SSH client

To debug on the client side all you need to do is run the client with the -v option, which will show verbose output. Change this to -vvv to get even more verbose debugging information.

I hope you enjoyed this and found it useful. Happy hacking!

Installing a module from a git repo with npm

I recently needed to add internationalization (i18n) functionality to an existing GitHub repository. I made the changes and put in a pull request. That pull request has not been processed yet. In the meantime I wanted to use my updated version. After some googling around I found out it is possible to install modules directly from GitHub (git) with npm.

Such a cool feature. It means I can use my updated version until the main repository is updated. It is really easy: if the module is already a dependency, just change its entry in the package.json file to point to the git repo.

  ...
  "dependencies": {
    "@babel/polyfill": "^7.0.0-rc.1",
    "@feathersjs/feathers": "^3.2.3",
    "@feathersjs/rest-client": "^1.4.5",
    "axios": "^0.18.0",
    "vue": "^2.5.17",
    "vue-analytics": "^5.16.0",
    "vue-i18n": "^8.2.1",
    "vue-meta": "^1.5.5",
    "vue-recaptcha": "^1.1.1",
    "vue-router": "^3.0.1",
    "vue2-flip-countdown": "https://github.com/nidkil/vue2-flip-countdown",
    "vuelidate": "^0.7.4",
    "vuetify": "^1.3.1"
  },
  ...

And then run the npm install command. The module will be replaced with the git version.

If you have not installed the module yet, just point npm directly at the git repo.

npm install --save <git repo url>

Example:

npm install --save https://github.com/nidkil/vue2-flip-countdown
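If you need a specific branch, tag or commit instead of the default branch, npm accepts a committish after a # (the branch name below is made up purely as an example):

npm install --save https://github.com/nidkil/vue2-flip-countdown#my-feature-branch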

Is this cool or what?

Installing npm modules globally without sudo

I needed to install pm2 globally and run it as a non-root user. This meant installing it with npm without using the sudo command. However, when you do this npm will throw an error.

“Error: EACCES: permission denied”

The npm documentation provides a solution that works, though it has a caveat (more on that below).

  1. Make a directory for global installations:
    mkdir ~/.npm-global
  2. Configure npm to use the new directory path:
    npm config set prefix '~/.npm-global'
  3. Open or create a ~/.profile file and add this line:
    export PATH=~/.npm-global/bin:$PATH
  4. Back on the command line, update your system variables:
    source ~/.profile

As a side note, an easier way to execute step 3 is:

echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile

When I initially executed these steps it worked. However, after logging out and in again the pm2 command was no longer available. It turns out that on login the .bash_profile file is loaded instead of the .profile. Actually bash will try loading ~/.bash_profile, ~/.bash_login and ~/.profile, in that order. Once it finds one of them it will not try and load any of the others. After adding the statement to .bash_profile it worked like a dream.

echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bash_profile
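To check that the setup works, install a package globally and see where it ends up (pm2 used here because that is what prompted all of this); the path should point into ~/.npm-global:

npm install -g pm2
which pm2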

Hope this helps someone.

Override module loaded by require

I was using the module redact-secrets with the Winston logger. This module makes sensitive data like passwords unreadable in logfiles. Very cool and handy module. It makes use of another module, is-secret, that contains a collection of patterns to determine what counts as sensitive data. One piece of sensitive data was missing from is-secret: pass. I could fix it on my side, but I prefer the original GitHub project to be updated so others can also benefit from it. So I submitted an issue on GitHub. While waiting for the fix I needed to continue with my development work. So I used another handy module: override-require. This module overrides the resolution logic of require, so you can use it to override a dependency of a module. I used it as follows to override the is-secret module used by redact-secrets.

const overrideRequire = require('override-require');

// Check if a request needs to be overridden
const isOverride = (request) => {
  return request === 'is-secret';
};

// If isOverride returns true, resolve the request with our own module instead
const resolveRequest = (request) => {
  return require('./overrule/is-secret');
};

// Initialize override-require
const restoreOriginalModuleLoader = overrideRequire(isOverride, resolveRequest);

const { createLogger, format, transports } = require('winston');
// When redact-secrets is loaded, override-require will kick in and load our own is-secret module
const redact = require('redact-secrets')('******');
const fs = require('fs');
const path = require('path');

// Disable override-require and restore the normal module loader
restoreOriginalModuleLoader();
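A quick way to check that the override took effect is to redact an object that contains the field missing from is-secret (pass), using the map() helper of redact-secrets:

console.log(redact.map({ user: 'alice', pass: 'hunter2' }));
// with the patched is-secret in ./overrule this should print: { user: 'alice', pass: '******' }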

 

That’s it. Pretty cool isn’t it?

Webstorm not recognising Vuetify component html tags

Recently I moved from VS Code to Webstorm. What a brilliant IDE. It really improved my development flow. One thing that has been irritating me is that the Vuetify component html tags are not recognised, which results in a s**t load of warnings about unrecognised html tags.

After Googling around I found an issue on the Vuetify GitHub Issues that provides a simple workaround. All you need to do is create a file somewhere in the project that describes the Vuetify components. You don’t need to import the file. It just has to be accessible to Webstorm so that it can be analysed.

There is a fix in the works for the Vuetify api-generator that provides this file out of the box. Just check out the Javascript and Typescript files.

I placed the Javascript file in my plugin directory naming it vuetify-fake-components-for-webstorm.js. After that all the warnings of unrecognised html tags disappeared like magic. Wooho!

Using Git with SSH key on Windows

Just a quick write-up as a reminder of how to generate an SSH key on Windows and use it with Git. Git comes with OpenSSH, which includes ssh-keygen. Of course you can use PuTTY to generate SSH keys, but why not do it the quick and easy way with Git? If you use PuTTY you need to convert the generated key from PuTTY format to the standard OpenSSH format.

Okay, let's get started. Follow along with the following steps.

1. Make OpenSSH utilities accessible

Add the Git directory containing the OpenSSH command line utilities to the Windows Path. They are installed in the following location.

C:\Program Files\Git\usr\bin

You can do this the traditional way using the Windows Control Panel by executing the following steps.

  • Press the Windows key to open up the Start Menu
  • Search for “advanced system settings” (just start typing)

Alternatively you can browse through the Control Panel.

  • Select System and Security
  • System
  • Click on the Advanced system settings hyperlink in the left hand pane

Or do it the easy way if you have Rapid Environment Editor (RapidEE) installed. Use the following command line from a shell with administrative privileges to add it to the system-wide path (for all users), or leave out the -M flag to add the variable to the user path.

rapidee -A -M Path "C:\Program Files\Git\usr\bin"

2. Generate the SSH key pair

NOTE: If you want Git to work with the generated SSH key pair without any further configuration then accept the default location and name of the SSH key pair.

An SSH key pair consists of a private and a public key. The private key should be stored safely; the public key can be shared with others. Don't forget to set a passphrase for your private key. The passphrase prevents unauthorised usage of the private key by protecting the key itself with a password. Although the directory holding the private keys should be inaccessible to other users, the root user of the system, or anyone else who can access the private key, can copy and use it if it is not protected by a passphrase. To add a passphrase to a key just type it when prompted during the key generation process. Keep in mind that the passphrase must be at least 5 characters long. A good passphrase should be at least 10 characters long and consist of random upper and lower case letters, numbers and symbols.

Generate the SSH key pair with the following command.

ssh-keygen -t rsa -b 4096 -C "nidkil-git-key"

The -t option is the key type, -b is the key length (the higher, the more secure) and -C is a comment that is added to the key, which makes it easier to identify. The comment is added to the end of the key. Don't believe me? Open the public key file and you will see your comment. Handy isn't it?

3. Use the SSH public key

If you did not change the default location and name, the SSH key pair can be found in the .ssh directory in the user's home directory.
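What you do with the public key depends on your Git host. For GitHub, for example, you paste the contents of the public key file into the SSH keys section of your account settings and can then test the connection (GitHub is just an example here, other hosts work in a similar way):

cat ~/.ssh/id_rsa.pub
ssh -T git@github.com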

That’s all folks. Have fun.