[Vuetify] Multiple instances of Vue detected

Setting up unit tests today using localVue with Vuetify and @vue/test-utils, I ran into the warning message:

[Vuetify] Multiple instances of Vue detected

After googling around I found a number of issues related to this warning message. There are different ways to solve it, which I will address in this post.

1. Ignore it

It is a warning message, so you can safely ignore it. But that red warning text sure is annoying.

2. Use Vue instead of localVue

You could initialize the component using Vue instead of localVue. That said, the documentation of @vue/test-utils explicitly warns against this.

3. Suppress the warning message

A hack, but in my opinion the least evil of all the options is suppressing the warning message. This can be done very locally in the beforeEach. This is the code I am using, based on this issue comment.

The SilenceWarnHack.js class:

/**
 * This is a hack to suppress the Vuetify Multiple instances of Vue detected warning.
 * See https://github.com/vuetifyjs/vuetify/issues/4068#issuecomment-446988490 for more information.
 */
export class SilenceWarnHack {
  constructor() {
    this.originalLogError = console.error
  }

  enable() {
    console.error = (...args) => {
      if (args[0].includes('[Vuetify]') && args[0].includes('https://github.com/vuetifyjs/vuetify/issues/4068')) return
      this.originalLogError(...args)
    }
  }

  disable() {
    console.error = this.originalLogError
  }
}

And the beforeEach function that is part of the test file:

import { createLocalVue, shallowMount } from '@vue/test-utils'
import { SilenceWarnHack } from '@tst/helpers/SilenceWarnHack'
import Vuetify from 'vuetify'
import VStatsCard from '@/components/common/VStatsCard.vue'

const silenceWarnHack = new SilenceWarnHack()

describe('VStatsCard.vue', () => {
  let localVue = null
  beforeEach(() => {
    silenceWarnHack.enable()
    localVue = createLocalVue()
    localVue.use(Vuetify)
    silenceWarnHack.disable()
  })
})
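To see what the hack does in isolation, here is a standalone sketch of the same mechanism in plain Node (no Vue or Vuetify involved); the recorded array stands in for the real console output:

```javascript
// Stand-in for the real console.error so we can inspect what gets through.
const recorded = []
const realError = console.error
console.error = (...args) => recorded.push(args.join(' '))

// enable(): swallow messages that match both Vuetify markers, forward the rest.
const original = console.error
console.error = (...args) => {
  if (args[0].includes('[Vuetify]') && args[0].includes('vuetifyjs/vuetify/issues/4068')) return
  original(...args)
}

console.error('[Vuetify] Multiple instances of Vue detected: https://github.com/vuetifyjs/vuetify/issues/4068')
console.error('a genuine error')

// disable(): restore the previous console.error.
console.error = realError
```

Only 'a genuine error' ends up in recorded; the Vuetify warning is silently dropped.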

It works like a dream 🙂


Add a global ignore file to git

Had enough of adding your IDE’s files to the .gitignore file? Time to add a global ignore file for git.

Just execute the following commands.

On *nix or with Windows git bash:

git config --global core.excludesfile '~/.gitignore-global'

With Windows cmd:

git config --global core.excludesfile "%USERPROFILE%\.gitignore-global"

Then you need to create the file, for example on Windows:

notepad %USERPROFILE%\.gitignore-global

And add for example the following contents:
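A sketch, covering common IDE and OS files (adjust to your own tooling):

```
# JetBrains IDEs
.idea/
*.iml

# Visual Studio Code
.vscode/

# Operating system files
.DS_Store
Thumbs.db
```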


That’s it, now you only have to do this one time and not for each project individually.

Adding aliases with Babel

Had enough of complicated import and require statements in your Node/JavaScript files? I’m used to developing in Vue, which uses Webpack aliases: by default the @ alias is set up to point to the src directory. I’m working on a project where I am using Babel but not Webpack. It turns out Babel has the cool plugin babel-plugin-module-resolver that also sets up aliases.

Install the plugin.

$ npm i --save-dev babel-plugin-module-resolver

So we have the following directory structure.

├── src
│   ├── MyCoolService
│   │   └── index.js
│   └── index.js
├── test
│   └── MyCoolService.spec.js
├── .babelrc
└── package.json

In the spec file I want to test MyCoolService. Importing is messy.

import MyCoolService from '../src/MyCoolService'

I want to do it the Vue way.

import MyCoolService from '@/MyCoolService'

Can it be done? Oh yes, babel-plugin-module-resolver to the rescue! Create a .babelrc file with the following contents.

// .babelrc
{
  "plugins": [
    ["module-resolver", {
      "root": ["./src"],
      "alias": {
        "@": "./src"
      }
    }]
  ]
}

Now we can import with the @ alias. Alternatively, as the root points to src, you can also import as follows.

import MyCoolService from 'MyCoolService'

How cool is that? Happy coding!

WTF!?! Docker asks for Azure AD credentials when sharing C drive

On my Docker quest I was surprised once again. I wanted to share my drive with my Docker container. I was presented with a login dialog that asked for my Azure AD credentials. Azure AD credentials? Seriously? Do I have Azure AD credentials?

UPDATE 9 November 2018

It turns out you do not have to use an Azure AD account for Shared Drives at all. You can just create a local account (i.e. DockerHost) using the instructions below, points 2 to 4.

The confusing part is that the dialog displayed the username as AzureAD\MyName. This didn’t ring a bell. After fiddling around I figured out that I could use my Office 365 credentials, but the username needed to be changed to my email address in the following format AzureAD\[email protected]. Go figure.

After logging in nothing happened and the share was unchecked again. WTF?!?!

After googling around I found the following post Sharing your C drive with Docker for Windows when using Azure Active Directory by Tom Chantler, which gave me enough information to fix the problem.

I had to make a few changes to the solution he describes in his post:

  1. I do not have Azure AD credentials. So I used my Office 365 credentials, which also uses Azure AD. So instead of the username being AzureAD\MyName, I had to change it to AzureAD\[email protected].
  2. I’m running Windows 10 Professional. It initially would not let me create a local user whose username was not an email address. In the first dialog I needed to select ‘I don’t have this person’s sign-in information’ and in the next step ‘Add a user without a Microsoft account’. Then I could create the user with the same username (MyName) without the AzureAD\ prefix.
  3. Change the account type to Administrator.
  4. Go to the Docker Settings and select Shared Drives and use the local user account created in the previous step to authenticate. It should work now.


After that the C drive was shared. Whooha!

Docker: incorrect username or password

I’m reacquainting myself with Docker. My first steps were dodgy. I immediately ran into problems testing if Docker was installed correctly on my Windows 10 machine, when I wanted to execute the hello-world image.

docker run hello-world

Which threw the following error.

Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/library/hello-world/manifests/latest: unauthorized: incorrect username or password.
See 'docker run --help'.

It turned out the problem was that I was a little too eager when I signed into the Docker Hub in the Docker Settings.


After logging out with the following command it worked.

docker logout

Go figure…

Encoding HTML tags in WordPress

Grrrr, very frustrating. When you add pieces of HTML code to pages and wrap them in code tags in the HTML view, the tags have disappeared when you switch back. To get around this you need to encode the tags using HTML encoding.

The &lt; and &gt; are character entity references for the < and > characters in HTML. It is not possible to use the less than (<) or greater than (>) signs directly in HTML, because the browser will confuse them with tags. To use these characters you can use entity names (&lt; and &gt;) or entity numbers (&#60; and &#62;).

Just do a search & replace to change < to &lt; and > to &gt;.
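If you have many snippets to convert, a couple of lines of JavaScript will do the search & replace for you (encodeAngles is just a hypothetical helper name):

```javascript
// Replace every angle bracket with its character entity reference so the
// WordPress HTML view no longer mistakes the snippet for real markup.
const encodeAngles = (s) => s.replace(/</g, '&lt;').replace(/>/g, '&gt;')

console.log(encodeAngles('<code>if (a < b) swap(a, b)</code>'))
// → &lt;code&gt;if (a &lt; b) swap(a, b)&lt;/code&gt;
```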

Setting up a reverse tunnel to a local machine

Today I needed to access the Google Calendar API. Google Calendar uses OAuth 2.0 for authentication. OAuth 2.0 works by redirecting the user to the OAuth server to be authenticated. After authentication the user is redirected back to the website. This does not work with localhost, as localhost is not accessible from the internet, which is a bit of a bummer if you are developing on a local machine. As I do not want to open my development machine up to the outside world, I needed a different approach. I decided to set up a secure tunnel between my machine and a server that is accessible from the internet. I did this using SSH and Apache Server.


A reverse tunnel is when network traffic is forwarded from one computer to another. In this case we are going to forward traffic that arrives at a server from the internet to a local machine.

What are we going to do?

  • Setup a secure virtual host in Apache Server using reverse proxy
  • Create an SSL/TLS certificate using Let’s Encrypt
  • Setup a reverse SSH tunnel on the client


This post expects the following prerequisites:

  • A server that is accessible from the internet
  • Apache Server up and running
  • SSH Server up and running with password login disabled (certificate login)
  • SSH client up and running that can connect to the server using a certificate
  • A domain or subdomain that points to the server

Setup a secure virtual host in Apache Server using reverse proxy

Part of the magic starts with the server accepting a secure connection from the internet and forwarding this connection to the local machine through the SSH tunnel. We will use Apache to handle the connection from the internet. Setup is pretty straightforward using the reverse proxy module of Apache.

Create a virtual host

Create a virtual host file.

$ sudo nano /etc/apache2/sites-available/google-api.mydomain.com.conf

And add the following contents to it.

<VirtualHost *:80>
    ServerName google-api.mydomain.com
    ProxyPass / http://localhost:8888/
    ProxyPassReverse / http://localhost:8888/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
The interesting lines here are ProxyPass and ProxyPassReverse. These map the context root ‘/’ to the back-end server. This is just a fancy way of saying that https://google-api.mydomain.com is forwarded to http://localhost:8888, i.e. port 8888 of the server. In this case that is the port the SSH Server listens on and forwards through the tunnel to the connected SSH client running on the local machine. More on that in a minute.
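Note that ProxyPass and ProxyPassReverse are provided by Apache’s proxy modules, which may not be enabled yet. Assuming a Debian/Ubuntu-style Apache layout (the post already uses a2ensite, so this fits), enable them as follows; if they are missing, apache2ctl configtest rejects ProxyPass as an invalid command.

```
$ sudo a2enmod proxy proxy_http
$ sudo systemctl reload apache2
```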

What do ProxyPass and ProxyPassReverse actually do and why do you need them both?

ProxyPass forwards a request from the Apache Server to another server. For example it forwards https://my-domain.com/my-page to http://localhost:8080/my-page.

The response from the back-end server can contain redirects (such as a Location header) that point to the back-end server’s location (http://localhost:8080). If this response is sent back to the browser as is, it will not work, as the browser will try to access the link on the user’s local machine.

This is where ProxyPassReverse comes in. ProxyPassReverse rewrites the Location, Content-Location and URI response headers so that they point to the Apache Server. In other words it rewrites (replaces) http://localhost:8080 with https://my-domain.com. The browser will then send requests back to the Apache Server, which forwards them to the other server. Note that ProxyPassReverse does not rewrite links inside HTML, JavaScript or CSS bodies; for that you would need an additional module such as mod_proxy_html.

Add an SSL/TLS certificate

This virtual host uses the unsecured HTTP port. Not very good. Let’s change it to the secure HTTPS port. For this to work we need to set up a certificate. We will use the free SSL/TLS certificate service of Let’s Encrypt.

Install Let’s Encrypt Certbot with the following commands.

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache

Now generate and install the certificate.

$ sudo certbot --apache -d google-api.mydomain.com

Certbot will ask if you want to redirect HTTP to HTTPS. Choose YES!

Test the virtual host

Now we need to enable the Apache virtual host.

$ sudo a2ensite google-api.mydomain.com

Check if the configuration is valid.

$ sudo apache2ctl configtest

If it is valid then we can restart Apache to activate the virtual host.

$ sudo systemctl reload apache2

Cool now the server is accessible to the outside world over internet. Time to setup the SSH tunnel.

Setup a reverse SSH tunnel on the client

Execute the following command on the local machine from a command shell to connect to the SSH Server and forward network traffic from server port 8888 to client port 8080.

Spoiler alert: the following command might not work (see next section).

ssh -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com

The options mean the following:

  • -R: tells the tunnel to answer on the remote side and forward to the client. In other words, the reverse direction, from the server to the client.
  • -l: specifies the user to log in as on the remote machine.
  • -v: verbose output.

Force SSH to use a specific private key

The previous command will only work if there is a config file in the user’s .ssh directory that tells SSH which private key to use for the connection. If it does not exist you will get a login error.

Let’s create the config file.

$ notepad ~/.ssh/config

Add the following contents to the config file.

Host google-api.mydomain.com
    IdentityFile ~/.ssh/your-private.ssh.key

The host must match the server name you use to connect to the server. In this case google-api.mydomain.com.

Alternatively you can use the -i command line option to pass the private key as a command line argument.

$ ssh -i ~/.ssh/your-private.ssh.key -vR 8888:localhost:8080 -l nidkil google-api.mydomain.com


Okay, now all the pieces are in place it is time to test if it works. Open https://google-api.mydomain.com in your browser and if all goes well it should display the web page.

Note: make sure the local web or application server is running on the local machine.


How cool is that?

Setup certificates for automatic renewal

The Let’s Encrypt certificates are only valid for 90 days. They need to be renewed before this period ends. This can be done with the Certbot tool and Crontab. Open up the Crontab file.

$ sudo crontab -e

Add the following line to the Crontab file.

0 2 * * 0 certbot renew && systemctl restart apache2

This is the syntax Crontab uses to specify the minute, hour, day, month and day of the week, followed by the command to be run on that schedule.
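Broken down, the five scheduling fields of the entry above mean the following:

```
# ┌───────── minute       (0)
# │ ┌─────── hour         (2)
# │ │ ┌───── day of month (* = any)
# │ │ │ ┌─── month        (* = any)
# │ │ │ │ ┌─ day of week  (0 = Sunday)
  0 2 * * 0  certbot renew && systemctl restart apache2
```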


So this means that the Certbot auto renew will run every week at 2 o’clock on Sunday morning.

As we are adding the Certbot auto renew to the root’s Crontab there is no need to add sudo to the commands. It would also not work as sudo requires you to manually enter the password.

Debug SSH Server

When you run into problems with SSH it can help to view the SSH logs to figure out what the problem is. To check the SSH log file use the following command.

$ sudo tail -f /var/log/auth.log

The server logs are your best friend when troubleshooting. It may be necessary to turn up the log level temporarily to get more information.

Important: Don’t forget to set it back to normal after things are fixed to avoid privacy problems or excessive use of disk space.

Open the SSH config file.

$ sudo nano /etc/ssh/sshd_config

Look for the following line.

LogLevel INFO

And change it to: VERBOSE.


And restart the SSH Server to activate the change.

$ sudo service ssh restart

Debug SSH client

To debug on the client side all you need to do is run the client with the -v option, which will show verbose output. Change this to -vvv to get even more verbose debugging information.

I hope you enjoyed this and found it useful. Happy hacking!

Installing a module from a git repo with npm

I recently needed to add internationalization (i18n) functionality to an existing GitHub repository. I made the changes and put in a pull request. That pull request has not been processed yet. In the meantime I wanted to use my updated version. After some googling around I found out it is possible to install modules directly from GitHub (git) with npm.

Such a cool feature. It means I can use my updated version until the main repository is updated. It is really easy, just change the line in the package.json file if you already have the module imported.

  "dependencies": {
    "@babel/polyfill": "^7.0.0-rc.1",
    "@feathersjs/feathers": "^3.2.3",
    "@feathersjs/rest-client": "^1.4.5",
    "axios": "^0.18.0",
    "vue": "^2.5.17",
    "vue-analytics": "^5.16.0",
    "vue-i18n": "^8.2.1",
    "vue-meta": "^1.5.5",
    "vue-recaptcha": "^1.1.1",
    "vue-router": "^3.0.1",
    "vue2-flip-countdown": "https://github.com/nidkil/vue2-flip-countdown",
    "vuelidate": "^0.7.4",
    "vuetify": "^1.3.1"
  }

And then run the npm install command. The module will be replaced with the git version.

If you have not installed the module yet just execute the following command.

npm install --save <git repo>

For example:

npm install --save https://github.com/nidkil/vue2-flip-countdown

Is this cool or what?

Installing npm modules globally without sudo

I needed to install pm2 globally and run it as non root. This meant installing it with npm without using the sudo command. However, when you do this npm will throw an error.

“Error: EACCES: permission denied”

The npm documentation does provide a solution that works, which has a caveat.

  1. Make a directory for global installations:
    mkdir ~/.npm-global
  2. Configure npm to use the new directory path:
    npm config set prefix '~/.npm-global'
  3. Open or create a ~/.profile file and add this line:
    export PATH=~/.npm-global/bin:$PATH
  4. Back on the command line, update your system variables:
    source ~/.profile

As a side note, an easier way to execute step 3 is:

echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile

When I initially executed these steps it worked. However, after logging out and in again the pm2 command was no longer available. It turns out that on login the .bash_profile file is loaded instead of the .profile. Actually bash will try loading ~/.bash_profile, ~/.bash_login and ~/.profile, in that order. Once it finds one of them it will not try to load any of the others. After adding the statement to .bash_profile it worked like a dream.

echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bash_profile

Hope this helps someone.