PhantomJS is dead, long live headless browsers

Mathis Hofer

Many frontend projects still rely on PhantomJS to run their JavaScript tests. As of spring 2017, PhantomJS is no longer maintained, and you should migrate your project to an alternative environment. Here is what you can do.

In April 2017, Vitaly Slobodin announced that he is stepping down as developer and maintainer of PhantomJS, the headless WebKit browser. This is mainly due to the fact that Google introduced Headless Chrome with Chrome 59. And since version 55, Firefox also provides a headless mode.

There are several reasons to favor headless Chrome/Firefox over PhantomJS:

  • They are real browsers with broad feature support (PhantomJS uses a very old version of WebKit – and Chrome has switched to Blink in the meantime anyway)
  • They are faster and more stable (PhantomJS has a lot of open issues)
  • They use less memory
  • They can be started non-headless, which allows easier debugging
  • No more goofy PhantomJS binary installation with NPM

In the next sections I’m going to suggest a few alternatives to a PhantomJS setup and elaborate on their advantages and disadvantages.

Alternative 1: Don’t use a browser at all

It may sound a little surprising at first, but you should seriously consider not using a browser at all to execute your JavaScript unit tests. Many React projects already do this with Jest, for example, where the DOM is abstracted with jsdom (a pure-JavaScript implementation of a subset of the DOM and HTML standards). It is possible to use Jest in Angular projects too.

The advantage is that these tests run way faster and can be executed completely within Node. This also means no special setup on the CI server is needed. The downside is that they are not executed in a real browser and you have to mock browser APIs. Additionally, if you have end-to-end tests, you are going to need a real browser setup anyway.
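As a rough sketch of what such a setup can look like (the file is hypothetical, and jsdom is usually Jest’s default test environment anyway), the Jest configuration only has to declare jsdom as its test environment:

    // jest.config.js – hypothetical minimal configuration
    module.exports = {
      // jsdom emulates the DOM, so the tests run entirely within Node
      testEnvironment: 'jsdom'
    };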

Alternative 2: Use headless Chrome (or Firefox)

In a more conventional setup with Karma, the switch from PhantomJS to Chrome is quite easy. Instead of the karma-phantomjs-launcher, you install the karma-chrome-launcher and configure Karma accordingly in your karma.conf.js:
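A minimal sketch of the relevant part (assuming the rest of your Karma configuration stays unchanged):

    // karma.conf.js (excerpt)
    module.exports = function (config) {
      config.set({
        // ...
        // karma-chrome-launcher provides the 'Chrome' preset
        browsers: ['Chrome']
      });
    };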

This will open a Chrome window and execute the tests within the browser. Chances are, you are already using this setup for local debugging.

The karma-chrome-launcher also supports a headless preset which makes working with Headless Chrome dead simple. You only have to change the preset:
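Again a sketch of the relevant line, everything else stays the same:

    // karma.conf.js (excerpt)
    module.exports = function (config) {
      config.set({
        // ...
        browsers: ['ChromeHeadless']
      });
    };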

The launcher assumes that the Chrome binary is available on the system (if it resides in an exotic location, you can point to it with the CHROME_BIN environment variable). The launcher supports Chromium as well with the Chromium and ChromiumHeadless presets (for the latter, make sure you have version >= 2.2.0).

So far so good, but what about running the tests on a CI server? For Travis, there is a Chrome addon that can be included. And Jenkins? You probably don’t want to install Chrome/Chromium (and its dependencies) on every slave. Furthermore, you cannot just install Chrome/Chromium via NPM [1] or download and unpack it [2], since you’d still need to install all the libraries it is dynamically linked to.

[1] yes, there are some shady packages you shouldn’t trust
[2] although puppeteer does exactly this

Alternative 3: Use a cloud service like Sauce Labs

With the karma-sauce-launcher, running tests with various browsers is easy (locally as well as on the CI server). You configure custom launchers for each browser type and toss in the connection credentials as environment variables. Et voilà.
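The following excerpt is a sketch of such a configuration – the browser, platform and test name values are purely illustrative; see the karma-sauce-launcher documentation for the full set of options:

    // karma.conf.js (excerpt) – illustrative values only
    module.exports = function (config) {
      config.set({
        // ...
        sauceLabs: {
          testName: 'My project unit tests'
        },
        customLaunchers: {
          sl_chrome: {
            base: 'SauceLabs',
            browserName: 'chrome',
            platform: 'Linux'
          }
        },
        browsers: ['sl_chrome'],
        reporters: ['progress', 'saucelabs']
        // SAUCE_USERNAME and SAUCE_ACCESS_KEY are read from the environment
      });
    };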

Sauce Labs is a paid service.

Alternative 4: Launch Chrome in a Docker container

A rather naive approach is to run Chrome in a Docker container. For this, we create a Dockerfile that installs Chromium and exposes its remote debugging port:
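A sketch of such a Dockerfile – the base image and flags are an assumption, adapt them to your needs:

    # Dockerfile – sketch of a minimal headless Chromium image
    FROM debian:stretch-slim
    RUN apt-get update && \
        apt-get install -y chromium && \
        rm -rf /var/lib/apt/lists/*
    EXPOSE 9222
    # --no-sandbox is needed when Chromium runs as root inside the container
    ENTRYPOINT ["chromium", "--headless", "--disable-gpu", "--no-sandbox", \
                "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222"]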

You can then build this image and start the container. I’ve created a script that does this, taking a URL as argument:
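A sketch of such a script – the file and image names are made up, and --net=host is used so the container can reach the Karma server running on the host:

    #!/bin/sh
    # start-chromium.sh <url> – build the image and run headless Chromium
    # against the URL passed in by Karma
    set -e
    URL="$1"
    docker build -t chromium-headless .
    exec docker run --rm --net=host chromium-headless "$URL"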

By using the karma-script-launcher, we can configure Karma to use this script to start Chromium. It then executes the tests with Chromium running in a Docker container:
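With the karma-script-launcher you simply list the path to the script as a “browser” (the path shown is a placeholder and must point to the script above):

    // karma.conf.js (excerpt)
    module.exports = function (config) {
      config.set({
        // ...
        // absolute path to the helper script from above
        browsers: ['/path/to/start-chromium.sh']
      });
    };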

While it is pretty promising to be able to use the same image with the exact same browser version locally, there are some issues with this method:

  • Your test setup has to know about the Docker setup and has to be adapted accordingly
  • On the CI server, Docker has to be installed and it must be allowed to do a docker build and docker run within the environment of the job.
  • How do you ensure the image is rebuilt regularly to update to new browser versions?
  • How do you handle concurrent test jobs (container name, debugging port)?
  • How do you clean up containers?

Alternative 5: Dynamic Jenkins slave with the Docker Slaves plugin

So when adopting Docker, why not go all the way and manage the whole Jenkins slave with Docker? This is possible with the Docker Slaves plugin. The plugin enables you to set up build agents using Docker containers by placing a Dockerfile in your source repository and setting up the job to use it (any image is supported). You can also define side containers (for the database etc.), similar to docker-compose.

The advantage of this option is that your frontend test/build setup has to know nothing about Docker.

Alternative 6: Dynamic Jenkins slave on OpenShift

When working with a Kubernetes/OpenShift cluster, the Jenkins Kubernetes plugin is an interesting option.

OpenShift offers a bunch of preconfigured images that work with the Kubernetes plugin (e.g. openshift/jenkins-slave-base-centos7). You can use them as a base image to build an image containing Chromium. Then create an OpenShift build from your Dockerfile with the oc new-build command.
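For example (the repository URL and build name are placeholders), the build can be created directly from a Git repository containing the Dockerfile:

    # create an OpenShift build from the Dockerfile in the given repository
    oc new-build https://example.com/your/jenkins-slave-chromium.git \
      --strategy=docker --name=jenkins-slave-chromium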

Furthermore, a new pod template has to be created (Manage Jenkins > Cloud > Kubernetes), where the URL to the Docker image(-stream) is configured. The pod configuration options are described in the Kubernetes plugin’s documentation.

Now create a Jenkins (Multi-)Pipeline project for your Git repository and configure the label of the template you defined above in the project’s Jenkinsfile:
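A sketch of such a Jenkinsfile, assuming the pod template was given the label chromium and the tests are run with npm:

    // Jenkinsfile – scripted pipeline sketch
    node('chromium') {
      stage('Checkout') {
        checkout scm
      }
      stage('Test') {
        sh 'npm install'
        sh 'npm test'
      }
    }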

What about artifacts? They have to be archived to survive a pod shutdown. Jenkins plugins like JUnit or Cobertura already pull the relevant files out of the container and copy them onto the Jenkins master. Any other artifacts can be archived with archiveArtifacts.

As you may have noticed, the custom ChromiumHeadlessNoSandbox preset is used in this example. This is because Chrome’s sandboxing feature does not work in a Docker container as-is. For our testing context we can live with disabling the sandbox with a custom launcher in karma.conf.js:
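A possible definition of such a launcher, based on the ChromiumHeadless preset:

    // karma.conf.js (excerpt)
    module.exports = function (config) {
      config.set({
        // ...
        browsers: ['ChromiumHeadlessNoSandbox'],
        customLaunchers: {
          ChromiumHeadlessNoSandbox: {
            base: 'ChromiumHeadless',
            // disable the sandbox, which does not work in the container
            flags: ['--no-sandbox']
          }
        }
      });
    };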

Let’s run the job! When analyzing the output, we can observe that the tests are executed in a container using headless Chromium.

Last but not least, the browser has to be kept up to date. This can be achieved by periodically rebuilding the image with another Jenkins job, for example like this:
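One possibility is a small scripted pipeline with a cron trigger that simply restarts the OpenShift build created above – the build name, the schedule and the availability of the oc client on the node are assumptions:

    // Jenkinsfile – sketch of a periodic image rebuild
    properties([pipelineTriggers([cron('H H * * 0')])])

    node {
      stage('Rebuild slave image') {
        // requires the oc client and a valid login on the build node
        sh 'oc start-build jenkins-slave-chromium --follow'
      }
    }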

Conclusion

PhantomJS is a thing of the past, but the good news is that there are compelling alternatives in the headless modes of Chrome and Firefox, although the overall complexity may rise, especially when Docker comes into play.

Please contact us if you have questions regarding a similar scenario.

What are your experiences on the journey replacing PhantomJS?

Image credit: „Valparaíso Puerto“ by Mathis Hofer, 2010, CC BY-SA 3.0
