Angular Performances Part 3 - Profiling and runtime performances

This is the third part of this series (check out the first part and the second one if you missed them), and this blog post is about how you can profile the runtime performance of your Angular application and how to improve it. If you are the lucky owner of our ebook, you can already check the other parts by downloading the latest ebook release.

Now that we have talked about first load and reload, we can start talking about runtime performances. But if you run into a performance issue, before trying any of the following tips, you should start by measuring and profiling the application.

Browsers nowadays offer nice developer tools, especially Chrome, which lets you record your application and analyze its behavior in quite some detail. You can even simulate certain conditions, like a slower processor or a 3G network. You can also dive into the call hierarchy and see how much time each function call consumes.

Profiling

But Angular also offers a precious tool: ng.profiler. It’s not very well-known, but it can be handy as it lets you measure how long a change detection run takes in the current page.

You can then try to apply one of the tips we’ll see, and measure again to see if there is any improvement.

In your main.ts file, replace the application bootstrapping code with the following:

import { ApplicationRef } from '@angular/core';
import { enableDebugTools } from '@angular/platform-browser';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic().bootstrapModule(AppModule)
  .then(moduleRef => {
    const applicationRef = moduleRef.injector.get(ApplicationRef);
    const componentRef = applicationRef.components[0];
    // allows running `ng.profiler.timeChangeDetection();` in the console
    enableDebugTools(componentRef);
  })
  .catch(err => console.error(err));

Then go to the page you want to profile, open your browser console, and execute the following instruction:

> ng.profiler.timeChangeDetection()
ran 489 change detection cycles
1.02 ms per check

You can see how many change detection cycles it ran (it runs at least 5 cycles, or for at least 500ms) and the average time per cycle. This is a super useful metric, as many of the tricks we are going to show you act directly on the change detection system. You’ll be able to try them, run the profiler again, and compare the results.

You can also record the CPU profile during these checks to analyze them with ng.profiler.timeChangeDetection({ record: true }).

The Angular team recommends keeping the time per check below 3ms, to leave enough time for the application logic, the UI updates, and the browser’s rendering pipeline to fit within the 16 millisecond frame budget (assuming a 60 FPS target frame rate).

Let’s discover these tips!

Runtime performances

Angular’s magic relies on its change detection mechanism: the framework automatically detects changes in the state of the application and updates the DOM accordingly. So, as a general rule of thumb, you’ll want to help Angular by limiting change detection triggers and the amount of DOM to update/create/delete.

To be honest, most applications will be fine, even under heavy load. But some of us will have to recode Excel in the browser for their enterprise, or will have a component with a tree displaying 10,000 customers, or another unreasonable thing to do in a browser. These things are tricky, whatever framework you use. They tend to update a lot of DOM, and have to check a lot of components. A few of the following tricks can help. And a few of these tricks are really mandatory, like the first one.

enableProdMode

When you are in development mode (the default), Angular runs change detection twice every time there is a change. This is a safety net to make sure you are not doing strange things, like updating data without following the one-way data flow. If you break the rules, Angular will warn you in development by throwing an exception that forces you to fix your code. But if you are not careful, you will deploy the application in this mode, change detection will still run twice, and your application will be slower.

To go into production mode, you need to call a function provided by Angular: enableProdMode. This method disables the double check, and also makes the generated DOM “lighter” (fewer attributes on the elements, as some are only added to help debug the application).

As usual, the CLI has you covered: the call to enableProdMode is already present in the generated application, wrapped in an environment check. If you build with the production environment, your app will be in production mode.
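The generated main.ts looks roughly like this (the environment file layout shown here is the CLI default):

```typescript
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

// `environment.production` is true only when building with the
// production environment (the CLI swaps in environment.prod.ts),
// so the double change detection check is disabled in production only.
if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule)
  .catch(err => console.log(err));
```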

trackBy in ngFor

This is a simple tip that can really speed things up on *ngFor: add a trackBy. To understand why, let me explain how modern JS frameworks (at least all major ones) handle collections. When you have a collection of 3 ponies and want to display them in a list, you’ll write something like:

<ul>
  <li *ngFor="let pony of ponies">{{ pony.name }}</li>
</ul>

When you add a new pony, Angular will add a DOM node in the proper position. If you update the name of one of the ponies, Angular will change just the text content of the right li.

How does it do that? By keeping track of which DOM node references which object reference. Angular will have an internal representation looking like:

node li 1 -> pony #e435 // { id: 3, color: blue }
node li 2 -> pony #8fa4 // { id: 4, color: red }

This works great, and if you replace an object with another one, Angular will destroy the node and build a new one.

node li 1 (recreated) -> pony #c1ea // { id: 1, color: green }
node li 2 -> pony #8fa4 // { id: 4, color: red }

If the whole collection is updated with new objects, the complete DOM list will be destroyed and recreated. Which is fine, except when you just refresh a list with almost the same content: in that case, Angular destroys the complete node list and recreates it, even if there is no need to. For example, when you fetch the same results from the server, you will have the same content, but different references as your collection will have been recreated.
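To see why, here is a small sketch (plain TypeScript, with a made-up PonyModel) of what happens when a list is refetched:

```typescript
// Refetching "the same" list produces equal content but brand new
// object references. Angular's default diffing compares references,
// so every row would be destroyed and recreated.
interface PonyModel {
  id: number;
  name: string;
}

// Pretend this is an HTTP call returning the same data every time.
const fetchPonies = (): PonyModel[] => [
  { id: 1, name: 'Rainbow Dash' },
  { id: 2, name: 'Pinkie Pie' },
];

const first = fetchPonies();
const second = fetchPonies();

console.log(first[0].id === second[0].id); // true: same content
console.log(first[0] === second[0]); // false: different references
```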

The solution for this use-case is to help Angular track the objects, not by their references, but by something that you know will identify the object, typically an ID.

For this, we use trackBy, which expects a method:

<ul>
  <li *ngFor="let pony of ponies; trackBy: ponyById">{{ pony.name }}</li>
</ul>

with the method defined in the component:

ponyById(index: number, pony: PonyModel): number {
  return pony.id;
}

As you can see, this method receives the current index and the current entity, allowing you to be creative (or simply track by index, but that’s not recommended).

With this trackBy, Angular will only recreate a DOM node if the id of the pony changes. On a very big list which doesn’t change much, it can save a ton of DOM deletions/creations. In any case, it’s quite cheap to implement and has no downsides, so don’t hesitate to use it. It’s also a requirement if you want to use animations. If a DOM element’s style is supposed to be animated (by transitioning smoothly from the previous value to the new one), and the list of ponies is replaced by a new one when refreshed, then trackBy is a must: without it, the animation will never happen, because the style of the element never changes. Instead, it’s the element itself that is replaced by Angular.

We have more tips for you, but you’ll have to wait until next week to read about them!

If you enjoyed this blog post, you may want to dig deeper with our ebook, and/or with a complete exercise that we added in our online training. The exercise takes an application and walks you through what we would do to optimize it, measuring the benefits of each steps, showing you how to avoid the common traps, how to test the optimized application, etc. Check it out if you want to learn more!

See you soon for part 4!


The Gradle Kotlin DSL is now documented

More than 2 years ago, I wrote

Kotlin has also been announced as the future language of choice for Gradle, and I can’t wait to be able to use it for my Gradle builds.

It turns out I had to wait quite a bit. Using the Gradle Kotlin DSL has been possible for some time now, but it was a bit of a frustrating experience due to the lack of documentation, to the point that I wrote a migration guide a few months ago.

As promised by the Gradle team, a much better, more complete, official migration guide now exists.

The huge, fantastic Gradle user guide, however, still only shows Groovy samples. But not for long. I’ve spent some time, along with other folks, translating all the samples of the user guide from Groovy to Kotlin. The result is already available in the Gradle nightly.

So you have no excuse anymore. Try the Kotlin DSL. It works, it is quite close to the Groovy DSL, but with less black magic involved, and it does allow auto-completion and navigation to the sources in IntelliJ.

Translating the samples has been a great experience. And it helped find and fix a few issues, too. Contributing to an open-source project you like and respect is always gratifying. You get the feeling that what you’re doing matters. Gradle folks have been nothing but kind, understanding, helpful, grateful… and demanding.

I didn’t just decide to contribute though. That’s always intimidating: where to start? How to get help? Will I help, or will I be a burden for the maintainers?

I contributed because the Gradle team asked me to: first after I wrote my migration guide, and then when they opened this epic issue, asking for help from contributors and providing detailed instructions and examples on how to accomplish the task.

I wish more big open-source projects did that. Tagging issues with “ideal-for-contribution” is also nice. What might seem like grunt work to project maintainers or experienced contributors is an interesting challenge and learning experience for casual, less-experienced developers who are willing to help.

So, if you’re an open-source project maintainer and you read this, please make it easy to start contributing to your project. Ask for help. And communicate about it on public channels (blogs, tweets, etc.). I’m apparently not the only one to hold this opinion, so here is some more food for thought.


Angular Performances Part 2 - Reload

This is the second part of this series (check out the first post if you missed it), and this blog post is about how you can speed up the reloading of an Angular application. In future posts, we’ll talk about how to profile your running application, and how to improve runtime performance. If you are the lucky owner of our ebook, you can already check the other parts by downloading the latest ebook release.

So, let’s assume a user visits your application for the first time. How do you make sure that, when they come back later, the application starts even faster?

Caching

You should always cache the assets of your application (images, styles, JS bundles…). This is done by configuring your server to leverage the Cache-Control and ETag headers. Every server on the market can do this, or you can use a CDN for this purpose. If you do, the next time your users open the application, the browser won’t have to send requests to fetch the assets, because it will already have them!

But a cache is always tricky: you need to have a way to tell the browser “hey, I deployed a new version in production, please fetch the new assets!”.

The easiest way to do this is to have a different name for the asset you updated. That means instead of deploying an asset named main.js, you deploy main.xxxx.js, where xxxx is a unique identifier. This technique is called cache busting. And, again, the CLI is there for you: in production mode, it will name all your assets with a unique hash derived from the content of the file. It also automatically updates the references in index.html to reflect the unique names: the sources of the scripts, the images, the stylesheets, etc.

If you use the CLI, you can safely deploy a new version and cache everything, except the index.html (as this will contain the links to the fresh assets deployed)!

Service Worker

If you want to go a step further, you can use service workers.

Service Workers are an API that most modern browsers support; to simplify, they act like a proxy in the browser. You can register a service worker in your application, and every GET request will then go through it, letting you decide if you really want to fetch the requested resource or serve it from a cache. You can then cache everything, even your index.html, which guarantees the fastest startup time (no request to the server).

You may be wondering how a new version can be deployed if everything is cached, but you’re covered: the service worker will serve from the cache and then check if a new version is available. It can then force the refresh, or ask the user whether they want to update immediately or later.

It even lets your application work offline, as everything is cached!

Angular offers a dedicated package called @angular/service-worker. It’s a small package, but filled with cool features. Did you know that if you add it to your Angular CLI application and turn a flag on ("serviceWorker": true in angular.json), the CLI will automatically generate everything necessary to cache your static assets by default? And it will only download what has changed when you deploy a new version, allowing a blazing fast application start!

But it can go even further, letting you cache external resources (like fonts or icons from a CDN…), handle route redirection, and even cache dynamic content (like calls to your API), with different possible strategies (always fetch for fresh data, or always serve from cache for speed…). The package also offers a module called ServiceWorkerModule that you can use in your application to react to push events and notifications!
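For reference, the generated configuration file (ngsw-config.json) looks something like this; the exact content may differ depending on your CLI version:

```json
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/favicon.ico", "/index.html", "/*.css", "/*.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": ["/assets/**"]
      }
    }
  ]
}
```

The "app" group prefetches the application shell, while the "assets" group is fetched lazily and refreshed when a new version is deployed.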

This is quite easy to set up, and a quick win for your reload time. It’s also one of the steps to build a Progressive Web App, and to score a perfect 100% on Lighthouse, so you should check it out.


See you soon for part 3!


What's new in Angular CLI 6.2?

Angular CLI 6.2.0 is out (in fact we even have a 6.2.1 available)!

If you want to upgrade to 6.2.1 without pain (or to any other version, by the way), I have created a Github project to help: angular-cli-diff. Choose the version you’re currently using (6.0.0 for example), and the target version (6.2.1 for example), and it gives you a diff of all files created by the CLI: angular-cli-diff/compare/6.0.0…6.2.1. You have no excuse for staying behind anymore!

Let’s see what we’ve got!

Linter

The first thing is not really a new feature, but rather a bugfix, but it was annoying me, so I’m glad it landed!

With the previous CLI versions, if you ran ng lint, the linter was executed on every application in the project (usually your main application and the e2e application). Now, TSLint comes with a super awesome option called --fix, which automatically fixes some of the issues it finds (I love it). And you can use it with the CLI! But running ng lint --fix failed with the previous versions, because the CLI couldn’t figure out if you wanted to run it on your main application or the e2e application… So you had to run ng lint app --fix and then ng lint app-e2e --fix.

This is now solved, and if you run ng lint --fix, the task will be executed on all your applications!

The fix is slightly more general than that: this kind of command will now execute on all your applications whenever possible.

You can also now simply run:

ng lint src/app/app.component.ts

if you want to lint just a file.

To conclude this part about the linter, an option that existed before but disappeared in CLI 6.0 is back: --lint-fix. This option can be used with every schematic, and will automatically fix the lint issues in the newly generated files. You might be wondering why that would be useful: aren’t the files generated by the CLI already correct? They are indeed, but they use the default tslint.json. So if you have defined a different TSLint preference, for example double quotes instead of single quotes for strings, then by using this option the generated files will automatically follow your preferences.

ng generate component pony --lint-fix

Watch mode for libraries

As you hopefully know if you read our article about the release of Angular CLI 6.0.0, it is now possible to have multiple applications and libraries in your CLI project. But, as I noted in the article, a slightly annoying thing was that, when you made a change to the library source, you had to rebuild it manually if you wanted the rest of the project to see it, because there was no watch mode for ng build in a library.

That’s now no longer the case, and you can use ng build --watch for a library too, so the rest of your project will see the modifications without any manual steps anymore!

Ivy support

Angular 7 is still some weeks/months away, but the CLI is getting ready for the big novelty of this release: the new Ivy renderer (check out our previous article about Ivy).

You can give Ivy a try by generating a new application with:

ng new my-app --experimental-ivy

This will generate a new application with a few options activated for Ivy. It mainly adds in tsconfig.app.json:

"angularCompilerOptions": {
  "enableIvy": "ngtsc"
}

to activate Ivy. Be warned though, this is still very experimental!

That’s all for this small release, but the CLI team is already working on CLI 7.0, with some cool features incoming (an interactive prompt for the command, a better minifier, support of Angular 7…). Stay tuned!

All our materials (ebook, online training and training) are up-to-date with these changes if you want to learn more!


Angular Performances Part 1 - First load

We have just finished a new chapter of our ebook about performance, and we thought we could share it with you in a series of blog posts. It took us a long time, but we wanted to write something more complete than what you can usually find. There are a lot of tips to make Angular faster (whatever faster means for you; we’ll come back to this in a minute), but you usually don’t get the other side of the story: what the traps of these optimizations are, whether they are what you are looking for, and whether you should really use them.

This is the first part of this series, and this blog post is about the first load of an Angular application. In future posts, we’ll talk about how to make reloading faster, then about how to profile your running application and improve runtime performance. If you are the lucky owner of our ebook, you can already check the other parts by downloading the latest ebook release.

Warning: be careful with premature optimization. Always measure before and after. Beware of the benchmarks you find on the internets: it’s pretty easy to make them say what the authors want.

Let’s start!

Performances

Performance can mean a lot of things: speed, CPU usage (battery consumption), memory pressure… Not everything matters to everybody: you have different needs if you are programming a mobile website, an e-commerce platform, or a classic CRUD application.

Performance can also be split into different categories that, once more, won’t all matter to you: first load, reload, and runtime performance.

First load is when you open an application for the first time. Reload is when you come back to that application. Runtime performance is what happens when the application is running. Some of the following advice is very generic and could be applied to any framework. We wrote it because we think it’s worth knowing, and because when you talk about performance, the framework is sometimes the bottleneck, but really (really) often not.

First load

When you load a modern Web application in your browser, a few things happen. First, the index.html is loaded and parsed by the browser. Then the JS scripts and other assets referenced are fetched. When one of the assets is received, the browser parses it, and executes it if it is a JS file.

Assets sizes

So the first tip is very obvious: be careful with your assets sizes!

The assets loading phase depends on how many assets you want to load. A lot will be slow. Big ones will be slow. Especially if the network is not that good, which happens more often than you think: you might test your application on a fiber optic connection, but some of your actual users might be in the middle of nowhere, using slow 3G. Here is what you can do.

Bundle your application

When you write your Angular application, you have imports all over the place, and your code is split across hundreds of files. But you don’t want your users to load hundreds of files! So before shipping your application, you want to make a “bundle”: group all the JavaScript files into one file.

Webpack’s job is to take all your JavaScript files (and CSS, and HTML templates) and build bundles. It’s not an easy tool to master, but the Angular CLI does a pretty good job of hiding its complexity. If you don’t use the CLI, you can build your application with Webpack yourself, or pick another tool that may produce even better results (like Rollup, for example). But be warned that this requires quite a lot of expertise (and work) to not mess things up, just to save a few extra kilobytes. I would recommend staying with the CLI: the team working on it is doing a very good job of keeping up with the latest Angular, TypeScript and Webpack releases.

More than that, they built some tools to decrease the bundling size. For example, they wrote a plugin that goes through the generated JavaScript, and adds specific comments to help UglifyJS remove dead code.

Tree-shaking

Webpack (or whichever tool you use) starts from the entry point of your application (the main.ts file that the CLI generated for you, and that you probably never touched), resolves the whole import tree, and outputs the bundle. This is cool because the bundle will only contain the files from your codebase and the third-party libraries that have been imported. The rest is not embedded. So even if you have a dependency in your package.json that you don’t use anymore (so you don’t import it anymore), it will not end up in the bundle.

It’s even a bit smarter than that. If you have a models file exporting two classes, let’s say PonyModel and RaceModel, and the rest of the application only ever imports PonyModel, never RaceModel, then Webpack only puts PonyModel in the final bundle and drops RaceModel. This process is called tree-shaking. And every framework and library in the JavaScript ecosystem is fighting hard to be tree-shakable! In theory, it means that your final bundle contains only what is really needed! But in practice, Webpack (and the others) are a bit conservative and can’t figure everything out. For example, if you have a class Pony with two methods eat and run, but you only use run, the code of the eat method will still be in the final bundle. So it’s not perfect, but it does a good job.
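As a small sketch, using the class names from the example above:

```typescript
// models.ts: two exported classes. If the application only ever does
// `import { PonyModel } from './models';`, a tree-shaking bundler can
// drop RaceModel entirely from the final bundle...
export class PonyModel {
  constructor(public id: number, public color: string) {}

  run(): string {
    return `${this.color} pony ${this.id} runs`;
  }

  // ...but eat() is kept as long as PonyModel itself is used, even if
  // nobody ever calls it: method-level elimination is much harder for
  // the bundler to prove safe.
  eat(): string {
    return `${this.color} pony ${this.id} eats`;
  }
}

export class RaceModel {
  constructor(public id: number, public ponies: PonyModel[]) {}
}
```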

A few techniques can be used in Angular specifically to get better tree-shaking. First, don’t import modules that you don’t use. Sometimes you give a library offering a wonderful component a try, and you add the NgModule of this library to the imports of your NgModule. Then you stop using it, but maybe forget about the module import and don’t remove it… Bad news: this module and the third-party library will be in the final bundle (for now; maybe it will get better in the future). So only import and use what you really need.

Another trick is to use providedIn for your services. If you declare a service in the providers of your NgModule, it will always end up in the bundle whether you actually use it or not, simply because it’s imported and referenced in the module. Whereas if you don’t register it in the providers of your NgModule, but use providedIn: 'root' instead, then if you never use the service, it will not end up in the bundle.
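A tree-shakable service declaration then looks like this (PonyService is a made-up name):

```typescript
import { Injectable } from '@angular/core';

// No `providers: [PonyService]` entry in any NgModule: the service
// registers itself in the root injector, and can be dropped from the
// bundle if it is never injected anywhere.
@Injectable({
  providedIn: 'root'
})
export class PonyService {
  // ...
}
```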

Minification and dead code elimination

Once your bundle has been built, the code is usually minified: all variables, method names, class names… are renamed to one- or two-character names throughout the codebase. This is a bit scary and sounds like it could break things, but UglifyJS has been doing a great job for years now. UglifyJS will also eliminate the dead code that it can find. It does its best, and as I was saying above, the CLI team built a tool that prepares the code with special comments on unneeded code, so UglifyJS can remove it safely.

Other assets

While the sections above were about JS specifically, your application also contains other assets, like styles, images, fonts… You should have the same concerns about them, and do your best to keep them at a reasonable size: applying all kinds of crazy techniques to optimize your JS bundle size won’t have a big impact on loading time and bandwidth if you then load several MBs of images! As this is not really the scope of this post, I won’t dig into the topic, but let me point out a great online resource by Addy Osmani about image optimization: Essential Image Optimization.

Compression

All modern browsers accept a compressed version of an asset when they request it from the server. That means you can serve a compressed version to your users, and the browser will unzip it before parsing it. This is a must-do, as it will save you tons of bandwidth and loading time!

Every server on the market has an option to activate the compression of assets. Generally the first user to request an asset will pay the cost of the compression on the fly, and then the following ones will receive the compressed asset directly.

The most common compression algorithm used is GZIP, but some others like Brotli are also popular.
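To give an idea of the gains, here is a quick Node sketch (using the built-in zlib module; real ratios depend on the asset) showing how well repetitive text like JavaScript compresses:

```typescript
import { gzipSync } from 'zlib';

// Minified JS is full of repeated patterns, so it compresses very well.
// Simulate a repetitive "asset" and compare sizes.
const asset = 'console.log("pony");\n'.repeat(1000);
const compressed = gzipSync(Buffer.from(asset));

console.log(`original: ${asset.length} bytes`); // 21000 bytes
console.log(compressed.length < asset.length / 10); // true: >90% smaller
```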

Lazy-loading

Sometimes, despite doing your best to keep your JS bundle small, you end up with a big file because your app has grown to several dozen components using various third-party libraries. And not only will this big bundle increase the time needed to fetch the JavaScript, it will also increase the time needed to parse and execute it.

One common solution to this problem is lazy-loading: instead of having one big bundle of JavaScript, you split your application into several parts and tell Webpack to build several bundles.

The good news is Angular (its router, and its module system, in particular) makes this task relatively easy to achieve. The other good news is that the CLI knows how to read your router configuration to build several bundles automatically. You can read our chapter about the router if you want to learn more.

Lazy-loading can vastly improve the loading time, as you can make the first bundle really small, with only what’s needed to display the home page, and let Angular load the rest on demand when your user navigates to another part. You can also use prefetching strategies to tell Angular to start loading the other bundles when it’s idle.
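At the time of writing (Angular 6), a lazy-loaded route configuration looks like this (RacesModule, HomeComponent and the paths are hypothetical):

```typescript
import { Routes } from '@angular/router';

import { HomeComponent } from './home/home.component';

// The string syntax tells the CLI to build './races/races.module'
// as a separate bundle, fetched only when the user navigates to /races.
export const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'races', loadChildren: './races/races.module#RacesModule' },
];
```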

Note that lazy-loading adds complexity to your application (and a few traps with dependency injection), so I would advise to go this way only if it really makes sense.

Ahead of Time compilation

In development mode, when you open the application in your browser, it will receive the JavaScript code resulting from the TypeScript compilation, and the HTML templates of the components. These templates are then compiled by Angular to JavaScript directly in your browser.

This is not optimal in production, mainly for two reasons:

  • every user pays the cost of this template compilation on every reload;
  • the Angular compiler must be shipped to your users (and it’s big).

This process is called Just in Time compilation. But there is another type of compilation: Ahead of Time compilation. With this mode, you compile your templates at build time, and ship the resulting JavaScript with the rest of the application to your users. It means that the templates are already compiled when your users open the application, and that we don’t need to ship the Angular compiler anymore.

So the parsing and starting time of the application will be much better. And, on paper, not shipping the compiler should lead to smaller bundles and faster load times. But in fact, the generated JavaScript is generally far bigger than the uncompiled HTML templates, so bundles tend to be bigger after AoT compilation. The Angular team has been working hard on this, with big improvements in Angular 4 and Angular 6 (with its experimental Ivy project). If the bundles are still too big and slow down your loading time, consider lazy-loading as explained above.
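With the CLI, there is nothing special to configure: AoT compilation is enabled by default for production builds, and can be tried out in development too:

```shell
# production build: AoT (along with minification, etc.) on by default
ng build --prod

# development server with AoT, to catch template errors early
ng serve --aot
```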

Server side rendering

I’d like to start by saying that this technique is for 0.0001% of you. Server-side rendering (or universal rendering) is the technique that consists of pre-rendering the application on the server before serving it to the users. With it, when a user asks for /dashboard, she will receive a pre-rendered version of the dashboard, instead of receiving index.html and then letting the router do its job once Angular has finished starting.

It can lead to vast improvements in perceived startup time. Angular offers a package, @angular/universal, that lets you run the application not in a browser but on a server (usually a NodeJS instance). You can then pre-render the pages and serve them to your users. The page will display very fast, and then Angular will start and run as usual.

It’s also a big win if you want your web site to be crawlable by search engines which don’t execute JavaScript, since you can serve them pre-rendered pages, instead of a blank page.

It’s also a way to display previews of your website on social networks like Twitter or Facebook. These sites will try to screenshot the shared URL, but since they don’t execute JavaScript, they won’t see anything of your dynamically generated page, unless you serve them a page generated on the server. So if you want to be sure that the preview is perfect, like if you are running a news site, or an e-commerce site, you need to add server-side rendering.

The bad news is that it’s not as easy as adding the @angular/universal package. Your application needs to follow some best practices (no direct DOM manipulation, for example, as the server won’t have a real DOM to manipulate). Then you need to set up your server and think about the strategy you want to adopt. Do you want to pre-render all pages or just a few? Do you want to pre-render the whole page, with the data fetching and authorization checks it needs, or just some critical parts of the page? Do you want to pre-render at build time, or pre-render on demand and cache the results? Do you want to do this for all the possible profiles and languages or just some? All these questions depend on the type of application you are building, and the effort can vary greatly depending on your goal.

So, again, I would advise you to use server side rendering only if it is critical for your application, and not based on the hype…​


See you soon for part 2.


What's new in Angular CLI 6.1?

Angular CLI 6.1.0 is out (in fact we even have a 6.1.1 available)!

It is less feature-rich than the previous releases: most of the work in this release consists of refactorings and bug fixes.

If you want to upgrade to 6.1.1 without pain (or to any other version, by the way), I have created a GitHub project to help: angular-cli-diff. Choose the version you’re currently using (6.0.0 for example), and the target version (6.1.1 for example), and it gives you a diff of all files created by the CLI: angular-cli-diff/compare/6.0.0…6.1.1. You have no excuse for staying behind anymore!

Let’s see what we’ve got!

Internal refactoring

Even if that’s not super useful to you as a developer, the devkit project (upon which the CLI relies heavily internally) is now in the same repository as the angular-cli project.

It used to be slightly painful to open issues and contribute code, because it was hard to figure out which repository the issue/code belonged to.

The angular/devkit repository has been archived, and imported back into the angular/angular-cli repository, which is now the only source of truth.

ES2015 modules everywhere

If you check angular-cli-diff/compare/6.0.0…6.1.1, you’ll see that one of the changes is that "module": "es2015" is now used in all tsconfig.json files. It means that we now have the same behaviour when serving/building/testing the app.
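Concretely, the relevant fragment in the generated tsconfig.json files now looks like this (fragment only, the real files contain more options):

```json
{
  "compilerOptions": {
    "module": "es2015"
  }
}
```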

Vendor source map

A new option called vendorSourceMap has been introduced, allowing you to have source maps for vendor packages. You can use it with:

ng build --prod --source-map --vendor-source-map

This can be useful for debugging your production bundles and seeing what is really included, thanks to source-map-explorer.

For example, this is with sourceMap only:

Source maps

and the same source maps built with vendorSourceMap:

Vendor source maps

This is all for this small release, except the support of TypeScript 2.8 and 2.9 and the support of Angular 6.1 of course. You can check out what’s new in Angular 6.1 in our previous blog post.

All our materials (ebook, online training and training) are up-to-date with these changes if you want to learn more!


What's new in Angular 6.1?

Angular 6.1.0 is here!

Angular logo

keyvalue pipe

Angular 6.1 introduces a new pipe! It allows iterating over a Map or an object, and displaying the keys/values in your templates.

Note that it orders the keys:

  • first lexicographically if they are both strings
  • then by their value if they are both numbers
  • then by their boolean value if they are both booleans (false before true).

And if the keys have different types, they will be cast to strings and then compared.

@Component({
  selector: 'ns-ponies',
  template: `
    <ul>
      <!-- entry contains { key: number, value: PonyModel } -->
      <li *ngFor="let entry of ponies | keyvalue">
        {{ entry.key }} - {{ entry.value.name }}
      </li>
    </ul>`
})
export class PoniesComponent {
  ponies = new Map<number, PonyModel>();

  constructor() {
    this.ponies.set(103, { name: 'Rainbow Dash' });
    this.ponies.set(56, { name: 'Pinkie Pie' });
  }
}

If you have null or undefined keys, they will be displayed at the end.
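These default ordering rules can be sketched in plain TypeScript. Note that this is just an illustration of the rules listed above, not Angular’s actual source code:

```typescript
// Rough approximation of the keyvalue pipe's default key ordering.
function defaultCompare(a: unknown, b: unknown): number {
  if (a === b) {
    return 0;
  }
  // null and undefined keys are displayed at the end
  if (a == null) {
    return 1;
  }
  if (b == null) {
    return -1;
  }
  // both strings: lexicographic order
  if (typeof a === 'string' && typeof b === 'string') {
    return a < b ? -1 : 1;
  }
  // both numbers: numeric order
  if (typeof a === 'number' && typeof b === 'number') {
    return a - b;
  }
  // both booleans: false before true
  if (typeof a === 'boolean' && typeof b === 'boolean') {
    return a ? 1 : -1;
  }
  // different types: cast to strings and compare
  const stringA = String(a);
  const stringB = String(b);
  return stringA < stringB ? -1 : stringA === stringB ? 0 : 1;
}
```

With such an ordering, the Map of the example below (keys 103 and 56) is displayed with the key 56 first.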

It’s also possible to define your own comparator function:

@Component({
  selector: 'ns-ponies',
  template: `
    <ul>
      <!-- entry contains { key: PonyModel, value: number } -->
      <li *ngFor="let entry of poniesWithScore | keyvalue:ponyComparator">
        {{ entry.key.name }} - {{ entry.value }}
      </li>
    </ul>`
})
export class PoniesComponent {

  poniesWithScore = new Map<PonyModel, number>();

  constructor() {
    this.poniesWithScore.set({ name: 'Rainbow Dash' }, 430);
    this.poniesWithScore.set({ name: 'Pinkie Pie' }, 125);
  }

  /*
   * Defines a custom comparator to order the elements by the name of the PonyModel (the key)
   */
  ponyComparator(a: KeyValue<PonyModel, number>, b: KeyValue<PonyModel, number>) {
    if (a.key.name === b.key.name) {
      return 0;
    }
    return a.key.name < b.key.name ? -1 : 1;
  }
}

TypeScript 2.9 support

Angular 6.0 was stuck with TS 2.7, but Angular 6.1 catches up and adds support for TS 2.8 and 2.9.

You can check out what these new versions bring on the Microsoft blog.

Shadow DOM v1 support

As you may know, Angular offers an encapsulation option that allows you to scope CSS styles to their component, and their component only.

Until 6.1, Angular had three possible values for this encapsulation option:

  • Emulated, which is the default one
  • Native, which relies on Shadow DOM v0
  • None, which means you don’t want encapsulation

Angular 6.1 introduces a new option: ShadowDom, which relies on Shadow DOM v1, the latest version of the specification. Theoretically, it should replace the Native option (as the Shadow DOM v0 specification is now deprecated), but that would be a breaking change, so the team decided to introduce a brand new option.

If you’re into it, you can check out this awesome blog post listing the differences between Shadow DOM v0 and Shadow DOM v1. You can see the current support from the major browsers here. The support for Shadow DOM v1 will be better than for Shadow DOM v0 in the near future, as more browser vendors feel this is the right way to go.

Angular abstracts away all the nitty-gritty details here: you just have to switch one option to use Shadow DOM v1, and that’s pretty cool.

This new support also allows Angular Elements to be used with slot elements for basic native content projection.

Tree-shakeable services in core

You may remember that Angular 6.0 introduced tree-shakeable services, with the possibility to declare a service using @Injectable({ providedIn: 'root' }). The core services of the framework are starting to move to this new declaration, with the first two services: Title (which allows setting the title of the page) and Meta (which allows setting the metadata of the page).

It means that if you are not using them in your application, they will not end up in your final bundle, saving a few bytes of JavaScript to send to your users.
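As a reminder, you just inject these services where you need them. For example (SeoService is a hypothetical service name, just for illustration):

```typescript
import { Injectable } from '@angular/core';
import { Meta, Title } from '@angular/platform-browser';

@Injectable({ providedIn: 'root' })
export class SeoService {
  constructor(private title: Title, private meta: Meta) {}

  updatePage(pageTitle: string, description: string) {
    // sets the document title and the description meta tag
    this.title.setTitle(pageTitle);
    this.meta.updateTag({ name: 'description', content: description });
  }
}
```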

Router scrolling position restoration

The router received some love in this release with the addition of a few features. The first one is an option allowing you to restore the scroll position when you navigate back to a component.

You simply have to add the option to your RouterModule configuration:

imports: [
  RouterModule.forRoot(routes, {
    scrollPositionRestoration: 'enabled'
  })
]

Three different values can be passed to this option:

  • disabled, which does nothing (default).
  • top, which sets the scroll position to [0,0].
  • enabled, which sets the scroll position to the stored position.

The enabled option will be the default in the future. With this option, the router stores the scroll position when navigating forward, and restores it when navigating back. When navigating forward, the scroll position will be set to [0, 0], or to the anchor if one is provided.

It also adds an anchorScrolling option, to configure whether the router should scroll to the element when the URL has a fragment. It has two possible values:

  • disabled, which does nothing (default).
  • enabled, which scrolls to the element. This option will be the default in the future.

And there is also a scrollOffset option, if you want to add an offset to the scrolling. It accepts a position, or a function returning a position.
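Putting these scrolling options together, a configuration using all three could look like this (the offset values are just an example, e.g. to compensate for a fixed header):

```typescript
imports: [
  RouterModule.forRoot(routes, {
    scrollPositionRestoration: 'enabled',
    anchorScrolling: 'enabled',
    // offset applied when scrolling to an anchor, here 64px from the top
    scrollOffset: [0, 64]
  })
]
```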

The router now also emits a new event called Scroll that you can listen to.

On paper, this looks super handy: if you have a very long template in a component, when a user navigates back to it, she will end up on her last scrolling position.

I say “on paper”, because in reality this only works with static content! If you have dynamic content displayed in the template (let’s say a very long list that you fetch from the server), the router will attempt to scroll even before the content is inserted… So it won’t scroll to the correct position, because this position will not exist when the router tries to scroll to it.

If you are in a case like this, you’ll have to write tedious code to trigger the scroll yourself in the component, by using a new service called ViewportScroller (offered by the @angular/common package).

You could think that if the data are loaded via a resolver, the router would handle it correctly, because the data are loaded before the component is displayed, so it would make sense that the router would scroll to the right position in that case.

But sadly, currently, no… We opened an issue right away with this feedback (you can add a thumbs up if you agree), but it is currently not addressed in 6.1.0.

So if you have dynamic content, you’ll have to handle the scroll yourself, by writing tedious code looking like this, even if the data comes from a resolver:

import { AfterViewInit } from '@angular/core';
import { ViewportScroller } from '@angular/common';
import { ActivatedRoute, Router, Scroll } from '@angular/router';
import { filter } from 'rxjs/operators';

export class PendingRacesComponent implements AfterViewInit {
  scrollPosition: [number, number];
  races: Array<RaceModel>;

  constructor(route: ActivatedRoute, private router: Router, private viewportScroller: ViewportScroller) {
    this.races = route.snapshot.data['races'];
    // stores the position carried by the router's Scroll events
    this.router.events.pipe(
      filter((e): e is Scroll => e instanceof Scroll)
    ).subscribe(e => {
      this.scrollPosition = e.position ? e.position : [0, 0];
    });
  }

  ngAfterViewInit() {
    // scrolls once the dynamic content has been rendered
    this.viewportScroller.scrollToPosition(this.scrollPosition);
  }
}

And you’ll have to do the same in every component where you want the scroll position to be restored…

Router — URI error handler

You may have noticed that if a user tries to access a badly formed URL in your Angular application, the router will redirect to the root of the application.

Angular 6.1 introduces a new function called malformedUriErrorHandler that you can provide to redirect your user to a different page.

imports: [
  RouterModule.forRoot(routes, {
    malformedUriErrorHandler:
      // redirects the user to `/invalid-uri`
      (error: URIError, urlSerializer: UrlSerializer, url: string) => urlSerializer.parse('/invalid-uri')
  })
]

As you can see, the handler receives the badly formed URL and the error, so you can even display a proper error to your users if you want.

Router — URL update strategy

In the same vein, if the router navigates to a component, and the navigation fails, the URL is currently not updated.

A new option urlUpdateStrategy has been introduced, and can receive either deferred or eager. deferred is the default and only updates the URL if the navigation succeeds, as is currently the case. eager updates the URL first and then navigates to the component, so the URL is updated even if the navigation fails.
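For example, to opt in to the eager behavior, you would add the option to your router configuration:

```typescript
imports: [
  RouterModule.forRoot(routes, {
    urlUpdateStrategy: 'eager'
  })
]
```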

Angular CLI 6.1

The CLI has also been released in 6.1.0: check out our other article about what’s new!

All our materials (ebook, online training and training) are up-to-date with these changes if you want to learn more!


ngx-valdemort – super simple, consistent validation error messages for Angular

ngx-valdemort logo

We recently introduced ngx-speculoos, which reduces boilerplate in Angular unit tests. Check it out if you missed it.

Another place where a lot of boilerplate is needed is forms, and especially in validation error messages. Here’s an example of such boilerplate:

<div class="invalid-feedback" *ngIf="form.get('email').invalid && (f.submitted || form.get('email').touched)">
  <div *ngIf="form.get('email').hasError('required')">
    The email is required
  </div>
  <div *ngIf="form.get('email').hasError('email')">
    The email must be a valid email address
  </div>
</div>

That is just for two error messages, on one field of one form.

When you do that for all fields of all your forms, you end up with a lot of duplication of the same logic, and a high risk of misspelling control names.

Developers also end up copying and pasting these snippets, and tend to forget to rename the field name or error types in one or two places, introducing bugs.

Adding a new validation rule on a field means that a new error message must also be added.

Wouldn’t it be nice to be able to replace that mess with something like this?

<val-errors controlName="email" label="The email"></val-errors>

That’s what ngx-valdemort allows. And much more. You can override a default message by a custom one when needed. You can choose if you want one or all error messages. You can configure when to display error messages, in a central place, to ensure consistency in all your forms.
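For example, overriding the default message for one error type could look like this (check the project page for the exact API):

```html
<val-errors controlName="email" label="The email">
  <ng-template valError="email">This doesn't look like a valid email address</ng-template>
</val-errors>
```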

Learn more and see it in action on our project page.

It’s free and open-source. Tell us if you like it. Also tell us if you don’t: we could improve it. The project is on Github.


Announcing ngx-speculoos – simpler, cleaner Angular unit tests

ngx-speculoos logo

Writing Angular unit tests for components quickly leads to quite a lot of boilerplate, and if you’re not careful, code duplication and not type-safe code, too. Especially when dealing with forms.

Out of the frustration from this non-ideal code, we decided to write a small library to help with these issues, and to rely on the page object pattern when it makes sense.

Let me thus introduce ngx-speculoos.

It’s free, as in beer, and as in speech.

It uses the standard Angular TestBed and ComponentFixture abstractions, so you should be up to speed in a few minutes.

So, if you’re like us, and would like your tests to be cleaner, more readable, and easier to maintain, please give it a try and tell us what you think about it.

Since a code snippet is worth a thousand words, here’s how you would test that selecting a country in a select box makes an error message disappear, and another cities select box appear, containing expected option values, labels and selection. Note the absence of calls to detectChanges or dispatchEvent. Note the non-duplication of CSS selectors thanks to the page object pattern. And note the (optional) usage of some custom matchers.

    expect(tester.countryErrors).toContainText('The country is mandatory'); 
    expect(tester.city).toBeNull();

    tester.country.selectValue('FR');

    expect(tester.countryErrors).toBeNull();
    expect(tester.city.optionValues).toEqual(['PARIS', 'LYON']);
    expect(tester.city.optionLabels).toEqual(['Paris', 'Lyon']);
    expect(tester.city).toHaveSelectedLabel('Paris');

For more information, see our README and API documentation.

The project is on Github, so don’t hesitate to star the project if you like it, and to request features, improvements or bug fixes, or even to contribute.

What’s that name?

speculoos cookies

Well, ngx stands for Angular extension.

Oh, you meant the other part of the name?

A speculoos is a delicious cookie from Belgium, where one quarter of the Ninja Squad team (i.e. me) comes from.

And speculoos starts with spec, which is how test files are usually named in an Angular project. That sounded like a cool name for this library.


Angular Elements

Sometimes you don’t want a full Angular app. Sometimes you just want to build a widget. Or maybe you have several teams, some using React or Vue, and others Angular. Right now it’s not really easy to integrate just one Angular component into an app that is not an Angular app.

Angular Labs

But some people fight for a better Web and think that a new standard can save us all: Web Components. Web components are actually 4 different specifications:

  • HTML templates (the template tag)
  • Shadow DOM (view encapsulation)
  • HTML Imports (more or less a dead specification)
  • and the one we are interested in: Custom Elements

Note that it is already possible to use a Web Component in an Angular app, and it works seamlessly. But we had no way of exposing our Angular Components as standard Custom Elements, to use them outside of an Angular app.

Custom Elements give us the ability to declare an element which is not a standard HTML element, but a… custom one. Like admin-user, or responsive-image, or funky-carousel. Note that the specification requires custom element names to contain a dash, to distinguish them from standard HTML elements.

I took a deep dive into the official specification to learn a bit more about the details of Custom Elements. You can of course build your own Custom Element with vanilla JavaScript, but there is a bit of “plumbing” to do (you have to write an ES6 class with a constructor that follows some rules, then observe the attributes that can change, then implement the lifecycle methods defined in the specification).

That is why Angular 6 introduces @angular/elements! Angular Elements are classic components packaged as Custom Elements.

When you package an Angular Component as an Angular Element, you can then use it like a standard Custom Element. It will bootstrap itself, and create an NgElement (custom element) that hosts the component. It also builds a bridge between the standard DOM APIs and the underlying Angular Component, by doing the plumbing between the component’s inputs and the custom element’s properties and attributes, and between the component’s outputs and the custom element’s events.

To use it, build a component as usual:

@Component({
  selector: 'ns-pony',
  template: `<p (click)="onClick()">{{ ponyName }}</p>`
})
export class PonyComponent {
  @Input() ponyName;
  @Output() selected = new EventEmitter<boolean>();

  onClick() {
    this.selected.emit(true);
  }
}

Add it to a module (here PonyModule) and then you can register it in another (non-Angular) application to use it as a Custom Element:

import { createCustomElement } from '@angular/elements';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { PonyComponent, PonyModule } from './pony.module';

platformBrowserDynamic().bootstrapModule(PonyModule)
  .then(({ injector }) => {
    // get the ES6 class
    const PonyElement = createCustomElement(PonyComponent, { injector });
    // use it to register the custom element
    window.customElements.define('ns-pony', PonyElement);
  });

Once that’s done, you can use the ns-pony element as if it were a standard element:

<ns-pony pony-name="Rainbow Dash"></ns-pony>

Note that the attribute is in kebab-case, whereas the property is in camelCase.
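This dash-case/camelCase mapping can be illustrated with a small helper. Note that attributeToProperty is a hypothetical function written for this example, not an API of @angular/elements:

```typescript
// Maps a dash-case attribute name (as written in HTML) to the camelCase
// property name it corresponds to on the element.
function attributeToProperty(attributeName: string): string {
  return attributeName.replace(/-([a-z])/g, (_, letter: string) => letter.toUpperCase());
}

// attributeToProperty('pony-name') returns 'ponyName'
```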

The element can be updated with your favorite framework supporting Custom Elements (like VueJS or Preact, but not (yet) React, see Custom Elements Everywhere). Or you can of course use vanilla JS:

const ponyComponent = document.querySelector('ns-pony');

// update the pony's name
setTimeout(() => ponyComponent.ponyName = 'Pinkie Pie', 3000);

// listen to the custom event
ponyComponent.addEventListener('selected', event => console.log('selected!', event));

You can even create new components and insert them, they will be automatically upgraded to custom elements (and the inner PonyComponent will be instantiated)!

const PonyComponent = customElements.get('ns-pony');
const otherPony = new PonyComponent();
otherPony.ponyName = 'Applejack';
document.body.appendChild(otherPony);

The API is still very young (it was in Angular Labs for the past 6 months), so I would not recommend using it in production yet. But this time will come!

Check out our ebook, online training (Pro Pack) and training if you want to learn more!

