Web Application Performance with Nik Molnar | Episode 10

On this episode, Nik Molnar and I discuss web performance. This isn't just another YSlow talk; things have evolved, and I learned quite a bit from this conversation, so hopefully you will as well.

To subscribe to the podcast, please use the links below:

Click Here to Subscribe via iTunes
Click Here to Subscribe via RSS (non-iTunes feed)

Show Notes:

Nik Molnar’s Blog

Great site for performance tools mentioned:

http://perf-tooling.today/

Nik’s popular open source project for .NET web applications Glimpse.

What CSS triggers the browser to paint:
http://csstriggers.com/

Full Transcript

Craig McKeachie: [0:00] On this episode, Nik Molnar and I discuss Web performance.

[0:03] [music]

Craig: [0:14] Welcome to the Front-End Developer Cast, the podcast that helps developers be awesome at building ambitious Web applications, whether you’re a JavaScript ninja or you’re just getting started. I’m your host, Craig McKeachie.

[0:27] Hi, everyone. My interview today is with Web performance expert, Nik Molnar. Let's get right to it. This isn't just another YSlow talk, though. Things have evolved in this space, and I learned quite a bit from this conversation. Hopefully, you will as well. Here it is.

[0:44] Hey, today, I’m lucky to have Nik Molnar with me. Hi, Nik. How are you doing?

Nik Molnar: [0:47] I'm doing all right, Craig. How are you?

Craig: [0:48] Good. Thanks for coming on the show. Today, we’re going to talk about Web performance, but why don’t you tell everybody a little bit about yourself first?

Nik: [0:55] You covered my name already, so that’s good.

Craig: [laughs] [0:57]

Nik: [0:57] We got the most fun thing out of the way.

Craig: [0:58] Did I say it right?

Nik: [0:59] You did. “Nik Molnar,” you got it. I live in New York City. I’ve been there for about eight years, now. I’ve been doing Web development for 18 years.

[1:09] It’s really all I’ve ever done with my life. I work for Redgate Software, working on Web performance tooling. I work on an open source tool, called Glimpse. I travel around the conference circuit in America and abroad espousing the virtues of Web performance.

Craig: [1:23] Awesome. That’s a great topic. When you give your talks, how do you like to talk about Web performance? Do you have a way you can break it down? It’s a big topic.

Nik: [1:32] I do. It's a big topic. I break it down in order of descending granularity. I tell the people in my audience, and the people that I work with when I consult, that you really want to focus on the biggest problems first. All too often, a developer might be comfortable in JavaScript or PHP, so that's the thing that they dive into, when it's not necessarily the thing that their users are having frustration with.

[1:56] Descending granularity means figuring out where the pain points are for the app and the user, and focusing on those first. As a blanket statement, moving things across the network is usually the biggest problem for performance, so that's usually where I start. Then I break it down into three other areas after that.

[2:15] With Web applications, I like to think of the way we interact with the Web in two phases. The first time you type the URL into the location bar and you hit enter, you’re “installing an application.”

[2:29] This is the installation phase. We think of that as the on-load thing. After that, the user is using the application. They're not necessarily going back to the server to get any more data or resources. Performance for the install experience and the usage experience, you handle those two things differently. That's how I break it up into those two areas, with two subsets underneath each.

Craig: [2:53] I like that. Let's talk a little bit more about the network area. I read "High Performance Web Sites" by Steve Souders, I believe it is, years ago, and used YSlow and stuff like that. I think I'm falling a little out of touch with that sort of thing. How do you recommend clients take a look at network-type issues now?

Nik: [3:11] Steve's two books are still seminal. He recognized this was the biggest area of challenge for users, so that's what he focused on. All his rules really still apply, the standard stuff, I mean: combine files, reduce HTTP requests, et cetera. Nowadays, with the tooling that we have, we're really starting to think about even more low-level stuff, for example TCP slow-start, and how you optimize the content you send down to the user, the size of it, and how many packets it's taking up, because TCP doesn't run at full bandwidth to begin with.

Craig: [3:46] OK. So what does TCP slow-start mean? You threw me a little on that one. That's a new one for me.

Nik: [3:50] When the user agent, the browser, makes the connection to the server and the server is going to start sending content down, it has no clue what the connection conditions are between the server and the client. It basically assumes that it's in a really bad state, and it will send out some low number of packets, maybe it's one, maybe it's ten, but a low number of packets.

[4:10] It will send those out. Let's just say it's one, to make the example easier on the air. It will send out that one packet and then wait. Even if it has more data ready to send, it doesn't know if that packet made it to the client. It waits until the client receives it and sends back an acknowledgment that says, "OK. Hey, that worked out well. I'll send out two."

[4:28] Then it waits until it gets an acknowledgment, and so on and so forth, to four, to eight, and up. You can think of the bandwidth as being increased over time, so it's really important to get the above-the-fold content, or as much of it as you can, into that first packet, because otherwise you have to wait an additional round trip, and all of the latency on that round trip, to start getting more data.
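
To make that round-trip math concrete, here is a rough sketch of how the congestion window grows. It is not tied to any real TCP implementation; the packet size and the initial window of one packet are just the simplified numbers from Nik's example.

    // Rough illustration of TCP slow-start: the congestion window (in packets)
    // starts small and roughly doubles after each acknowledged round trip.
    // An initial window of 1 matches the simplified example above; real stacks
    // commonly start larger (for example, 10 segments).
    function roundTripsToDeliver(payloadBytes, packetBytes = 1460, initialWindow = 1) {
      let remaining = Math.ceil(payloadBytes / packetBytes); // packets left to send
      let window = initialWindow;                            // packets allowed this round
      let roundTrips = 0;
      while (remaining > 0) {
        remaining -= window;   // send a full window's worth
        window *= 2;           // window grows once those packets are acknowledged
        roundTrips += 1;
      }
      return roundTrips;
    }

    console.log(roundTripsToDeliver(14 * 1024));  // a small, above-the-fold payload
    console.log(roundTripsToDeliver(300 * 1024)); // a heavier page needs several more round trips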

Craig: [4:51] Interesting. What are some tactics? I think you started to get into it there. I've even seen some new plug-ins where people are taking the CSS that might be above the fold and pulling it, just that part of the CSS, into the page and the header, so that the rendering experience is good. What else do you see? Those kinds of things?

Nik: [5:09] Those are the kinds of things that you see the most. I see people who aren't going to be able to get the full experience in, because they have too much data, flush the HTTP response early. That does a lot of additional things that are really nice, but it also warms up TCP slow-start.

[5:24] How many websites do you go to where the header of the website is basically the same from page to page? While the server is rendering the rest of the HTML or doing whatever work it needs to, it can send that stuff down and warm up the connection to get to a fatter bandwidth for when the rest of the content is ready. You can send that all down.
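
As a loose sketch of that early-flush idea in a Node HTTP handler: the renderHead and renderBody helpers below are hypothetical stand-ins for a real template engine, and where you split the page is up to you.

    const http = require('http');

    http.createServer(async (req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html' });

      // Send the stable top of the page immediately. The browser can start
      // fetching the CSS and JavaScript referenced there while the server does
      // its slow work, and the TCP connection warms up in the meantime.
      res.write(renderHead());

      const body = await renderBody(req); // the slow part: database calls, templating
      res.write(body);
      res.end('</body></html>');
    }).listen(3000);

    // Hypothetical helpers standing in for a real rendering pipeline.
    function renderHead() {
      return '<!doctype html><html><head><link rel="stylesheet" href="/site.css"></head><body>';
    }
    async function renderBody(req) {
      return '<main>...page content...</main>';
    }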

[5:41] People do stuff like that. I mean, with UnCSS, they have these grunt and gulp plug-ins that will analyze your page and tell you what CSS you should inline. It's funny, because we learn these rules of thumb in general, in all of software development: this is the way things should be done, normalize the data in your database. But then, when you get to the extreme of performance, no, you throw those rules out the window. Nobody is serving up normalized data. They denormalize it so it can be accessed and served quickly. I feel like this is one of those things.

Craig: [6:08] One of those things, yes. What are some plug-ins? You said UnCSS, is that what it's called?

Nik: [6:11] Yes, there is one called UnCSS that is popular.

Craig: [6:15] It will analyze your CSS and pull just the portion that’s needed for the above-the-fold content into the header of the page?

Nik: [6:22] Yeah. None of them work perfectly.

[6:25] You can look at the results of it and see what it's telling you to do, but you have to use some common sense. Maybe you have some JavaScript that's changing something that it doesn't know about. If you go to perf-tooling.today, they have a tools section. There are a ton of tools, but there are probably four or five things similar to UnCSS that go through and identify these kinds of styles that are important.
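
For reference, a gulp task wired up with the gulp-uncss plugin might look roughly like the sketch below. The file paths and task name are made up, and, as Nik says, the output still needs a common-sense review before you inline anything.

    // Sketch of a gulp task that uses gulp-uncss to drop selectors the listed
    // pages never use. Check the plugin's own documentation for current options.
    const gulp = require('gulp');
    const uncss = require('gulp-uncss');

    gulp.task('css:trim', function () {
      return gulp
        .src('src/css/site.css')
        .pipe(uncss({ html: ['src/index.html', 'src/products/*.html'] }))
        .pipe(gulp.dest('dist/css'));
    });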

Craig: [6:47] What’s that your URL again? I’ll put it in the show notes.

Nik: [6:48] Perf-tooling.today. One of these new TLDs.

Craig: [6:53] OK. Let’s move on from the network stack then. What else do you see performance props happening in a Web application? What is our next lowest hanging fruit here?

Nik: [7:04] If you are doing a public-facing app, there's probably some server-side rendering of HTML happening. Because the browser doesn't really know what to do until it receives the HTML, that becomes the bottleneck. When you hit the URL, the browser is installing that application for you. It needs a manifest of all of the assets: the JavaScript files, the CSS, the images, and so on. That manifest we call HTML. It's just a bunch of instructions saying, "here are all of the other things that you need to do."

[7:34] If we are waiting on the server to generate that, that is a problem. Typically, the reason we wait on a server is because the CPU is busy, and the biggest offenders there are out-of-process calls: going to the database, going to third-party services to gather that data. Those things are slow. What we want to do is get that data only when we have to, and then cache it locally, in memory, as close to the server as possible, so we can serve it quickly.
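
A minimal sketch of that "fetch only when you have to, then keep it in memory" pattern, with an arbitrary time-to-live; the cache key, the TTL, and the db.query call in the usage comment are all hypothetical.

    // Tiny in-memory cache so repeated requests for the same data don't go
    // out of process (database, third-party API) on every page view.
    const cache = new Map();

    async function getCached(key, ttlMs, fetchFn) {
      const hit = cache.get(key);
      if (hit && Date.now() - hit.storedAt < ttlMs) {
        return hit.value;                    // served from memory, no slow call
      }
      const value = await fetchFn();         // the expensive out-of-process call
      cache.set(key, { value, storedAt: Date.now() });
      return value;
    }

    // Example usage: cache a catalog query for 60 seconds.
    // getCached('catalog', 60000, () => db.query('SELECT * FROM products'));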

Craig: [7:59] That’s great. I really like that analogy of the HTML as a manifest, though. It is, when you think about it all, whatever the script tags you have in there and the page comes down. Then, it realizes, here’s what actually I need in order to do my job.

Nik: [8:10] Yeah. It’s a honey-do list. It’s what my wife is going to hand me, when I get home from this conference.

Craig: [8:13] Right.

[8:13] [laughter]

Nik: [8:15] Go do all the other things.

Craig: [8:15] It’s probably the same case for me. All right, so you say, “out of process calls” are usually what? Is the bottleneck on the server? Is that the wording you’re using?

Nik: [8:25] You know, if we are just doing CPU-bound work, it can be expensive and bad, but most of the apps that we build on the Web aren't really doing that. They're kind of CRUD apps for showing some data. That's easy. It's getting the data, going out of process to get it, that's hard.

Craig: [8:38] Tactics people commonly use are to throw a caching layer in there, so maybe they are not going all the way back to the database. What else do you see? That may be the big one.

Nik: [8:47] Caching is definitely the big one. The other thing that I see is maybe they'll write it to the file system, so maybe that's closer, and then they will read that file. Some config file, CSV file, or batch file that you got from some customer or something. You'll read that into memory, but you'll thrash your memory space, and a lot of Web servers don't like their memory space being thrashed. They'll reset, you'll lose all your cache, and you have to start over. Streaming files in and reading them that way, using the streaming APIs, is much better than reading it all into memory and then throwing it away when you are done with it. [indecipherable 9:22] plus memory thrashing.
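
In Node terms, the difference Nik is describing looks roughly like this; the file name is made up and the per-line processing is left empty.

    const fs = require('fs');
    const readline = require('readline');

    // Reading the whole file allocates one big string up front and churns the
    // heap when the file is large:
    const everything = fs.readFileSync('customer-data.csv', 'utf8');

    // Streaming it line by line keeps memory flat, because only a small chunk
    // is resident at any one time:
    const rl = readline.createInterface({
      input: fs.createReadStream('customer-data.csv'),
    });
    rl.on('line', (line) => {
      // process one record at a time instead of holding the whole file
    });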

Craig: [9:21] Interesting. You work on a back-end tool, though. You've worked in the ASP.NET space for years, in MVC, and there's a tool called Glimpse. Do you want to talk about that a little bit? That helps with these back-end issues, I believe.

Nik: [9:35] Yeah, exactly. You install Glimpse on an ASP.NET website and it keeps track of these out-of-process calls that are being made, as well as various aspects of the framework. Most ASP.NET developers are using the MVC framework, where there's a model, a view, and a controller. We instrument how long it takes for the controller to execute, how long it takes the view to execute, how long it takes for every database query to execute. Then, when the page renders in the browser, we inject a little widget, with JavaScript, right into the bottom of the page.

[10:05] You can see, for the request that you just made, that link that you just clicked on, the three database queries it ran, how long it took to render the view, et cetera, et cetera. You can start to compare and contrast pages and see which ones are fast and slow and where that bottleneck is coming from, then focus on that method and improve the code there.

Craig: [10:22] OK. How would someone hook that up in an ASP.NET application? How do they get it installed and set up?

Nik: [10:30] In .NET, we have a package management system very similar to npm that we call NuGet. All you have to do is go into your project and NuGet install Glimpse.Mvc5, or whatever version of MVC you are using, four or three, and that's it. It will make a change to your configuration file for you, it will bring down the assemblies that are necessary, and when you deploy, you're running. It's that simple.

Craig: [10:51] Is this an open source project? I know you work at Redgate. Is it an open source project?

Nik: [10:57] Yup. Redgate sponsors Anthony and me (Anthony is my partner) to work on Glimpse full-time, and it is open source, available at github.com/glimpse, which is where the organization lives. You can find all this information on our main website, including videos of how it works and how to use it, at getglimpse.com.

Craig: [11:15] Cool, cool. What’s our next level of things we should look at, when we’re talking about Web performance here?

Nik: [11:21] Earlier we talked about the manifest file and the installation experience of the website. Once it's installed, the end user needs to use the website. Now we're no longer talking about what the CPU on the server is doing; we're thinking about what the CPU and the GPU on the client are doing. We think about JavaScript. We think about CSS, and how efficient we're being with those things.

[11:43] The truly interesting thing is that the same kind of tooling you would use on the server, a CPU profiler like the one my company makes, is basically the same thing as the CPU profiler that's built into all the browser developer tools. You open up the dev tools, go to the profiler tab, run a CPU profile, and it will keep track of how taxed the CPU is at any given time, and then show you what the call stack was when the CPU was taxed.

[12:07] You can see what methods are taking a lot of time and which ones are running rather quickly, and prioritize and fix the ones that are slowest first, obviously.
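
Alongside the profiler tab, the standard timing APIs give a quick read on a suspect function while you narrow things down; renderProductGrid here is just a placeholder for whatever method the profile points at.

    function renderProductGrid() {
      // placeholder for the slow code under investigation
      for (let i = 0; i < 1e6; i += 1) { /* busy work */ }
    }

    console.time('renderProductGrid');
    renderProductGrid();
    console.timeEnd('renderProductGrid'); // logs the elapsed milliseconds

    // Or, with the higher-resolution Performance API:
    const started = performance.now();
    renderProductGrid();
    console.log('renderProductGrid took ' + (performance.now() - started) + ' ms');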

Craig: [12:14] OK. There’s snapshots in there? Usually, I have to get used to that.

Nik: [12:18] Yeah. Snapshots are for memory profiling.

Craig: [12:21] I’ve got you.

Nik: [12:22] Memory profiling and CPU profiling are generally two different concerns. You can have memory problems and be fast, and you can have memory problems and be slow, so it really depends on what your app is doing. I tend to work on a lot of business-y apps or e-commerce, things like that, where we don't really have long-lived pages, so we don't run into very many memory problems.

[12:43] If you’re working on something like Gmail or a game or something maybe like the Slack client, that you expect to be in the browser for a long time, memory management becomes much more important.

Craig: [12:53] I get what you’re saying. In my area of focus with these jobs with NEC frameworks, that’s why I’m always thinking about the memory profiling because it’s very easy for that single page to live for hours or days on end, and then, you might want to take snapshots of your memory.

[13:08] But you’re saying it in the traditional server-side Web application, which I think the majority of people are still building here, is that the CPU rendering is really the thing you want to look at next.

Nik: [13:18] Yeah, that’s going to be what’s really affecting the speed, how snappy the application feels to the user. They might have sufficient memory. I’m not saying that it doesn’t matter. Because what happens with memory is that you get memory pressure, and then the garbage collector starts to run, and that will cause CPU problems.

[13:35] You don’t want that to happen, but you might be in a state, where the garbage collector is not running and so you need to look at your own code to see what’s slow.

Craig: [13:42] What are some common things that you see, or refactorings, things that people might be able to improve on when they look at that CPU profile? It seems like a pretty generic statement, "Well, you've got problems with your JavaScript code." What do you see people doing to fix those issues? Is that a tough one, because it's all kinds of things all over the board?

Nik: [14:03] There are certain things that are just bad practice in general that you should avoid. Managing your scope appropriately, knowing when to use the var keyword and when not to, making sure that you're only closing over the data that you need. Because of the way that scoping works in JavaScript, when you create a closure you are adding another level of scoping, and that can slow down accessing that memory. That's obviously bad.

[14:30] Just being diligent about those kinds of things. Then there are things you probably should never do, and in JavaScript that's the "with" keyword. It's just syntactic sugar; it makes it easier to type out and change a bunch of properties on an object. I can do a little more typing instead and not allocate a new scope for the variables to be stored in. The "with" keyword, in my opinion, should basically never be used. Just throw it out.
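
A small sketch of those two habits; the settings object and the payload shape are invented purely for illustration.

    // 1. Avoid `with`: it pushes an extra object onto the scope chain, so every
    //    identifier lookup inside the block is slower (and it is disallowed in
    //    strict mode anyway).
    const settings = { width: 100, height: 50, color: 'red' };
    // Instead of: with (settings) { width = 200; height = 80; }
    settings.width = 200;
    settings.height = 80;

    // 2. Close over only the data you need. This version keeps the entire
    //    payload alive for as long as the returned function exists...
    function makeHandlerWasteful(payload) {
      return () => console.log(payload.summary.title);
    }
    // ...while this one captures just the small piece it actually uses.
    function makeHandlerLean(payload) {
      const title = payload.summary.title;
      return () => console.log(title);
    }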

Craig: [14:53] Right. Really, we're talking about, hey, look at JavaScript performance in general. Just pay attention to good JavaScript coding practices and a lot of these issues will go out the window. Look at what you're doing versus what people say are good JavaScript coding practices.

Nik: [15:06] It will be that, or it will be bust out your algorithms textbook, or go watch the MIT course that's free online, and understand the algorithms that you're using. See if there's a way that you can do what you're trying to do more quickly. Maybe you're looping over something and you're getting n log n when you could do log n, or whatever it would be to make that algorithm faster, but that's super specific to your data structures and to your application.
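
For example, with made-up data, turning a nested scan into a set lookup changes the shape of the work entirely.

    // O(n * m): every order scans the whole list of VIP customer ids.
    function flagVipOrdersSlow(orders, vipIds) {
      return orders.filter((order) => vipIds.indexOf(order.customerId) !== -1);
    }

    // Roughly O(n + m): build a Set once, then each membership check is
    // effectively constant time.
    function flagVipOrdersFast(orders, vipIds) {
      const vips = new Set(vipIds);
      return orders.filter((order) => vips.has(order.customerId));
    }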

[15:35] The other thing that people usually have problems with is when they start reading from or writing to the DOM. That's really slow. Actually, it's funny: if you look at recommendations, in general, on how to write faster JavaScript, they'll tell you to avoid the DOM, which I think is pretty stupid.

[15:51] The reason I write JavaScript 90 percent of the time is to get input from and show output to the user, and that's all with the DOM. I think that's oversimplified advice. The reality is we need to understand how reading from and writing to the DOM affects performance, and the ripple effect that causes in all the different browser sub-engines, like the layout engine and the rendering engine, and what changing the size of a div on screen can do to the whole kit and caboodle.
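
One concrete version of that ripple effect is layout thrashing: interleaving DOM reads and writes so the browser is forced to recalculate layout inside the loop. The items argument below is assumed to be an array of elements.

    // Forces a layout recalculation on every iteration, because each read of
    // offsetWidth has to reflect the write from the previous iteration.
    function resizeThrashing(items) {
      items.forEach((el) => {
        const width = el.parentElement.offsetWidth; // read (forces layout)
        el.style.width = (width / 2) + 'px';        // write (invalidates layout)
      });
    }

    // Batch all the reads first, then all the writes, so layout runs once.
    function resizeBatched(items) {
      const widths = items.map((el) => el.parentElement.offsetWidth); // reads
      items.forEach((el, i) => {
        el.style.width = (widths[i] / 2) + 'px';                      // writes
      });
    }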

Craig: [16:21] Can you talk a little bit more about that? I hear about it more and more. I think when I first heard about ReactJS, that was the first time I heard people talking about how many frames per second we could render. It was a new world to me at that time, but you hear about it more and more.

[16:38] I think the famo.us framework is doing some revolutionary things in that space. I think a lot of developers need to understand that better: how to read that stuff, how to look at it, how to make sense of it. Can you give them any advice there?

Nik: [16:51] I’m on the same boat as you. I never really played video games growing up. In college, my roommate would be like, “Oh. I have the new Max Payne and I’m getting blah blah frames per second.” I don’t get that. Duck Hunt didn’t have frames a second. I didn’t worry about it.

Craig: [laughs] [17:03]

Nik: [17:04] The tooling in the browsers is now exposing this data to us, so it's the first time that we can really think about it. Basically, what it boils down to is the hardware and our screens. My laptop, your laptop, our phones, all of these things typically run at 60 hertz.

[17:21] All of the pixels on that screen are refreshed 60 times a second. A second is a thousand milliseconds; we do the division, and that comes out to about 16.6 milliseconds in between screen refreshes. If we're changing the screen, animating something in JavaScript or CSS, resizing something, or whatever it is that we're doing, even if we're just replacing text or it's only an animation, we need to do that work, and all the ripple effects that I'm talking about, within that 16-ish millisecond budget, otherwise the browser will drop a frame.

[18:03] It will say, "You're not ready for me to paint yet, so I'm going to wait." It can't just pick up whenever your work is done. This is v-synced hardware stuff. It's going to run at that tempo of 16 milliseconds over and over again.

[18:18] When we hear about dropped frames, or "my frame rate is low," it's because our work is taking too long and isn't finished when the browser is ready to paint.

Craig: [18:24] What happens? The browser just chooses not to repaint, and the user gets an unresponsive experience on their phone?

Nik: [18:30] Exactly. We should probably be using the requestAnimationFrame API, which is an API that takes in a callback and essentially tells you, the Web developer, "I just finished painting a frame. Go do your work." We use that API to get the most budget possible, as much of that 16 milliseconds as possible.

[18:51] We could just wait around for a user to click on a button, but none of my users click on buttons at 16-millisecond intervals. If I click right in the middle of a frame, now I only have 8 milliseconds; I'm losing part of my budget. With requestAnimationFrame, let's say it's giving us the full 16 milliseconds, but my work takes 18 milliseconds.

[19:11] We come back around at the end of the first 16 milliseconds, and I still have two milliseconds' worth of work left to do. The browser says, "OK, I'll wait." My two milliseconds go by, I finish, and then we have 14 milliseconds where we're waiting with nothing happening. Our work is done, but the browser has to cycle back around to be able to do that paint. That doesn't sound too bad when you are talking about maybe 14 milliseconds, but if you're talking about dropping two or three frames, users can start to notice these things, and it becomes very bad.
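
A minimal requestAnimationFrame loop that keeps an eye on that budget; the 16.7-millisecond threshold and the empty moveBoxOneStep function are illustrative placeholders.

    // Do the per-frame work right after the browser signals a frame boundary,
    // and warn when the gap between frames blows past the ~16.7 ms budget.
    const FRAME_BUDGET_MS = 1000 / 60;
    let previous = performance.now();

    function onFrame(timestamp) {
      const elapsed = timestamp - previous;
      previous = timestamp;
      if (elapsed > FRAME_BUDGET_MS * 1.5) {
        console.warn('Possible dropped frame: ' + elapsed.toFixed(1) + ' ms since last frame');
      }
      moveBoxOneStep();               // placeholder for the real animation work
      requestAnimationFrame(onFrame); // schedule the next frame's work
    }

    function moveBoxOneStep() {
      // animation work would go here; left empty so the sketch stays self-contained
    }

    requestAnimationFrame(onFrame);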

[19:42] We get a phenomenon known as jank. I’ll call it “janky.” You can really see it on underpowered devices like your mobile phone. Particularly, you will notice it if you are scrolling, and you put your finger down on the screen, and you kind of scroll from the top to bottom, and the pixel right underneath your finger doesn’t stick to it.

[19:59] You’re like, your finger get’s down to the bottom and then that pixel kind of catches up after it and it’s stutter-y. That’s jank. You are dropping frames, and that happens when you are scrolling and animating it, and things like that.

Craig: [20:09] Interesting. That’s the best explanation I’ve heard of that. I’m glad you offered that. Anything else we should cover? Anything else that you want to talk about?

Nik: [20:18] I’ll mention something with CSS. The dev tools now will allow you to go in to a time line mode. Where you can turn on the page and start scrolling the page, let some animation run or do whatever the work is that the page needs to do visually. It will show you all of the frames that it painted with a bar chart and how long each one of those frames took.

[20:40] You can see if you’re hitting that 16 millisecond number and getting 60 frames a second or not. 60 frames a second, anything better than that, we don’t really perceive. That’s like smooth as butter. That’s the experience that everyone is shooting for.

[20:53] Because, as I said, there is this ripple effect. If I manipulate something in the DOM, say I do something simple and change the size of a div, then because I've done that, the layout engine in the browser, the one that calculates the geometry, the skeleton of the page, where all the boxes go, has to recalculate the page. Because I moved that div, it might have wrapped something, a float might have been broken, or the whole page could change because I made something five pixels wider. It has to recalculate all of that, and once it's recalculated everything, it has to repaint the whole page.

[21:24] My simple little JavaScript change of making a div five pixels larger could take 40, 50, 60 milliseconds because of all this corresponding work. These browser tools that show you the bar charts tell you where that time was spent, which is really handy. What it comes down to is which properties you are changing. I mentioned width as my example, and width is one of the box model properties. All of the box model properties will affect the layout engine, and any time you affect the layout engine, the paint engine has to clean up afterward. Layout is always followed by paint.

[21:59] Some properties, like color, don't affect layout at all. You can skip that whole step and go straight to painting the changed color. Theoretically, that should perform better, but you have to test it, on every site. Then there are jank-free properties, like opacity, that are handled by the GPU. They go straight to the compositing layer, which is a third and final stage.
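
In code, the difference looks something like the snippet below; the .box selector is an assumption, and which properties stay off the layout path is exactly what the csstriggers.com site Nik mentions next documents.

    const box = document.querySelector('.box'); // assumed to exist on the page

    // Triggers layout (and then paint): the geometry of the page changes.
    box.style.width = '320px';

    // Typically skips layout and can be handled by the compositor: the element
    // is moved and faded without changing the page's geometry.
    box.style.transform = 'translateX(220px)';
    box.style.opacity = '0.5';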

[22:23] I want to mention a website called csstriggers.com, which shows, for every CSS property, which subcomponents of the browser will be affected. You might think, "I'm moving a lot of things around, I'm using widths or heights or whatever," and you'll see, "Oh yeah, I'm thrashing the layout."

Craig: [22:41] Interesting. csstriggers.com.

Nik: [22:45] I know this is all hard to talk about and visualize on a podcast. I actually do a presentation at conferences called Full Stack Web Performance. It is a blitz of an hour where I cover all of this, showing a lot of demos. It's very demo-driven, and it's available online. I can provide that URL.

[23:01] That’s on my blog at nikcodes.com, and that might be a place to watch and get started with all of this stuff. It’s also on that Perf-Tooling today, the video is there, as well.

Craig: [23:09] OK, so, nikcodes.com is where we can find all this stuff.

Nik: [23:12] Yeah, at N-I-K-codes.com

Craig: [23:14] N-i-k, yes, now that’s right. No one will get that right the first time, now that you’ve said that.

Nik: [23:18] No, no, they won’t.

Craig: [23:19] Yeah, come check out the show notes, and you’ll get that URL, if you are in the car listening to this or something like that. Thanks for coming on the show today Nik. It was a great talk.

Nik: [23:27] Thanks, for having me. I really appreciate it.

[23:28] [music]

Craig: [23:31] Thanks for listening to the Front-End Developer Podcast. Please subscribe to the Podcast by going to the iTunes store on your phone or desktop and then, searching for Front-End, or via RSS by going to frontendcast.com where you’ll also find show notes and a full transcript of each episode.

[23:45] You can also sign up for my free AngularJS, Backbone and Ember crash course. All of them solve the same problems, so why not learn them together? See you next time.
