Introduction to Asynchronous JavaScript
Hey, great to meet you! I’m Mat, but “Wilto” works t— wait. Hm. That’s the expression of someone that has met me already. Okay, yeah, I see what’s going on here. The course must be over — or, the course was over. I think I know how this module ended up here. Just humor me for a second, here.
You’ll frequently see JavaScript broadly described as being single-threaded, which is a framing that I don’t love, personally. It’s fundamentally true, sure; I’m certain that I would’ve taught you about this and the concept of the call stack — the data structure that manages the execution of the global context and function contexts?
Just to recap, since that was probably hundreds of unnecessary emdashes ago, knowing me: when the current execution context invokes a function, a new function execution context is created, added to the top of the stack, and takes over execution — when that new execution context concludes, it pops off the stack and the execution context that invoked it resumes. JavaScript code is executed in a very linear way: top to bottom, left to right, one execution context at a time. That linear sequence of execution is called a thread.
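Here’s a quick sketch of that recap in action (the function names are just for illustration) — one execution context at a time, pushed and popped in order:

```js
function first() {
  console.log( "Inside first()." );
}

function second() {
  // Invoking first() creates a new execution context on top of the stack...
  first();
  // ...and once that context pops off, execution resumes right here:
  console.log( "Inside second()." );
}

second();
console.log( "Back in the global execution context." );

/* result:
Inside first().
Inside second().
Back in the global execution context.
*/
```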
With what you’ve learned so far, it sure seems like JavaScript, as a language, is “single threaded.” Then one day you’ll learn about Web Workers, which — not to put too fine a point on this — allow you to execute JavaScript code in another thread.
It’s more accurate (and way more whimsical) to say that a JavaScript realm is single threaded. A realm refers to an environment where code is executed; a realm will provide its own intrinsic objects and global environment. A browser tab is a realm, and within that realm is the single thread where JavaScript is executed — the main thread. An iframe is a realm of its own, but JavaScript running in it is still executed on that same main thread.
A Web Worker is a separate realm, with a worker thread. An overarching JavaScript application can make use of multiple execution threads, each executing code that isn’t shared with the others, but able to communicate back and forth by passing messages. Within each realm, though, there’s only one execution thread.
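As a rough sketch of that back-and-forth, assuming a hypothetical worker.js file sitting alongside the page: the main thread and the worker thread each run their own code, and trade data using postMessage.

```js
// main.js: runs in the browser tab's realm, on the main thread
const worker = new Worker( "worker.js" );

worker.addEventListener( "message", ( event ) => {
  console.log( "Heard back from the worker thread:", event.data );
});

worker.postMessage( 40 );
```

```js
// worker.js: runs in its own realm, on a worker thread
self.addEventListener( "message", ( event ) => {
  // Do some work off the main thread, then report back:
  self.postMessage( event.data + 2 );
});
```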
The main thread is where a browser tab executes JavaScript, but the main thread can also be occupied by the processes of repainting and reflowing the page layout, CSS animations, garbage collection, and processing user interactions. Browsers can distribute these tasks across multiple threads, but not JavaScript — in the context of a browser tab, JavaScript can only run on the main thread. When you’re reading up about front-end performance, you’ll frequently encounter the concept of “long tasks,” which bog down the main thread. The more we cram into the main thread — the more complex our JavaScript applications become — the more we stand to damage the overall browsing experience.
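To make “long task” a little more concrete, here’s a contrived sketch, assuming the page has a button element on it: while the synchronous loop below is running, the click handler, repaints, and everything else queued up for the main thread have to wait.

```js
document.querySelector( "button" ).addEventListener( "click", () => {
  console.log( "Clicked!" );
});

// A long task: this loop occupies the main thread until it finishes, so
// clicks, repaints, and anything else waiting on the main thread are stuck.
let total = 0;
for ( let i = 0; i < 1e9; i++ ) {
  total += i;
}
console.log( total );
```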
Now, tasks queued on the main thread can include asynchronous actions, like responses to requests made with fetch, timers specified with setTimeout or setInterval, or handlers for user interaction registered via addEventListener. These are all approaches to asynchronous JavaScript that depend on callback functions (usually just “callbacks”) — functions that are invoked when a condition is met:
```js
function theCallback() {
  console.log( "Time's up." );
}

// `theCallback` is invoked after 5000ms (5 seconds):
setTimeout( theCallback, 5000 );

// result (five seconds later):
// Time's up.
```
The main thread doesn’t get put on hold while we’re waiting for the timer to tick away, the results of a request to come back, or a user to interact with an element — the call stack goes right on callin’ and stackin’:
```js
function theCallback() {
  console.log( "Time's up." );
}

console.log( "First." );

setTimeout( theCallback, 5000 );

console.log( "Last." );

/* result:
First.
Last.
*/

// result (five seconds later):
// Time's up.
```
Once those things do happen, though, the execution context for the associated callback function will get shuffled right back into the call stack. Asynchronous tasks like these are managed by way of an event-driven concurrency model made up of the event loop, callback queue (or “message queue”) and microtask queue.
Once the criteria for executing a callback function are met, instead of being placed on top of the stack right away, the callback’s function execution context is placed in one of these queues based on priority. For example, the lower-priority callback queue is reserved for setTimeout callbacks, while the higher-priority microtask queue is reserved for working with Promises.
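Here’s a rough sketch of that prioritization in action: even a setTimeout callback with a 0ms timer runs after a Promise-based microtask, because the microtask queue is drained first.

```js
console.log( "Start." );

// Queued in the (lower-priority) callback queue:
setTimeout( () => console.log( "From the callback queue." ), 0 );

// Queued in the (higher-priority) microtask queue:
Promise.resolve().then( () => console.log( "From the microtask queue." ) );

console.log( "End." );

/* result:
Start.
End.
From the microtask queue.
From the callback queue.
*/
```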
The event loop continuously polls the status of these queues and the call stack; the former to see if there’s anything waiting to be executed, and the latter to see if the stack is empty. If there are tasks in either queue and the call stack is empty, tasks from the queues are pushed to the stack and executed one at a time — first from the microtask queue, then from the callback queue. For example, the callback function associated with setTimeout( () => console.log( "Done." ), 500 ) doesn’t interrupt whatever else is going on in the main thread to execute at the precise 500ms mark; rather, it executes after 500ms plus whatever time it takes for the call stack to empty out.
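To see that delay in action, here’s a contrived sketch that keeps the call stack busy on purpose:

```js
setTimeout( () => console.log( "Done." ), 500 );

// A synchronous loop that occupies the call stack for roughly two seconds:
const start = Date.now();
while ( Date.now() - start < 2000 ) {
  // ...blocking the main thread...
}

// "Done." is logged roughly 2000ms in, not at the 500ms mark, because the
// callback can't be pushed onto the stack until the stack is empty.
```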
That explains how this lesson ended up here. At some point I must have planned a whole thing where the last of your lessons is completed, so this — the asynchronous module — lands back on the “stack” to be “executed.” That sounds like something I’d do. I bet I’ll end up doing a bunch of corny bits like that throughout the course, huh? Or, already did, I suppose.
Well, we’re going to get ourselves unstuck from time in the same way. In this, our final final module, we’re going to learn more about tapping into the event loop — not by way of callback functions, but by way of objects that represent the results they may or may not contain in the future: promises.
Hey — while I’ve got you here, how’d the course turn out?
Ah, no, you’d better not say. Spoilers.