
Navigating the module maze: history of JavaScript module systems

JavaScript was designed to handle uncomplicated scripting tasks that we know from everyday life - handling events, updating content, etc. However, nowadays we write huge and complex apps in JS. This gap between the initial design and current usage is the root cause of a lot of frustration in the JS community. Let's look at one of these sources of frustration and confusion: modules.

Modules were one of those topics I never felt I fully grasped. I knew about ESM and CJS, the differences between them, and some of the history. However, anytime I had to fix a setup error related to modules, I got quite lost. Nowadays, it's not just JS - we've got TypeScript, bundlers, package managers, and a huge number of environments, as it's no longer just the browser and Node. We have to operate at so many levels of abstraction that it can quickly get quite challenging.

Suffice it to say - modern frontend is complicated. But I firmly believe that understanding the basics leads to better mental maps, a grasp of the whole process, and being a better developer. Let's dive in.


What exactly is a module?

Nowadays, we think of modules mostly as files that export some functionalities. So, modules are a way of splitting our programs into smaller, separate entities that we can import from.

However, a module doesn't have to be a separate file. We think this way because of ES6 modules, where each file is a module and we can't have two modules in one file. So, if you have 1000 modules, in ES6 you have 1000 separate files. That means 1000 files sent to the browser, at least if you don't have any build process.

Take a look at the below snippet:

const module = (function Module(arg) {
  // arg stays private inside the closure
  const foo = () => {
    console.log(arg);
  };
  // only what we return is exposed as the public API
  return { foo };
})("My module");

module.foo();

It turns out that this is a perfect example of a module. This pattern is called the revealing module pattern. It is not file-based, though, meaning we could have a similar module2 below it, in the same file, and it'd be perfectly fine - as sketched below.
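For instance, here's a minimal sketch of such a second module (module2, Module2, and bar are made up for illustration) living right below the first one:

const module2 = (function Module2(arg) {
  const bar = () => {
    console.log(arg.toUpperCase());
  };
  return { bar };
})("My second module");

module2.bar(); // "MY SECOND MODULE"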

The module concept is strictly connected to the notion of encapsulation, meaning that we have some control over exposing our data and functionalities to the outside. For example, in the above code, the user has access to the public API exposed by a module (in this example, it's only one function - foo), but doesn't have access to the arg. Without encapsulation, our module pattern becomes simply a namespace pattern:

const namespace = {
  arg: "My module",
  foo: (arg) => {
    console.log(arg)
  }
}

console.log(namespace.arg); // "My module" - the internal state is fully exposed
namespace.foo(namespace.arg);

As you can guess, modules had been around long before the introduction of ES6 modules, just in a slightly different form. Back then, there was no first-class language support for them, so programmers had to come up with an idiom that worked for them. And this was completely fine and usable. In fact, people kept using the revealing module pattern even after the introduction of ES6 modules, because of incompatibilities between different types of modules, which we will get to later.

Sidenote: modules are singletons

Modules in essence consist of two parts - an internal state and a public API. However, remember that modules are always executed only once, whether through an IIFE, as in our example, or as ES6 file-based modules. They're like singletons in that sense. Two files importing one module will work with the same internal state of that module.

That is quite obvious when using the revealing module pattern, but it can be a surprise when working with ES6 modules for the first time. So if you don't believe me, take a look at the example below.

Let's say we have a small workspace with just four files, all in the same directory:

[Figure: four files in one directory - module.js, file1.js, file2.js, and index.js]

Now, let's define our module. It will consist of some internal state, a variable x, and a public API that uses this internal state, an increment function:

/* module.js */
var x = 0;

export const increment = () => {
  ++x;
  console.log(x);
};

We want to use this public API in file1.js and file2.js. The code in both of them will be the same:

/* file1.js */
import { increment } from "./module";

increment();

and the same in file2.js:

/* file2.js */
import { increment } from "./module";

increment();

We have our setup; let's use it in our very simple main file - index.js:

/* index.js */
import "./file1";
import "./file2";

And that's it, just two imports. What do you think we'll see in the console?

1
2

Remember that state isn't redefined for every import in some magical way. This is still regular JavaScript, where every file that imports module.js will implicitly work with the same variable x (not directly, though, but through the public API).

Ok, that's enough about modules as singletons. We've got a good grasp on the fundamental concept of a module. Let's have a quick overview of all of the module systems that are used in the JavaScript world today.

We'll start with the two most ubiquitous module systems - CommonJS and ECMAScript modules.

CommonJS

On Jan 29, 2009, Kevin Dangoor published the blog post that marked the beginning of the ServerJS module system, later renamed to the CommonJS that we know and love to this day.

The motivation can be best seen in the following paragraph:

"JavaScript needs a standard way to include other modules and for those modules to live in discreet namespaces. There are easy ways to do namespaces, but there’s no standard programmatic way to load a module (once!). This is really important because server side apps can include a lot of code and will likely mix and match parts that meet those standard interfaces."

As you can see, the goal from day one was to create a standard module system to be used outside the browser. The browser was never the environment that the CommonJS team had in mind. In fact, to this day, browsers don't support the CommonJS module syntax. If you wanted to use CommonJS in your JavaScript code and ship it to the browser, you would have to use a transpiler and have it convert your CommonJS to ES6-style modules. Here's an example of a Babel plugin that will do this transformation: link.

Enough about ES6, though. Let's see a small example of a CommonJS module.

/* module.js */
var x = 0;

exports.increment = () => {
    ++x;
    console.log(x);
}

/* **************************** */

/* index.js */
const m = require("./module");

m.increment();

Hopefully, this looks somewhat similar to the revealing module pattern example from before. Obviously, the syntax is different, but the general idea can be noticed right away. The public API (the increment function) is exported by attaching it to the exports object. The internal state is just regular JavaScript, with no need for any syntax changes.

The module.js file could've been written using the module.exports syntax as well:

/* module.js */
var x = 0;

const increment = () => {
    ++x;
    console.log(x);
}

module.exports = { increment };

This is all because what we're doing, in fact, is modifying one of the properties of the module object. exports is one of these properties, but there are many more, like filename or require. Because module.exports is an object, the regular JavaScript object rules apply to it.
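If you're curious, you can inspect that object yourself; a quick sketch (the exact shape varies between Node.js versions):

/* inspect.js */
// Prints the Module instance - id, path, exports, filename, loaded,
// children, paths, and so on
console.log(module);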

Consider the following example: let's also export the variable x. Then, in index.js, we'll increment this imported x "manually":

/* module.js */
var x = 0;

exports.x = x;
exports.increment = () => {
  ++x;
  console.log(x);
};

/* **************************** */

/* index.js */
const m = require("./module");

m.x++;
m.increment();

The output will still be 1. This shouldn't come as a surprise once you know that what we're doing here is just adding properties to an object. exports.x received a copy of the number's value at export time, so m.x++ changes only that property, not the module's internal variable x. It is, however, slightly different behavior from what ES6 modules give us, as we'll see shortly - so it's worth being aware of.

Another important property of CommonJS modules in Node.js is that they are synchronous and cached. Caching simply means that every require(x) will return exactly the same object. In other words, a module is executed only once and the result is served from the cache. In addition, as mentioned, CommonJS imports (requires) are synchronous - they are processed line by line, like any other JavaScript code. As we'll soon find out, this is different from ECMAScript modules, which are asynchronous.
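Here's a minimal sketch of what caching means in practice, reusing the module.js from above (the cache-demo.js file name is made up for illustration):

/* cache-demo.js */
const a = require("./module");
const b = require("./module");

console.log(a === b); // true - both calls return the same cached object

a.increment(); // 1
b.increment(); // 2 - shared internal state; module.js ran only once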

ECMAScript modules

ES6 gave us many great things - let and const variable declarations, arrow functions, promises, and... for the first time in history, built-in modules! When writing ES6 modules, everything is assumed to be private. You have to explicitly state what you want to export. Everything that you don't explicitly export stays private and is accessible only inside the module. Want to make something public? Use export, easy-peasy.

We've all seen countless examples of ES6 modules, but anyway, here's our module again, this time as an ES6 module:

/* module.js */
var x = 0;

export const increment = () => {
  ++x;
  console.log(x);
};

/* **************************** */

/* index.js */
import { increment } from "./module.js";

increment();

Of course, there are many more import/export styles that you can use with ES6 modules. As always, you can find all of them on MDN.

There are a couple of subtle differences here.

One of them is the import path: import { increment } from "./module"; became import { increment } from "./module.js";. When using ES6 modules, import paths have to be correct relative or absolute paths, including file extensions. But as always, there are exceptions. For example, if you use your ES6 modules in a browser environment with an import map, you can omit file extensions. In the Node environment, you can use --es-module-specifier-resolution=node to customize the resolution algorithm and skip file extensions, as well.

The second is the behavior of the bizarre example with the x variable and increment function that we saw before. Let's see what happens when we use ES6 modules.

/* module.js */
export var x = 0;

export const increment = () => {
  ++x;
  console.log(x);
};

/* **************************** */

/* index.js */
import { x, increment } from "./module.js";

++x;
increment();

In this case, we'll get the following error:

++x;
  ^

TypeError: Assignment to constant variable.

How so? After all, the x variable was defined using the var keyword. The thing is, what we import is not the x variable itself. Instead, it's a read-only live binding - a view of that variable - and reassigning such a binding is an error. However, you can still mutate the value it points to. Consider what happens when we change x to be an object instead of a number and try to mutate it:

/* module.js */
export var x = {};

export const log = () => {
  console.log(x);
};

/* **************************** */

/* index.js */
import { x, log } from "./module.js";

x.prop = "x";
log();

Output is { prop: 'x' } and hopefully everything makes sense!

It was already hinted above that ES6 modules are asynchronous. This is one of the bigger distinctions between the two most popular module systems. If you think about a large-scale application with thousands of modules (or more), it becomes pretty clear why we wouldn't want to process each module one by one. This is also key to some language functionalities - like top-level await. The MDN description says it all: modules can act as big asynchronous functions without blocking other modules from loading.
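A minimal sketch of top-level await (the config.mjs file name and the URL are made up for illustration):

/* config.mjs */
// Only valid at the top level of an ES module - the module simply
// finishes loading once the awaited promises resolve
const response = await fetch("https://example.com/config.json");
export const config = await response.json();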

So it's 2015, and we still obviously have CommonJS modules but it's just a matter of time before we all start using the new standard. After all, ES6 has just dropped, and we have first-class support for modules! Life's good, we can finally all start writing the same module syntax. Right? Right??? Well...

Incompatibilities

... of course not.

It always takes a while for a spec to be implemented in all environments. It's infeasible to drop a new spec and expect it to immediately work correctly in all runtime environments. Take, for example, the Fetch API. It was introduced in 2015, the same year the ES6 spec landed, and most major browsers supported it soon after. But in Node land, the pull request with the Fetch API was merged in 2022. Before that, programmers had to use polyfills, like the node-fetch package. However, this is an extreme case rather than the rule.

Check out this very helpful site showing which Node version is required for each ES6 feature. Take, for example, arrow functions. They were pretty well supported already in version 4, and fully supported since version 6 - not bad.

All of this is to say, people obviously expected some delay in introducing ECMAScript modules in Node. Chrome had full support in 2017, Firefox in 2018, and Node...

Version 12 finally had a stable implementation of ECMAScript modules. Here's a brief history. Version 12 of Node.js was released in 2019, so roughly four years after the initial spec came out. Not ideal at all.

And maybe the worst part of it? The process. The communication between TC39 and the Node.js team was... well, to put it lightly, not ideal. One could maybe even say non-existent. So the ES6 spec lands, ECMAScript modules are introduced, Node.js is incompatible, and then there's silence. And only after some significant time (roughly a year) did some smart people realize that they should probably try to resolve that.

So then the talking started. It was quite frustrating, as we had those beautiful ES6 modules introduced and most people were very eager to start using them. Unfortunately, some details, like the NPM registry being completely incompatible with the new syntax, were slightly problematic :)

Temporary solutions

How do we make all of this work? First of all, NPM packages used CommonJS modules. Browsers don't support them, though. We unfortunately can't just import an NPM package into our code that runs in the browser and expect it to work.

So one of the ways people made it work was with bundlers. One of these was Browserify. They have a very simple explanation and use case on their site:

"Browsers don't have the require method defined, but Node.js does. With Browserify you can write code that uses require in the same way that you would use it in Node."

So, in short, you'd use require in code meant to be run in a browser and compile it using Browserify.
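The typical workflow, per the Browserify docs, is a single CLI call that follows all the require calls and emits one file for the browser:

browserify main.js -o bundle.js

You'd then include the resulting bundle.js in a script tag.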

Other people would pull packages from a CDN, for example using unpkg: https://unpkg.com/react/umd/react.production.min.js. This is an example of UMD (Universal Module Definition), meaning that it will work in any environment we use - browser, Node, doesn't matter.

Sidenote: why are there lib, es, and dist directories in a package I'm importing?

Package maintainers technically don't have to support all module systems. However, publishers want their code to be "consumable" by everyone, so they usually provide various versions. Let's check them out:

  1. lib - When you install an NPM package, you used to "consume" code directly from this directory, because the lib directory is for modules that use CommonJS. You will still use this directory to import code in environments that use CommonJS modules.
  2. es - If you use ECMAScript modules, this is the directory you're looking for. This is usually the best solution if you can use it, as code from the es directory is tree-shakable.
  3. dist - It's used mostly for shipping a UMD build to a CDN, as in the example with unpkg and React.

Let's look at an example: how did package maintainers support these module systems over the last couple of years? We'll use the AntD open-source GitHub repository as an example. You can find the repo here and use the branch switcher to hop onto the mentioned branches.

v0.12-stable, with the last package.json update on May 5, 2016. The properties that we're looking for are files, main, module, and unpkg:

 "main": "lib/index",
 "files": [
   "lib",
   "style"
 ],

Pretty clear: we have one entry point to our program, the lib/index module, so that's the one we have to use. There are no module or unpkg fields, as at that time there weren't any module builds to point them to.

As you can see, at that time (around 2016) AntD supported only CommonJS modules. If you wanted to use this package in the browser, you had to use one of the methods mentioned above.

1.x-stable, this one with the last update on Jan 12, 2017. Looking inside the package.json file, we can see:

"main": "dist/antd",
"files": [
  "dist",
  "lib",
  "index.d.ts"
],

We already know the purpose of the dist directory - it's used mostly for CDNs. And in fact, you could pull AntD from a CDN, for example using unpkg.

Now, let's check out the latest version as of the time of writing this article:

"files": [
  "dist",
  "es",
  "lib",
  "locale"
],
"main": "lib/index.js",
"module": "es/index.js",
"unpkg": "dist/antd.min.js",

As you can see, we have different entry points depending on the module system that we use. The module field is still an entry point to our program, similar to main, but it points to an ECMAScript module instead of a CommonJS one.

What about the unpkg field? It's not a standard, but rather a way of configuring your package so that CDNs know which file to serve. This should be, and almost always is, minified UMD-style code.

Remember that the folder names are just a convention. In theory, a package maintainer could rename the dist folder to cdn, for example. However, it is unusual to change these names, and you'll usually see the ones mentioned above. The main and module fields are not just a convention, though, so pay attention to those. If you see some other directory name, it doesn't necessarily mean that the package doesn't support your module system.

The compromise

Unfortunately, the incompatibility couldn't be easily fixed this far down the road. There had to be some compromises to allow the usage of ECMAScript modules in Node.js, and there were (and still are) some.

If you want to use ES6 modules in Node.js, you have to either use the .mjs extension or set the type field in your package.json file to "module". The type field has only two options - "commonjs" or the mentioned "module". Setting it causes all of the files to be treated as either CommonJS modules or ECMAScript modules, respectively.
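For instance, a minimal package.json switching a whole package to ESM might look like this (the name field is made up for illustration):

{
  "name": "my-package",
  "type": "module"
}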

If you try to load the ES6 module without using one of the above solutions, you'll see a warning similar to the one below:

"Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension."

Can I import the CJS module and require the ESM module?

In short, yes. Let's focus first on the case of importing a CommonJS module into an ES6 module using the import keyword.

It turns out that we can safely import a CJS module into an ESM one like a regular ESM module. The import will always be asynchronous. In addition, we can use all forms of imports - default, namespace, etc. Let's see some examples.

First - default import:

/* cjs-module.js */
var x = 0;

exports.increment = () => {
  ++x;
  console.log(x);
};

/* **************************** */

/* index.mjs */
import cjs from './cjs-module.js';

cjs.increment();

Of course, if you want, you can also use the other form of default import - import { default as cjs } from './cjs-module.js';. The default key will always point to the module.exports value.

Node.js also supports named exports, which means that you can also do this:

/* index.mjs */
import { increment } from './cjs-module.js';

increment();
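
Namespace imports work as well; a minimal sketch, reusing the cjs-module.js from above:

/* index.mjs */
import * as cjs from './cjs-module.js';

cjs.increment();         // named export, detected by Node's static analysis
cjs.default.increment(); // default always points at module.exports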

Now, let's look at the second case, which is unfortunately slightly more complicated.

“require” ES6 module?

Sadly, we cannot just require ESM, as that would break the synchronicity constraint - require() is synchronous. Instead, we have to use the import() syntax, also called dynamic import.

/* esm-module.mjs */
var x = 0;

export const increment = () => {
  ++x;
  console.log(x);
};

/* **************************** */

/* index.js */
import("./esm-module.mjs")
    .then(m => m.increment());

import(...) returns a promise that resolves to the module's namespace object, giving us access to its exports.
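If you're already in an async context, you can of course await it instead; a minimal sketch (the main wrapper is made up for illustration):

/* index.js */
async function main() {
  const m = await import("./esm-module.mjs");
  m.increment();
}

main();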

Honorary mentions - AMD and UMD

CommonJS and ECMAScript modules are not the only module systems that were and are still present in the JavaScript ecosystem, though they're the most ubiquitous. However, before the wide support for ES6 modules, web developers didn't really have a good module system to use in the browser.

You couldn't use CommonJS modules because of their synchronicity. That is fine in the Node.js environment but unacceptable in the browser, as HTTP requests are asynchronous by definition. So... another module system was needed. AMD to the rescue!

Asynchronous Module Definition (AMD)

The main problem with the AMD module system is how it works under the hood, but first, let's see a simple example from the require.js page:

//Calling define with module ID, dependency array, and factory function
define('myModule', ['dep1', 'dep2'], function (dep1, dep2) {

    //Define the module value by returning a value.
    return function () {};
});

Doesn't look bad - we've got the define function, which is provided by a module loader. As AMD is just a specification (similarly to CommonJS), we need a loader - code that implements this spec. One such loader is the aforementioned RequireJS.

So far so good, but we need to mention the simplified CommonJS wrapping. We can see require calls in the given example:

define(function (require) {
    var dependency1 = require('dependency1'),
        dependency2 = require('dependency2');

    return function () {};
});

This is surprising, considering that we know require calls are synchronous. How does that work? The loader calls Function.prototype.toString() on the factory function (yes, on a function!) and scans the resulting source with regexes for require('...') calls, so it can fetch those dependencies up front, before the factory ever runs. Now, what happens if we have, for example, require('./module') inside some string in this function? The regex will match it anyway, resulting in an HTTP request for a module.js file that may not even exist.
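A greatly simplified, hypothetical sketch of that scanning step (findDependencies is made up; real loaders like RequireJS are far more careful):

const findDependencies = (factory) => {
  // turn the function back into source text...
  const source = factory.toString();
  // ...and regex-scan it for require('...') calls
  const re = /require\s*\(\s*["']([^"']+)["']\s*\)/g;
  const deps = [];
  let match;
  while ((match = re.exec(source)) !== null) {
    deps.push(match[1]);
  }
  return deps;
};

findDependencies(function (require) {
  const dependency1 = require('dependency1');
  return function () {};
}); // -> ['dependency1']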

We won't go into more details as the AMD spec is not really relevant anymore. Nowadays, there is no use case for using the AMD module system. ECMAScript modules are asynchronous, so the only reason ever to use this module system is no longer viable.

Universal Module Definition (UMD)

The idea behind UMD is the ability to use the module everywhere, meaning in every environment. If you expose your code as a UMD module, the consumer can use this code in Node.js via require, but also with the AMD module system and in the browser.

The formal definitions can be found in the GitHub repository. There is no one concrete piece of code that implements this module system; instead, you can see a couple of patterns in the templates directory, for example, commonjsStrict.js:

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD. Register as an anonymous module.
        define(['exports', 'b'], factory);
    } else if (typeof exports === 'object' && typeof exports.nodeName !== 'string') {
        // CommonJS
        factory(exports, require('b'));
    } else {
        // Browser globals
        factory((root.commonJsStrict = {}), root.b);
    }
}(typeof self !== 'undefined' ? self : this, function (exports, b) {
    // Use b in some fashion.

    // attach properties to the exports object to define
    // the exported module properties.
    exports.action = function () {};
}));

We're not going to go through every bit of this code but, as you can see, it's just regular JavaScript. Remember the idiomatic way of implementing module systems mentioned before? This is one of them - a code pattern used by programmers often enough that it has become the de facto standard.

UMD was used mostly before the introduction of ES6, as ECMAScript modules are not supported by this pattern. So, today it's practically dead. However, if you see a lot of environment checks, as in the code above, it's typically a UMD module. If you check the React unpkg module mentioned earlier, even in the minified code, you can clearly see this typical style.

We now know about all major module systems used in JavaScript. With the gained knowledge, let's jump into some practicalities and real-world examples from today's web development.

Throwing TypeScript into the mix

TypeScript treats every file with a top-level import or export keyword as an ECMAScript module, the same as regular JavaScript. If you import something available only in TypeScript, like a type, it will be stripped during the compilation step. There are multiple different syntaxes to export and import a module in TypeScript that satisfy pretty much every need. You can find them in the TS documentation.

“module” compiler option

Using the export and import keywords in TypeScript doesn't mean that we can only compile TypeScript code to ECMAScript modules. Using the module property in the tsconfig file, we can choose the kind of module code that will be generated. You can see the code generated for different options here.
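For instance, a minimal tsconfig.json asking the compiler to emit CommonJS modules might look like this (just a sketch; the rest of compilerOptions is up to you):

{
  "compilerOptions": {
    "module": "commonjs"
  }
}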

The value of the module property can be one of the well-known module systems that we already covered, like AMD, CommonJS, ES6, and UMD. However, there are also some different values, like ES2020, ES2022, and more. Those are not completely different module systems, but rather evolutions. For example, ES2020 is still the ECMAScript module system, but with support for dynamic imports. Dynamic import was introduced in ES11 (ES2020), hence the ES2020 property value.

Other supported options are NodeNext and Node16. These offer yet another way of making the compiled JavaScript code use either ECMAScript modules or CommonJS ones. The type of module system used depends on the type field in the package.json file and the file extension.

The TypeScript compiler will look for the closest package.json file by going up the file tree. Then, based on the type field value (either module or commonjs), it will decide whether to compile a file to an ECMAScript module or a CommonJS one.

Remember that setting the type field sets the module type for all the files? Well, as always, there is an escape hatch. If you want to override the module type for one file, you can do that using the file extension. A file with the .mjs extension is always an ECMAScript module. Conversely, a file with a .cjs extension is always a CommonJS module. TypeScript supports this with the corresponding extensions - .mts and .cts.

Even if your package.json specifies a different module system, the file extension overrides it: .mts files always compile to .mjs, and .cts files to .cjs. The extension itself cannot be overridden, so once you've set it, the module type stays fixed - no surprises anymore.

When using this setting, it's crucial to add file extensions in import paths, as without them imports will fail when compiled to ECMAScript modules. Adding extensions in imports works in both CJS and ESM, so at this point, you should probably stick to them.
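Here's a minimal sketch of what this looks like in practice (math.mts and main.mts are made up for illustration); note that the import path points at the compiled .mjs output, not the .mts source:

/* math.mts */
export const add = (a, b) => a + b;

/* **************************** */

/* main.mts */
import { add } from "./math.mjs";

console.log(add(1, 2)); // 3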

Module resolution

Slightly unrelated, but necessary for the full picture, is the process of resolving an import. The module resolution topic isn't about CJS vs. ESM, etc. It's rather about the algorithm that the compiler uses to figure out what your import actually refers to. Using the generic word 'compiler' is not a mistake here, as the algorithm can differ, for example between TypeScript and Node.js module resolution.

When the import is relative, the algorithm is straightforward. Go to the given location and grab the module; sounds easy. However, as we'll find out, there are a couple of different strategies that could be applied here.

It turns out that in TypeScript itself, we have two main module resolution strategies: classic and node.

The classic strategy is more of a fun fact at this point than anything else. It was used before the 1.6 release, and you shouldn't have to use it anymore. Relative imports work as expected, so for moduleA located in src/moduleA, the import import { b } from './moduleB' will result in two lookups:

  • src/moduleB.ts
  • src/moduleB.d.ts

So far so good. However, absolute imports are where it gets weird. If we change our import to import { b } from 'moduleB', it will result in the following lookups:

  • /src/moduleB.ts
  • /src/moduleB.d.ts
  • /moduleB.ts
  • /moduleB.d.ts

As you can imagine, the deeper we go, the more useless lookups will be done.

This strategy is not viable anymore, as what we actually want is the behavior used in Node.js: we have to account for the node_modules directory and some package.json fields. That's why we have and use node module resolution. What happens if we have the aforementioned import (import { b } from './moduleB') with this strategy? For simplicity, let's ditch TypeScript for now (it also checks for .d.ts and .tsx files, for example) and see how Node.js would resolve such an import (an example layout follows the list):

  1. Check src/moduleB.js.
  2. Check if the src/moduleB directory contains a package.json with the main field. If so, try to resolve the path given by that value.
  3. Check if src/moduleB contains index.js. If it does, treat this index.js as the entry point.
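
For example, here's a hypothetical layout where step 2 kicks in (all names made up for illustration):

src/
├── index.js          // contains require('./moduleB')
└── moduleB/
    ├── package.json  // { "main": "./lib/entry.js" }
    └── lib/
        └── entry.js  // this file is what actually gets loaded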

What about absolute imports? The algorithm basically checks every level for a node_modules folder, going up the directory chain one by one. So if we change our import to import { b } from 'moduleB', Node.js will first look in src/node_modules/moduleB.js, and then in node_modules/moduleB.js. On each level, all three of the checks mentioned above are done (.js file -> main field -> index.js file). You can check out the complete algorithm here.
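Node.js actually exposes this lookup chain on the module object, so you can see it for yourself; a quick sketch (the exact paths depend on where the file lives):

/* src/index.js */
console.log(module.paths);
// e.g. [ '/project/src/node_modules', '/project/node_modules', '/node_modules' ]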

TypeScript doesn't differ much here, as it just has to check for some additional files with different extensions (.ts, .tsx, and .d.ts) and the @types directory. You can check the full algorithm here.

“exports” field

Remember the main and module fields in the package.json file? Those were the CommonJS and ESM entry points to your package. Node.js used the main path while, for example, bundlers used the module path. However, in newer versions of Node.js, we have a much more powerful option - the exports field. It simplifies and enables a couple of things, but we'll focus on conditional exports.

Imagine that you have a monorepo with backend, frontend, e2e tests, etc. You want to create a directory with some constants that will be reused in all of these directories. Let's name this directory common. And needless to say, we are using TypeScript.

Because this has to be usable in many environments, we have to compile it to both CommonJS and ESM. Following the convention, we'll compile our code to CommonJS modules in the lib directory, and to ESM in the es directory.

The sad part about this is that we have to remember every time which directory we should use. If you're in frontend code, you should import from the es directory; in backend code, from lib; in e2e we're back to ESM, so es - ugh. Isn't it obvious that when I'm using import, I mean ESM, and when I'm using require, CJS?

Well, it turns out that with the exports field, we can automate that. In our package.json, we can specify paths to match when those keywords are used:

{
  "exports": {
    ".": {
      "import": "./es/index.js",
      "require": "./lib/index.js"
    }
  }
}

Now, when we use import, it will automatically match the index.js file in the es directory. On the other hand, if we use require, it gets matched with index.js from the lib directory. There are a couple more conditions that you can check out here.
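Here's a minimal sketch of both sides, assuming our package is published under the (made-up) name common:

/* backend.js - CommonJS, so require matches ./lib/index.js */
const constants = require("common");

/* **************************** */

/* frontend.mjs - ESM, so import matches ./es/index.js */
import * as constants from "common";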

Conclusion

The history of JavaScript modules is an unfortunate one. All the delays, incompatibilities, and compromises have significantly hurt the overall understanding of the topic. On top of that, all of the workarounds invented to paper over these nuisances probably make developers think about module systems even less, while the little improvements and shortcuts make gaining a full understanding of the topic even more complicated.

Hopefully, with the help of this article, you've gained a good basis that you can use to resolve any problem related to module systems. Maybe in the near future, we won't need to worry about the module system at all, with features such as the exports field. That said, someone has to know and use it, and that someone is hopefully you!


Michał Kuliński

Frontend Engineer

Michał is an experienced Frontend Engineer who develops robust web applications. He works with various UI technologies, including JavaScript frameworks like React and Angular. He adeptly handles state management using Redux and MobX, ensuring consistent data flow.
