
Normalization vs. Denormalization in databases

Databases are like the digital brains behind many of the apps and websites we use every day. They store all the important data in an organized way so we can easily access it when needed. However, how we organize that data can make a big difference in how well the database works. Today, we’re going to dive into two key techniques used to organize databases: normalization and denormalization. Don’t worry if these terms sound complicated – we’ll break them down and make them easy to understand.

Basic concepts

Let’s start with the basics. Normalization is like tidying up your room. It involves organizing your things so that everything has its place and you don’t have duplicate items lying around. In database terms, it means structuring the data to minimize redundancy (duplicate data) and improve consistency.

On the other hand, denormalization is like making your room more convenient for you, even if it means it’s not as tidy. You might leave your favorite books and snacks out for easy access, even if it means having duplicates elsewhere. In databases, denormalization means merging tables or adding redundant data to speed up data retrieval, even if it sacrifices some neatness and consistency.

Normalization

Why normalize?

The main goal of normalization is to eliminate duplicate data and ensure everything is stored logically. According to Wikipedia:

"Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.

Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design)". 

Think of it as making sure you only have one pair of shoes in your closet instead of multiple pairs scattered around the house. This helps keep the database efficient and less prone to errors when you add, update, or delete data.

Levels of normalization

There are different levels (or “normal forms”) of normalization, each adding more rules to follow:

1. First Normal Form (1NF)

Rule: Ensure each column in a table contains only atomic (indivisible) values, with no repeating groups of columns.

Reason: This eliminates repeating groups and ensures that the data in each column is atomic, meaning that it cannot be divided further. It helps in simplifying the data structure and avoids the pitfalls of storing multiple values in a single column.

Impact: While 1NF ensures a basic level of data integrity by making data retrieval straightforward, it is not sufficient to handle more complex data dependencies and anomalies. Without further normalization, the database might still suffer from redundancy and update anomalies.
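
As a minimal sketch (the table and column names here are hypothetical, not part of the supermarket example used later), here is what fixing a 1NF violation looks like in practice: a column that crams several phone numbers into one field is replaced by one atomic value per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Not in 1NF: the Phones column stores several values in a single field.
conn.execute("CREATE TABLE suppliers_raw (SupplierID INTEGER, Phones TEXT)")
conn.execute("INSERT INTO suppliers_raw VALUES (101, '123-456-7890, 555-000-1111')")

# In 1NF: every value is atomic -- one phone number per row.
conn.execute("CREATE TABLE supplier_phones (SupplierID INTEGER, Phone TEXT)")
rows = conn.execute("SELECT SupplierID, Phones FROM suppliers_raw").fetchall()
for supplier_id, phones in rows:
    for phone in phones.split(", "):
        conn.execute("INSERT INTO supplier_phones VALUES (?, ?)", (supplier_id, phone))

print(conn.execute("SELECT * FROM supplier_phones").fetchall())
# [(101, '123-456-7890'), (101, '555-000-1111')]
```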

2. Second Normal Form (2NF)

Rule: Ensure all non-key columns are fully dependent on the entire primary key.

Reason: This addresses the issue of partial dependency, where non-key attributes are dependent on part of a composite primary key. It ensures that each non-key attribute is only dependent on the primary key, thus eliminating redundancy for non-key attributes.

Impact: By achieving 2NF, the database reduces redundancy and ensures that updates to the data are consistent. However, it might still face issues related to transitive dependencies, which are addressed in 3NF.
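
Here is a minimal sketch of the partial-dependency problem, again with hypothetical table names: with a composite key of (OrderID, ProductID), a column like ProductName depends on ProductID alone, so 2NF moves it into its own table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Violates 2NF: the key is (OrderID, ProductID), but ProductName
-- depends on ProductID alone, so it is repeated on every order line.
CREATE TABLE order_lines_unnormalized (
    OrderID     INTEGER,
    ProductID   INTEGER,
    ProductName TEXT,              -- partial dependency on ProductID
    Quantity    INTEGER,
    PRIMARY KEY (OrderID, ProductID)
);

-- 2NF: ProductName moves to a table keyed by ProductID only.
CREATE TABLE products (
    ProductID   INTEGER PRIMARY KEY,
    ProductName TEXT
);
CREATE TABLE order_lines (
    OrderID   INTEGER,
    ProductID INTEGER REFERENCES products(ProductID),
    Quantity  INTEGER,
    PRIMARY KEY (OrderID, ProductID)
);
""")
```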

3. Third Normal Form (3NF)

Rule: Remove any columns that depend on non-key columns (transitive dependency).

Reason: This ensures that all attributes are only dependent on the primary key and not on other non-key attributes. It eliminates the potential for anomalies that can occur when non-key attributes are dependent on other non-key attributes.

Impact: 3NF improves data integrity by ensuring that changes to non-key attributes do not lead to inconsistencies. It also simplifies the schema and reduces the amount of redundant data that has to be stored and kept in sync.
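
A short sketch of a transitive dependency, with hypothetical names: SupplierPhone depends on SupplierID rather than on the products table's key, so 3NF moves it out.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Violates 3NF: ProductID -> SupplierID -> SupplierPhone is a
-- transitive dependency, so changing a supplier's phone number
-- means touching every product row for that supplier.
CREATE TABLE products_unnormalized (
    ProductID     INTEGER PRIMARY KEY,
    ProductName   TEXT,
    SupplierID    INTEGER,
    SupplierPhone TEXT
);

-- 3NF: supplier attributes depend only on the supplier key.
CREATE TABLE suppliers (
    SupplierID    INTEGER PRIMARY KEY,
    SupplierPhone TEXT
);
CREATE TABLE products (
    ProductID   INTEGER PRIMARY KEY,
    ProductName TEXT,
    SupplierID  INTEGER REFERENCES suppliers(SupplierID)
);
""")
```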

4. Boyce-Codd Normal Form (BCNF)

Rule: A stricter version of 3NF where every determinant must be a candidate key.

Reason: BCNF addresses situations where 3NF does not fully eliminate anomalies, particularly in cases where a table has multiple overlapping candidate keys.

Impact: Achieving BCNF further strengthens data integrity by ensuring that every determinant is a candidate key, thus removing any remaining anomalies and ensuring a more robust schema.
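
The classic textbook illustration of a BCNF violation, sketched with hypothetical names: if each teacher teaches exactly one course, then Teacher determines Course, yet Teacher is not a candidate key of the enrollment table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- In 3NF but not BCNF: the key is (StudentID, Course), yet
-- Teacher -> Course holds (each teacher teaches one course)
-- and Teacher is not a candidate key.
CREATE TABLE enrollments_unnormalized (
    StudentID INTEGER,
    Course    TEXT,
    Teacher   TEXT,
    PRIMARY KEY (StudentID, Course)
);

-- BCNF decomposition: every determinant is now a candidate key.
CREATE TABLE teacher_courses (
    Teacher TEXT PRIMARY KEY,      -- Teacher -> Course
    Course  TEXT
);
CREATE TABLE enrollments (
    StudentID INTEGER,
    Teacher   TEXT REFERENCES teacher_courses(Teacher),
    PRIMARY KEY (StudentID, Teacher)
);
""")
```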

5. Fourth Normal Form (4NF) and Higher

Rule: Address even more complex dependencies, such as multi-valued dependencies.

Reason: These forms are designed to handle more complex scenarios that are not addressed by the earlier normal forms. For example, 4NF deals with multi-valued dependencies, ensuring that a single table does not store two or more independent multi-valued facts about the same entity.

Impact: Higher normal forms provide even more stringent controls over data integrity and further reduce redundancy, though they are less commonly applied in everyday database design.
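
As a brief sketch of 4NF (hypothetical names again): if an employee's skills and spoken languages are independent facts, keeping them in one table forces every combination to be recorded, so 4NF splits them apart.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Violates 4NF: Skill and Language are independent multi-valued
-- facts about an employee, so every combination must be stored
-- just to keep the table consistent.
CREATE TABLE employee_facts (
    EmployeeID INTEGER,
    Skill      TEXT,
    Language   TEXT,
    PRIMARY KEY (EmployeeID, Skill, Language)
);

-- 4NF: each independent multi-valued fact gets its own table.
CREATE TABLE employee_skills (
    EmployeeID INTEGER,
    Skill      TEXT,
    PRIMARY KEY (EmployeeID, Skill)
);
CREATE TABLE employee_languages (
    EmployeeID INTEGER,
    Language   TEXT,
    PRIMARY KEY (EmployeeID, Language)
);
""")
```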

Examples of normalization

Imagine you’re managing a database for a supermarket like Walmart. Initially, you might have one big table that includes products, suppliers, and sales. Normalizing this database means breaking it down into separate tables for products, suppliers, and sales and then linking them together with unique IDs. This way, you avoid having multiple entries for the same supplier across different products.

Example of normalized tables

Products table:

| ProductID | ProductName | SupplierID | CategoryID | Price |
|-----------|-------------|------------|------------|-------|
| 1         | Apple       | 101        | 201        | 0.50  |
| 2         | Banana      | 102        | 201        | 0.30  |
| 3         | Carrot      | 103        | 202        | 0.20  |

Suppliers table:

| SupplierID | SupplierName | ContactInfo  |
|------------|--------------|--------------|
| 101        | FreshFruits  | 123-456-7890 |
| 102        | BananaWorld  | 987-654-3210 |
| 103        | VeggieLand   | 555-123-4567 |

Sales table:

| SaleID | ProductID | Quantity | SaleDate   |
|--------|-----------|----------|------------|
| 1001   | 1         | 10       | 2023-07-01 |
| 1002   | 2         | 20       | 2023-07-01 |
| 1003   | 3         | 15       | 2023-07-02 |

Entity-relationship diagram (ERD)

Here’s a simple ERD to visualize the normalized structure of our supermarket database:

+---------------+        +---------------+        +---------------+
|   Suppliers   |        |   Products    |        |     Sales     |
+---------------+        +---------------+        +---------------+
| SupplierID    |<-------| SupplierID    |   +----| ProductID     |
| SupplierName  |        | ProductID     |<--+    | SaleID        |
| ContactInfo   |        | ProductName   |        | Quantity      |
+---------------+        | CategoryID    |        | SaleDate      |
                         | Price         |        +---------------+
                         +---------------+

- The Products table is linked to the Suppliers table via `SupplierID`.

- The Sales table references the Products table via `ProductID`.
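
If you want to experiment with this structure, a minimal sketch of the schema in Python with SQLite could look like the following. The column types are assumptions, since the article's tables do not specify them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # make SQLite enforce the references
conn.executescript("""
CREATE TABLE Suppliers (
    SupplierID   INTEGER PRIMARY KEY,
    SupplierName TEXT,
    ContactInfo  TEXT
);
CREATE TABLE Products (
    ProductID   INTEGER PRIMARY KEY,
    ProductName TEXT,
    SupplierID  INTEGER REFERENCES Suppliers(SupplierID),
    CategoryID  INTEGER,
    Price       REAL
);
CREATE TABLE Sales (
    SaleID    INTEGER PRIMARY KEY,
    ProductID INTEGER REFERENCES Products(ProductID),
    Quantity  INTEGER,
    SaleDate  TEXT
);
""")
```

The two `REFERENCES` clauses correspond to the two relationships in the ERD: Products pointing to Suppliers and Sales pointing to Products.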

Denormalization

Why denormalize?

Denormalization aims to make your database faster to read and simpler to query, even if it means duplicating some data. It is like leaving your most-used items within arm’s reach because you need to access them quickly.

Methods of denormalization

  • Adding Redundant Data: Include extra copies of data to avoid complicated lookups.
  • Creating Aggregates: Store pre-calculated summary data to speed up queries (see the sketch after this list).
  • Combining Tables: Merge related tables to simplify queries.
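
As a sketch of the second method above, a pre-calculated daily sales summary (the summary table's name and shape are assumptions) can be materialized once, so reports read one small table instead of re-aggregating the raw sales rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Sales (
    SaleID    INTEGER PRIMARY KEY,
    ProductID INTEGER,
    Quantity  INTEGER,
    SaleDate  TEXT
);
INSERT INTO Sales VALUES
    (1001, 1, 10, '2023-07-01'),
    (1002, 2, 20, '2023-07-01'),
    (1003, 3, 15, '2023-07-02');

-- Redundant, pre-aggregated copy of the data: reports read this
-- small table instead of re-summing the raw Sales rows every time.
CREATE TABLE DailySalesSummary AS
SELECT SaleDate, SUM(Quantity) AS TotalQuantity
FROM Sales
GROUP BY SaleDate;
""")

print(conn.execute("SELECT * FROM DailySalesSummary ORDER BY SaleDate").fetchall())
# [('2023-07-01', 30), ('2023-07-02', 15)]
```

The catch is the usual one for redundant data: the summary has to be refreshed whenever Sales changes, or it drifts out of date.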

Examples of denormalization

In our supermarket’s database, a denormalized table might include product details, sales information, and supplier data in a single table. This way, you can quickly pull up all the details about a sale without needing to join multiple tables.

Example of a denormalized table

| SaleID | ProductName | SupplierName | Quantity | SaleDate   | Price |
|--------|-------------|--------------|----------|------------|-------|
| 1001   | Apple       | FreshFruits  | 10       | 2023-07-01 | 0.50  |
| 1002   | Banana      | BananaWorld  | 20       | 2023-07-01 | 0.30  |
| 1003   | Carrot      | VeggieLand   | 15       | 2023-07-02 | 0.20  |
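
One way to build such a table from the normalized schema shown earlier is to join the three tables once and materialize the result. The sketch below does this with SQLite; in a real warehouse this would more likely be an ETL job or a materialized view.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Suppliers (SupplierID INTEGER PRIMARY KEY, SupplierName TEXT, ContactInfo TEXT);
CREATE TABLE Products  (ProductID INTEGER PRIMARY KEY, ProductName TEXT,
                        SupplierID INTEGER, CategoryID INTEGER, Price REAL);
CREATE TABLE Sales     (SaleID INTEGER PRIMARY KEY, ProductID INTEGER,
                        Quantity INTEGER, SaleDate TEXT);

INSERT INTO Suppliers VALUES (101, 'FreshFruits', '123-456-7890'),
                             (102, 'BananaWorld', '987-654-3210'),
                             (103, 'VeggieLand',  '555-123-4567');
INSERT INTO Products  VALUES (1, 'Apple',  101, 201, 0.50),
                             (2, 'Banana', 102, 201, 0.30),
                             (3, 'Carrot', 103, 202, 0.20);
INSERT INTO Sales     VALUES (1001, 1, 10, '2023-07-01'),
                             (1002, 2, 20, '2023-07-01'),
                             (1003, 3, 15, '2023-07-02');

-- Materialize the joined, denormalized view of sales once,
-- so later reads need no joins at all.
CREATE TABLE SalesDenormalized AS
SELECT s.SaleID, p.ProductName, sup.SupplierName, s.Quantity, s.SaleDate, p.Price
FROM Sales s
JOIN Products  p   ON p.ProductID    = s.ProductID
JOIN Suppliers sup ON sup.SupplierID = p.SupplierID;
""")

print(conn.execute("SELECT * FROM SalesDenormalized ORDER BY SaleID").fetchall())
```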

Comparison of normalization and denormalization

Pros and cons of normalization

  • Pros: Better data integrity, less redundancy, easier maintenance.
  • Cons: Can slow down read operations due to multiple table joins.

Pros and cons of denormalization

  • Pros: Faster read operations, simpler queries.
  • Cons: More redundancy, higher risk of data inconsistencies, harder maintenance.

When to use?

Use normalization when you need to keep your data accurate and tidy, especially in systems that handle a lot of updates, like a supermarket’s inventory system. Use denormalization when you need fast access to data, like in a reporting system that needs to generate sales reports quickly.

Use cases

Normalization in action

In transaction-heavy systems, like our example supermarket’s inventory management, normalization helps keep everything accurate. For instance, if a supplier changes their contact info, you only need to update it in one place.
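
In SQL terms that is a single-statement change against the Suppliers table (a sketch using the example schema from earlier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Suppliers (SupplierID INTEGER PRIMARY KEY, SupplierName TEXT, ContactInfo TEXT)")
conn.execute("INSERT INTO Suppliers VALUES (101, 'FreshFruits', '123-456-7890')")

# The contact info lives in exactly one row, so one UPDATE is enough;
# every query that joins Products to Suppliers sees the new value.
conn.execute("UPDATE Suppliers SET ContactInfo = '111-222-3333' WHERE SupplierID = 101")
```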

Denormalization in action

Denormalization really shines in data warehouses and business intelligence tools. If the supermarket wants to analyze sales trends quickly, a denormalized database can provide faster insights.

Here’s a real-life example: I was working as a railway data lead for one of the biggest public transportation companies in Europe. One of our projects was to bring data from ticket sales from many different sources into a few tables organized in a star schema.

Ticket sales from website systems, apps, vending machines, and cashier points were denormalized into that star schema to give stakeholders much better visibility into the data, focused on the aspects that mattered most to them.
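
The company's actual model is not shown here, but a generic star-schema layout for ticket sales might look roughly like this; every table and column name below is illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension tables: descriptive attributes, one row per channel, date, or ticket type.
CREATE TABLE dim_channel (ChannelID INTEGER PRIMARY KEY, ChannelName TEXT);
CREATE TABLE dim_date    (DateID INTEGER PRIMARY KEY, FullDate TEXT, Month TEXT, Year INTEGER);
CREATE TABLE dim_ticket  (TicketTypeID INTEGER PRIMARY KEY, TicketTypeName TEXT);

-- Fact table: one row per sale, with foreign keys into the dimensions
-- plus the measures stakeholders actually report on.
CREATE TABLE fact_ticket_sales (
    SaleID       INTEGER PRIMARY KEY,
    ChannelID    INTEGER REFERENCES dim_channel(ChannelID),  -- website, app, vending machine, cashier
    DateID       INTEGER REFERENCES dim_date(DateID),
    TicketTypeID INTEGER REFERENCES dim_ticket(TicketTypeID),
    Quantity     INTEGER,
    Revenue      REAL
);
""")
```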

Tools and technologies

Modern database systems like PostgreSQL and MySQL help enforce normalization rules. On the other hand, NoSQL databases like MongoDB naturally support denormalized structures, which can be great for quick data retrieval. There are also tools like ERD software for designing normalized schemas and ETL tools to help with the denormalization process in data warehouses.

Conclusion

Balancing normalization and denormalization is crucial for optimizing your database. Normalization keeps your data clean and consistent, while denormalization can make data retrieval much faster. It’s all about finding the right balance for your specific needs. By understanding both techniques, you can design databases that are both efficient and reliable.

References and sources

For further reading, check out books like "Database System Concepts" by Silberschatz, Korth, and Sudarshan, as well as online resources like the official PostgreSQL and MySQL documentation. These sources will give you more in-depth knowledge and examples of database normalization and denormalization.

In the end, both normalization and denormalization are essential tools in a database designer's toolkit. Knowing when and how to use them can make all the difference in the performance and reliability of your database systems - choose wisely.


Arkadiusz Kordos

Software Engineer

Arkadiusz Kordos is a Data Engineer with over ten years of experience in the field. Throughout his career, he has worked in the FinTech and public transport industries, including Transport for London. He is proficient with SQL and NoSQL databases, public clouds, and data warehousing.
