Denormalization in SQL Server – How does it work?

What is Denormalization?

Denormalization in SQL Server is the intentional introduction of redundancy into a table in order to improve performance. It is a technique for moving from higher to lower normal forms of database modeling in order to speed up database access. De-normalization is the reverse of normalization, i.e., combining two or more tables into a single table.

De-normalization increases read performance, because searching data in one table is faster than searching data across multiple tables, and it is well suited to OLAP systems. Denormalization is usually done to decrease the time required to execute complex queries. The drawbacks of a normalized database are mostly in performance: more joins are required to gather all the information, since data is divided and stored across multiple entities rather than in one large table. Queries with many complex joins require more CPU usage and adversely affect performance. Sometimes it is therefore good to denormalize parts of the database. Examples of design changes that denormalize the database and improve performance are shown below.

[Example ORDERS and PRODUCTS tables]
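The original table layouts are not reproduced here. As a rough sketch, assuming only the column names (Qty, Cost, ProductId) that appear in the query below, the normalized schema could look like this:

-- Assumed normalized schema for the example (names are illustrative, not from the original article).
CREATE TABLE products (
    ProductId INT PRIMARY KEY,
    Cost      DECIMAL(10, 2) NOT NULL   -- unit cost of the product
);

CREATE TABLE orders (
    OrderId   INT PRIMARY KEY,
    ProductId INT NOT NULL REFERENCES products(ProductId),
    Qty       INT NOT NULL              -- quantity ordered
);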

If the total cost of each order placed is calculated as the cost of the product plus a tax of 10% of the product cost, the query to calculate the total sales is as follows:

select sum((cost * qty) + 0.10 * (cost * qty)) from orders join products on orders.ProductId = products.ProductId

If there are thousands of rows, the server will take a long time to process the query and return the results, because a join and a computation are involved.

[Denormalized ORDERS table with an added OrderCost column]
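The denormalized version stores the pre-computed total in the orders table itself. As a minimal sketch (the OrderCost column name is taken from the query below; how it is added and populated is an assumption):

-- Add a redundant column holding the pre-computed order total (cost plus 10% tax).
ALTER TABLE orders ADD OrderCost DECIMAL(12, 2);

-- Populate it once from the normalized data; afterwards reads no longer need the join.
UPDATE o
SET    o.OrderCost = (p.Cost * o.Qty) + 0.10 * (p.Cost * o.Qty)
FROM   orders o
JOIN   products p ON o.ProductId = p.ProductId;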

To find the total sales, write a simple query:

select SUM(OrderCost) from orders

What is denormalization and when would you go for it?

Denormalization is the process of adding redundant data to get rid of complex joins in order to optimize database performance. It speeds up database access by moving from a higher to a lower form of normalization.

In other words, we can define de-normalization as follows:

De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is used to introduce redundancy into a table in order to incorporate data from a related table; the related table can then be eliminated. De-normalization can improve efficiency and performance by reducing complexity in a data warehouse schema.
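As a rough illustration of this idea, reusing the assumed orders/products columns from the earlier sketch, the frequently needed product cost could be copied into the orders table so that reads no longer require the join (if all of its columns were copied, the related table could be dropped entirely):

-- Copy the product cost into orders as redundant data, so queries on orders stand alone.
ALTER TABLE orders ADD ProductCost DECIMAL(10, 2);

UPDATE o
SET    o.ProductCost = p.Cost
FROM   orders o
JOIN   products p ON o.ProductId = p.ProductId;

-- A sales query can now read a single table:
select sum(ProductCost * Qty + 0.10 * ProductCost * Qty) from orders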

De-normalization is also applied as a tool in the SQL Server report model. There are three methods of de-normalization:
• Entity inheritance
• Role expansion
• Lookup entities

Entity Inheritance

This method of de-normalization should be applied when one entity is derived from another entity. It is implemented with the help of inheritance, where inheritance means a parent-child relationship between entities, established through the foreign key and the candidate key. Note also that creating the model creates a set of relationships, and if you select inheritance, the corresponding relationship is automatically deleted.

Role Expansion

This type of de-normalization should be used when it is certain that one entity has a relationship with another entity, or is part of another entity, so that storing it separately is no longer necessary. It is applied with the help of the Expand Inline option, and the shared schema is used in the form of a table.

Lookup Entities

This type of de-normalization is used when an entity depends on a lookup table. It works with the help of the IsLookup property, which is applied to the entity. Together, these three methods allow the user to create a clean and appealing report model that gives the customer a better navigation experience.

The Reason for Denormalization
Only one valid reason exists for denormalizing a relational design: to enhance performance. However, there are several indicators that help to identify systems and tables that are potential denormalization candidates. These are:
• Many critical queries and reports rely upon data from more than one table, and these requests often need to be processed in an online environment.
• Repeating groups exist that need to be processed as a group instead of individually.
• Many calculations need to be applied to one or more columns before queries can be answered.
• Tables need to be accessed in different ways by different users during the same timeframe.
• Many large primary keys exist that are clumsy to query and consume a large amount of disk space when carried as foreign key columns in related tables.
• Certain columns are queried a large percentage of the time, causing very complex or inefficient SQL to be used.

Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products will still, on occasion, require denormalized data structures. There are many different types of denormalized tables that can resolve the performance problems caused by accessing fully normalized data. The following list summarizes the different types and when to implement each one.

Types of Denormalization

• Pre-Joined Tables are used when the cost of joining is prohibitive.
• Report Tables are used when specialized critical reports are needed.
• Mirror Tables are used when tables are required concurrently by two different types of environments.
• Split Tables are used when distinct groups use different parts of a table.
• Combined Tables are used when one-to-one relationships exist.
• Redundant Data is used to reduce the number of table joins required.
• Repeating Groups are used to reduce I/O and (possibly) storage usage.
• Derivable Data is used to eliminate calculations and algorithms.
• Speed Tables are used to support hierarchies.
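As a minimal sketch of the first type (a pre-joined table), reusing the assumed orders/products columns from the earlier example, the join result can be materialized once and then queried directly; the order_report table name is an illustrative assumption:

-- Materialize the join once so reporting queries do not pay the join cost repeatedly.
SELECT o.OrderId, o.ProductId, o.Qty, p.Cost,
       (p.Cost * o.Qty) + 0.10 * (p.Cost * o.Qty) AS OrderCost
INTO   order_report   -- pre-joined table used by reports
FROM   orders o
JOIN   products p ON o.ProductId = p.ProductId;

-- Reports then read the pre-joined table directly:
select sum(OrderCost) from order_report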
