When is snowflaking of dimensions required?

There are a few key factors to consider when deciding on a snowflake schema as the data model in a DW initiative:
  • Frequency of dimension attribute change
  • Number of dimension attributes (large dimensions)
  • Hierarchies within the dimension attributes
  • History tracking of dimension attributes

Consider a Type 2 SCD implementation used for history tracking. If a dimension table has 10 attributes and only one of them changes frequently across many rows, the table grows huge: every change to the volatile attribute creates a new row that duplicates the nine rarely changed attributes, causing performance issues and unnecessary space consumption. Moving the frequently changing attribute to a separate table, while maintaining referential integrity (RI) with the main dimension table, snowflakes the dimension and avoids this performance and storage overhead.
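
A minimal sketch of this split, assuming a hypothetical customer dimension whose credit_rating attribute churns frequently (all table and column names here are illustrative, not from the original post):

    -- Main dimension: keeps only the rarely changing attributes,
    -- with Type 2 effective/expiry dates for their history.
    CREATE TABLE dim_customer (
        customer_key    INT PRIMARY KEY,  -- surrogate key
        customer_id     VARCHAR(20),      -- natural key
        customer_name   VARCHAR(100),
        effective_date  DATE,
        expiry_date     DATE
    );

    -- Snowflaked outrigger: the volatile attribute gets its own
    -- table and its own Type 2 history, with RI back to the main
    -- dimension, so churn here never duplicates the wide rows above.
    CREATE TABLE dim_customer_credit_rating (
        rating_key      INT PRIMARY KEY,  -- surrogate key
        customer_key    INT REFERENCES dim_customer (customer_key),
        credit_rating   VARCHAR(10),
        effective_date  DATE,
        expiry_date     DATE
    );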

If there are hierarchies within the dimension attributes, then keying aggregate facts to the dimension table that holds the whole hierarchy is not appropriate.
A record in the sales fact contains a store key; the granularity of the fact table is product by store by day.
This means the lowest level in the geography dimension is store. The hierarchy in the dimension is store, district, region.
If all levels of the geography hierarchy are stored in one table, there is no single key for district or region, since the dimension table has a unique key only at the store level.
By normalizing the dimensional hierarchy, we can provide single dimension keys (district key, region key) to aggregate facts.
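
A sketch of the normalized hierarchy and the aggregate fact it enables (table names are illustrative, following the store/district/region example above):

    -- Each hierarchy level becomes its own table with its own
    -- surrogate key, linked upward by referential integrity.
    CREATE TABLE dim_region (
        region_key     INT PRIMARY KEY,
        region_name    VARCHAR(50)
    );

    CREATE TABLE dim_district (
        district_key   INT PRIMARY KEY,
        district_name  VARCHAR(50),
        region_key     INT REFERENCES dim_region (region_key)
    );

    CREATE TABLE dim_store (
        store_key      INT PRIMARY KEY,
        store_name     VARCHAR(50),
        district_key   INT REFERENCES dim_district (district_key)
    );

    -- Base fact keys to the lowest level: product by store by day.
    CREATE TABLE fact_sales (
        product_key    INT,
        store_key      INT REFERENCES dim_store (store_key),
        date_key       INT,
        sales_amount   DECIMAL(12, 2)
    );

    -- An aggregate fact at district grain can now key directly to
    -- dim_district through a single district_key.
    CREATE TABLE fact_sales_district_agg (
        product_key    INT,
        district_key   INT REFERENCES dim_district (district_key),
        date_key       INT,
        sales_amount   DECIMAL(12, 2)
    );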

A final note: evaluate the tradeoff between ease of use and ease of maintenance when choosing a snowflake schema, since normalizing dimensions involves more joins, which may degrade query performance.
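
For example, with the illustrative tables sketched above, rolling the base fact up to region now walks the whole chain of snowflaked joins, where a flattened star dimension would need only one:

    -- Region-level rollup over the snowflaked geography hierarchy;
    -- each hierarchy level costs an extra join.
    SELECT r.region_name,
           SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_store    s ON f.store_key    = s.store_key
    JOIN dim_district d ON s.district_key = d.district_key
    JOIN dim_region   r ON d.region_key   = r.region_key
    GROUP BY r.region_name;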

Advantages and disadvantages of the snowflake schema
The main advantage of the snowflake schema is the improvement in query performance due to minimized disk storage requirements and joins against smaller lookup tables. The main disadvantage is the additional maintenance effort needed due to the increased number of lookup tables.

