
Dimension types


Rapidly Changing Dimensions
A dimension is considered rapidly changing if one or more of its attributes changes frequently across many rows. For a rapidly changing dimension, the dimension table can grow very large from the application of numerous type 2 changes.

Slowly Changing Dimensions
Some attributes of a dimension change over time. Whether the history of changes for a particular attribute should be preserved in the data warehouse depends on the business requirement. Such an attribute is called a slowly changing attribute, and a dimension containing such an attribute is called a Slowly Changing Dimension.
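
As a minimal sketch (hypothetical customer table and column names), the following Python snippet shows one common way of preserving attribute history, a type 2 change: the current row is expired and a new row carrying the changed value is appended.

from datetime import date

# Hypothetical customer dimension kept as a list of dicts (type 2 history).
customer_dim = [
    {"customer_key": 1, "customer_id": "C100", "city": "Austin",
     "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True},
]

def apply_type2_change(dim, customer_id, new_city, change_date):
    """Expire the current row and insert a new row carrying the changed attribute."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            row["valid_to"] = change_date
            row["is_current"] = False
    dim.append({
        "customer_key": max(r["customer_key"] for r in dim) + 1,
        "customer_id": customer_id, "city": new_city,
        "valid_from": change_date, "valid_to": None, "is_current": True,
    })

apply_type2_change(customer_dim, "C100", "Dallas", date(2023, 6, 1))
# The dimension now holds both the historical and the current city for C100.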

Conforming Dimension
A dimension table may be used in multiple places if the data warehouse contains multiple fact tables or contributes data to data marts.
For example, a dimension such as customer, time, or product that is used in multiple schemas is called a conforming dimension if all copies of the dimension are the same.

Use of Conforming Dimensions in Multiple Facts: Multiple fact tables are used in data warehouses that address multiple business functions, such as sales, inventory, and finance. Each business function will typically have its own schema that contains a fact table, several conforming dimension tables, and some dimension tables unique to the specific business function.
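
A rough sketch of the idea, using hypothetical sales and inventory fact tables that both reference the same date dimension, so results from the two schemas can be combined on its attributes:

# Hypothetical shared (conforming) date dimension used by two fact tables.
date_dim = {20240101: {"date": "2024-01-01", "month": "2024-01", "year": 2024}}

sales_fact = [{"date_key": 20240101, "product_key": 7, "sales_amount": 120.0}]
inventory_fact = [{"date_key": 20240101, "product_key": 7, "quantity_on_hand": 35}]

# Because both schemas reference the identical date dimension, results can be
# combined across business functions on its attributes, e.g. by month.
sales_by_month = {}
for row in sales_fact:
    month = date_dim[row["date_key"]]["month"]
    sales_by_month[month] = sales_by_month.get(month, 0) + row["sales_amount"]

stock_by_month = {}
for row in inventory_fact:
    month = date_dim[row["date_key"]]["month"]
    stock_by_month[month] = stock_by_month.get(month, 0) + row["quantity_on_hand"]

print(sales_by_month, stock_by_month)  # both keyed by the same conformed attribute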

Degenerate dimension
A degenerate dimension is a dimension that is stored in the fact table rather than in a separate dimension table. It eliminates the need to join to a dimension table.
Example:
If you have a dimension that contains only Order Number and Order Line Number, it would have a 1:1 relationship with the fact table. Would you rather have two tables with a billion rows each, or one table with a billion rows? This makes it a degenerate dimension: Order Number and Order Line Number are stored in the fact table.
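
A small sketch of this (hypothetical column names): the order identifiers live directly on the fact rows, so grouping by them needs no dimension lookup.

# Hypothetical order-line fact rows: Order Number and Order Line Number are kept
# on the fact itself (degenerate dimension), so no join to a separate table is needed.
order_line_fact = [
    {"order_number": "SO-1001", "order_line_number": 1,
     "product_key": 7, "date_key": 20240101, "quantity": 2, "amount": 40.0},
    {"order_number": "SO-1001", "order_line_number": 2,
     "product_key": 9, "date_key": 20240101, "quantity": 1, "amount": 15.0},
]

# Grouping by the degenerate key works without any dimension join.
order_totals = {}
for row in order_line_fact:
    order_totals[row["order_number"]] = (
        order_totals.get(row["order_number"], 0) + row["amount"]
    )
print(order_totals)  # {'SO-1001': 55.0}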


Junk dimension
Junk dimensions are dimensions that contain miscellaneous data (like flags and indicators) that do not fit in the base dimension tables.
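
A minimal sketch, with made-up flag names: the junk dimension enumerates the combinations of miscellaneous flags once, and fact rows carry a single surrogate key instead of several flag columns.

from itertools import product

# Hypothetical miscellaneous flags that would otherwise clutter the fact table.
payment_types = ["cash", "credit"]
gift_wrap_flags = ["Y", "N"]
rush_order_flags = ["Y", "N"]

# The junk dimension enumerates every combination once, with a surrogate key.
junk_dim = [
    {"junk_key": i, "payment_type": p, "gift_wrap": g, "rush_order": r}
    for i, (p, g, r) in enumerate(
        product(payment_types, gift_wrap_flags, rush_order_flags), start=1
    )
]

# Fact rows then carry only the single junk_key instead of several flag columns.
lookup = {(d["payment_type"], d["gift_wrap"], d["rush_order"]): d["junk_key"]
          for d in junk_dim}
fact_row = {"date_key": 20240101, "amount": 40.0,
            "junk_key": lookup[("credit", "N", "Y")]}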


Role Playing Dimensions
A role-playing dimension is one where the same dimension key, along with its associated attributes, can be joined to more than one foreign key in the fact table. For example, a fact table may include foreign keys for both Ship Date and Delivery Date. The same date dimension attributes apply to each foreign key, so you can join the same dimension table to both. Here the date dimension plays multiple roles, mapping to both ship date and delivery date, hence the name role-playing dimension.
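
A rough sketch of the two roles (hypothetical keys and columns): the one physical date dimension is joined through each foreign key separately.

# Hypothetical date dimension reused in two roles on the same fact row.
date_dim = {
    20240103: {"full_date": "2024-01-03", "day_name": "Wednesday"},
    20240107: {"full_date": "2024-01-07", "day_name": "Sunday"},
}

shipment_fact = [{"order_number": "SO-1001",
                  "ship_date_key": 20240103,
                  "delivery_date_key": 20240107}]

# Joining the single date dimension through each foreign key gives the
# "Ship Date" and "Delivery Date" roles without storing the table twice.
for row in shipment_fact:
    ship = date_dim[row["ship_date_key"]]
    delivery = date_dim[row["delivery_date_key"]]
    print(row["order_number"], ship["full_date"], "->", delivery["full_date"])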

Shrunken Dimensions
A shrunken dimension is a subset of another dimension. For example, the Orders fact table may include a foreign key for Product, but the Target fact table may include a foreign key only for ProductCategory, which is in the Product table, but much less granular. Creating a smaller dimension table, with ProductCategory as its primary key, is one way of dealing with this situation of heterogeneous grain. If the Product dimension is snowflaked, there is probably already a separate table for ProductCategory, which can serve as the Shrunken Dimension.
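
One way to derive such a shrunken dimension, sketched with hypothetical product and category columns: deduplicate the coarser-grained attributes of the detailed dimension so a fact table at category grain can join to them directly.

# Hypothetical product dimension at the detailed grain.
product_dim = [
    {"product_key": 1, "product_name": "Widget A", "category_key": 10, "category_name": "Widgets"},
    {"product_key": 2, "product_name": "Widget B", "category_key": 10, "category_name": "Widgets"},
    {"product_key": 3, "product_name": "Gadget A", "category_key": 20, "category_name": "Gadgets"},
]

# The shrunken dimension keeps only the coarser-grained attributes, one row per
# category, so a fact table stated at category grain (e.g. targets) joins to it.
category_dim = {}
for row in product_dim:
    category_dim[row["category_key"]] = {
        "category_key": row["category_key"],
        "category_name": row["category_name"],
    }

target_fact = [{"category_key": 10, "target_amount": 5000.0}]
print(category_dim[target_fact[0]["category_key"]]["category_name"])  # Widgets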

Static Dimensions
Static dimensions are not extracted from the original data source, but are created within the context of the data warehouse. A static dimension can be loaded manually — for example with Status codes — or it can be generated by a procedure, such as a Date or Time dimension.
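
A small sketch of the generated case, a date dimension built by a procedure rather than extracted from any source system (column names are illustrative):

from datetime import date, timedelta

def build_date_dimension(start, end):
    """Generate a static date dimension, one row per day; nothing comes from a source system."""
    rows = []
    current = start
    while current <= end:
        rows.append({
            "date_key": int(current.strftime("%Y%m%d")),
            "full_date": current.isoformat(),
            "day_name": current.strftime("%A"),
            "month": current.month,
            "year": current.year,
        })
        current += timedelta(days=1)
    return rows

date_dim = build_date_dimension(date(2024, 1, 1), date(2024, 12, 31))
print(len(date_dim))  # 366 rows for a leap year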

Inferred Dimensions
An inferred dimension is a dummy dimension record inserted into the dimension table to handle transaction processing when no matching dimension data is found for the transaction data. This should technically be called an inferred member, but it is often called an inferred dimension.
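
A minimal sketch of that load step, with hypothetical customer data: when an incoming transaction has no matching dimension row, a placeholder member is inserted and flagged so a later dimension load can complete it.

# Hypothetical customer dimension and incoming transaction batch.
customer_dim = {"C100": {"customer_key": 1, "customer_id": "C100", "name": "Acme"}}
next_key = 2

def get_or_infer_customer_key(customer_id):
    """Return the dimension key, inserting an inferred (placeholder) member if missing."""
    global next_key
    if customer_id not in customer_dim:
        customer_dim[customer_id] = {
            "customer_key": next_key,
            "customer_id": customer_id,
            "name": "Unknown",          # placeholder attributes
            "is_inferred": True,        # flagged so a later dimension load can update it
        }
        next_key += 1
    return customer_dim[customer_id]["customer_key"]

fact_rows = [{"customer_key": get_or_infer_customer_key(cid), "amount": amt}
             for cid, amt in [("C100", 25.0), ("C205", 40.0)]]  # C205 triggers an inferred member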
