
Business Requirement Specifications/Findings for a Data Warehouse Initiative

The scope of a data warehouse initiative must be driven by business requirements. Requirements determine what data must be available in the data warehouse, how it is organized, and how often it is updated.

High-level requirements analysis with business management should cover the following areas:
  • Understand their key strategic business initiatives.
  • Identify their key performance indicators or success metrics for each of the strategic business initiatives.
  • Determine the core business processes they monitor and want to impact.
  • Determine the potential impact on their performance metrics from improved access to better business process information.
You will also need to conduct preliminary data discovery sessions to identify any glaring data feasibility issues.
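As an illustration only (not from the source text), the short Python sketch below shows the kind of lightweight data profiling one might run during a preliminary data discovery session to surface obvious feasibility issues; the pandas library, column names, and sample values are all assumptions.

```python
# Illustrative only: a minimal data-profiling pass for a preliminary data
# discovery session, assuming a candidate source extract can be loaded into
# a pandas DataFrame. Column names and sample values are placeholders.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize completeness and cardinality to surface feasibility issues."""
    return pd.DataFrame({
        "null_pct": df.isna().mean() * 100,  # percentage of missing values per column
        "distinct": df.nunique(),            # cardinality per column
        "dtype": df.dtypes.astype(str),      # inferred data type per column
    })

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, None, 4],
        "region": ["EMEA", "EMEA", None, "APAC"],
        "order_amount": [120.5, 80.0, 99.9, None],
    })
    print(profile(sample))
```

A high null percentage or unexpectedly low cardinality on a column that a business requirement depends on is exactly the kind of data feasibility issue worth flagging before scope is committed.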

Approach to Requirements Definition 

Talk to the business users to gain an overall understanding of the organization:
  • What are the objectives of your Organization/department? What are you trying to accomplish?
  • How do you go about achieving this objective?
  • What are your success metrics? How do you know you are doing well? How often do you measure yourselves?
  • What are the key business issues you face today? What limits your success today?
  • Describe your products (or other key business dimensions such as customers, vendors, manufacturing sites, etc.). How do you distinguish between products? Is there a natural way to categorize your products? How often do these major categorizations change? Assuming you can’t physically look at a list of all your thousands of products, how would you narrow the list to find the one product you are looking for?
  • What types of routine analysis do you currently perform? What data is used?
  • How do you currently get that data? What do you do with the information once you get it?
  • What analysis would you like to perform? Are there potential improvements to your current methods/process?
  • Currently, what on-the-fly analysis do you typically perform? Who requests these ad hoc analyses? What do they do with the analyses? How long does it typically take? Do you have the time to ask the follow-on question?
  • What analytic capabilities would you like to have? Is there much reinvention of the wheel across the organization?
  • Are there specific bottlenecks to getting at information?
  • How much historical information is required?
  • What opportunities exist to dramatically improve your business based on improved access to information? What is the financial impact? If you had the capability just described, what would it mean to your business?
  • Which reports do you currently use? Which data on the report is important? How do you use the information? If the report were dynamic, what would it do differently?

Determine the Success Criteria

Identify the number-one thing the project must accomplish to be deemed successful.
Acceptable success metrics include the following:
  • Implementation metrics. These metrics include the number of gigabytes of data available to users, the number of users trained, or the number of users with installed end-user software. Each of these could be tracked as of the end of a given time period.
  • Activity and usage metrics. For a given time period, such as a day, week, month, or quarter, you might track the number of queries, number of logons, total number of logon minutes, or average number of logon minutes.
  • Service-level metrics. Some organizations establish service-level agreements with their users that are based on the following types of measures:
    - Availability based on database and application server downtime.
    - Data quality based on the number of errors (e.g., completeness or adherence to transformation business rules) per gigabyte of data.
    - Data timeliness based on the amount of time following the close of business before the data is available in the data warehouse.
    - Data warehouse responsiveness based on the average response time to a standard set of queries and applications.
    - Support responsiveness based on the average response time to service requests or average time to resolve service requests. Both these measures are extremely cumbersome to track.
  • Business impact metrics. Business impact metrics include the financial impact associated with cost savings or incremental revenue generation. This financial impact can then be used to calculate a return on investment. These are typically the most important success metrics for a data warehouse, although they are difficult to capture. Even if your numbers aren’t absolutely precise, you should strive to capture and calculate these business impact metrics (a rough computation sketch follows this list).
  • Performance against “pre–data warehouse” baseline. For example, you may hear that it took a week to perform a given analysis prior to the data warehouse; following the implementation of the data warehouse, the same analysis could be completed in less than an hour. These “before and after” examples are typically useful and impressive, although they presume that you had a pre–data warehouse baseline measurement for comparative purposes.
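The following Python sketch, which is not from the Kimball text, illustrates how a few of these success metrics (availability, errors per gigabyte, and return on investment) might be calculated once the underlying figures are being tracked. All function names and input numbers are placeholder assumptions.

```python
# Hypothetical sketch: calculating a few data warehouse success metrics.
# All inputs (downtime hours, error counts, costs) are illustrative
# placeholders, not figures from the source text.

def availability_pct(total_hours: float, downtime_hours: float) -> float:
    """Service-level metric: share of time the warehouse was available."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

def errors_per_gb(error_count: int, data_volume_gb: float) -> float:
    """Data quality metric: completeness/transformation errors per gigabyte loaded."""
    return error_count / data_volume_gb

def roi_pct(cost_savings: float, incremental_revenue: float, total_cost: float) -> float:
    """Business impact metric: simple return on investment."""
    benefit = cost_savings + incremental_revenue
    return 100.0 * (benefit - total_cost) / total_cost

if __name__ == "__main__":
    print(f"Availability: {availability_pct(720, 4):.2f}%")    # 720-hour month, 4 hours of downtime
    print(f"Errors per GB: {errors_per_gb(150, 500):.2f}")     # 150 errors across 500 GB
    print(f"ROI: {roi_pct(200_000, 300_000, 350_000):.1f}%")   # illustrative annual figures
```

Even rough figures like these give the project team a concrete way to report progress against the agreed success criteria.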
The requirements findings document should cover the following areas:
  • Executive overview
  • Project overview (including requirements definition approach and participants)
  • Business requirements:
    - High-level review of business objectives
    - Analytic and information requirements
  • Preliminary source system analysis (tied as often as possible to a business requirement)
  • Preliminary success criteria
The draft of the requirements findings should be reviewed with the business project leads and sponsors and then distributed to the business users, their management, the data warehouse team, and management.

Reference: The Data Warehouse Lifecycle Toolkit by Ralph Kimball, Chapter 4.
