

Showing posts from 2012

Understanding the Datastage configuration file

In DataStage, the degree of parallelism and the resources used by a job are determined at run time, based entirely on the configuration provided in the APT configuration file. This is one of the biggest strengths of DataStage: if you change your processing configuration, or move to a different server or platform, you never have to worry about it affecting your jobs, since all jobs depend on this configuration file for execution. DataStage jobs decide which node to run a process on, where to store temporary data and where to store dataset data based on the entries provided in the configuration file. A default configuration file is available whenever the server is installed; you can typically find it under the <install directory>\IBM\InformationServer\Server\Configurations folder with the name default.apt. Bear in mind that you will have to optimise this configuration for your server based on your resources. Basically, the configuration file contains the node definitions that the engine uses at run time.
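For illustration, a minimal configuration file for a two-node setup might look like the following (the node names, fastname and paths are placeholders, not values from the original post):

{
  node "node1"
  {
    fastname "etl_host"
    pools ""
    resource disk "/data/datasets" {pools ""}
    resource scratchdisk "/data/scratch" {pools ""}
  }
  node "node2"
  {
    fastname "etl_host"
    pools ""
    resource disk "/data/datasets" {pools ""}
    resource scratchdisk "/data/scratch" {pools ""}
  }
}

Adding more node entries (and pointing APT_CONFIG_FILE at the new file) increases the degree of parallelism without changing the job designs themselves.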
- In parallel jobs, the Data Set stage acts as the intermediate data storage between linked stages; it is the best storage option because it stores the data in DataStage's internal format.
- In parallel jobs we can choose to display the generated OSH, which gives information about how the job works.
- A parallel Transformer has no reference link, whereas in server jobs a reference link can be given to the Transformer.
- Parallel stages can use both BASIC and parallel-oriented functions.
- Server jobs are executed by the DataStage server engine, while parallel jobs run under the control of the DataStage parallel runtime environment.
- Server jobs are compiled into BASIC (interpreted pseudo code), while parallel jobs are compiled into OSH (Orchestrate scripting language).
- Debugging and testing stages are available only in the Parallel Extender.
- More processing stages are available in parallel jobs that are not included in server jobs, for example Join, CDC, Lookup etc.
- Among the file stages, the Hashed File is available only in server jobs, while Complex Flat File, Data Set and Lookup File Set are available only in parallel jobs.
Server Trans

Difference between server jobs and parallel jobs in Datastage

Server job stages do not have built-in partitioning and parallelism mechanisms for extracting and loading data between stages. The ways to enhance speed and performance in server jobs are:
- Enable inter-process row buffering through the Administrator. This helps stages exchange data as soon as it is available on the link.
- Use the IPC stage, which also helps one passive stage read data from another as soon as data is available. In other words, stages do not have to wait for the entire set of records to be read first and then transferred to the next stage.
- Use the Link Partitioner and Link Collector stages to achieve a certain degree of partitioning parallelism.
All of the above features, which have to be explicitly exploited in server jobs, are built into DataStage parallel jobs. The PX engine runs on a multiprocessor system and takes full advantage of the processing nodes defined in the configuration file. Both SMP and MPP architectures are supported by DataStage PX. Px ta

Points to keep in mind after upgrading Informatica PowerCenter to a new version

After upgrading Informatica PowerCenter, the INFA_HOME operating system environment variable must be set to the new PowerCenter installation path. For example, when upgrading from 9.0 to 9.0.1, the path
INFA_HOME=/opt/Informatica/9.0
is changed to
INFA_HOME=/opt/Informatica/9.0.1
The other PowerCenter environment variables are relative to the INFA_HOME environment variable. Therefore, one need not apply any further changes, but it is worthwhile to check that this really is the case. An example setting of the other PowerCenter environment variables is presented below:
INFA_HOME=/opt/Informatica/9.0.1
INFA_DOMAINS_FILE=$INFA_HOME/domains.infa
PM_ROOT=$INFA_HOME/server/infa_shared
PM_HOME=$PM_ROOT
PATH=$PATH:$INFA_HOME/server/bin
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$INFA_HOME:$INFA_HOME/server/bin
JRE_HOME=$INFA_HOME/Java
For each Integration Service, make sure that the $PMRootDir variable is set properly. It should be set to the path <INFA_HOME>/server/infa_shared, where <INFA_HOME> is the PowerCenter installation directory.
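As a quick sanity check after the upgrade, a few shell commands like the following can be run (a rough sketch; /opt/Informatica/9.0.1 is just the example path used above):

#!/bin/ksh
# Print the PowerCenter-related variables; they should all point at the new installation path
echo "INFA_HOME=$INFA_HOME"
env | grep -i INFA
# These directories should exist under the new installation
ls -d "$INFA_HOME/server/bin" "$INFA_HOME/server/infa_shared"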

Find Changed Data by computing Checksum using MD5 function in Informatica

Introduction: Capturing and preserving the state of data across time is one of the core functions of a data warehouse, but change data capture (CDC) can be used in any database or data integration tool. There are many methodologies, such as timestamps, versioning, status indicators, triggers and transaction logs; this post outlines working with a checksum using the MD5 function. Overview: MD5 stands for Message-Digest algorithm 5. It calculates the checksum of the input value using the cryptographic Message-Digest algorithm 5 and returns the 128-bit result as a 32-character string of hexadecimal digits (0 - F). The advantage of using the MD5 function is that it reduces overall ETL run time and also reduces cache memory usage, because only the fields that are strictly required need to be cached. Implementation steps:
- Identify the ports from the source which are subject to change.
- Concatenate all these ports and pass them as the parameter to the MD5 function in an expression transformation (see the sketch below).
- Map the MD5 function output to a checksum output port i
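As an illustration, the expression-transformation logic might look roughly like this (the port names CUST_ID, CUST_NAME, ADDRESS, PHONE and LKP_CHECKSUM are made-up examples, not from the original post):

-- variable port: checksum over the change-relevant columns, with a delimiter between them
v_CHECKSUM = MD5(TO_CHAR(CUST_ID) || '|' || CUST_NAME || '|' || ADDRESS || '|' || TO_CHAR(PHONE))
-- output ports: the checksum itself, and a flag comparing it with the checksum already stored in the target
o_CHECKSUM     = v_CHECKSUM
o_CHANGED_FLAG = IIF(v_CHECKSUM != LKP_CHECKSUM, 'Y', 'N')

Putting a delimiter between the concatenated ports avoids two different rows accidentally producing the same concatenated string, and only the checksum column (rather than every attribute) has to be cached for the comparison.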

Concatenate contents of multiple files to a single file using shell script

#!/bin/ksh
# Author: Dhana Lakshmi                          Date: Jan 15 2012
# Shell script to copy the contents of multiple files (file1, file2, file3)
# into one single file (fileout)

# The command below empties the destination file before the source files are copied
> fileout

# Copy the files, appending their contents one by one to fileout
cat file1 file2 file3 >> fileout

echo "files copy successful"

Save the above script as Appendfile.ksh and execute it with: sh Appendfile.ksh
Note: if all the source filenames start with the same string, such as file*, we can use cat file* instead of cat file1 file2 file3.

Concatenate attributes/fields from multiple records based on (group by) a key column

Input data:
Acc_num  Transaction_Type_ID  Transaction_type
2156     1                    Cash deposit
2156     3                    Pin change
8463     2                    Cash Withdraw
8463     4                    Balance enquiry
8463     1                    Cash deposit
The output should be grouped by the key column (Acc_num), with the notes from the multiple records concatenated:
Acc_num  Transaction_type_out
2156     1 Cash deposit 3 Pin change
8463     2 Cash Withdraw 4 Balance enquiry 1 Cash deposit
Solution:
Step 1: Sort the input records by Acc_num.
Step 2: In an expression transformation, create a variable port (v_type) that concatenates Transaction_Type_ID and Transaction_type: Transaction_Type_ID || Transaction_type.
Step 3: Create another variable holding the current value of v_type (v_type_curr) and another holding the previous row's value (v_type_prev). As long as the same Acc_num is being read, build the output as Transaction_type_out (target column) = v_type_prev || v_type_curr (see the sketch below).
Transformations to be used:
SQ/Sorter for step 1
Expression for step 2
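A rough sketch of the variable-port logic in the expression transformation (the port names are illustrative, and a downstream Aggregator or Filter grouping on Acc_num would typically keep only the last, fully concatenated row per account):

-- input is assumed to be sorted by Acc_num (step 1); variable ports evaluate top to bottom
v_type        = TO_CHAR(Transaction_Type_ID) || ' ' || Transaction_type
v_concat      = IIF(Acc_num = v_prev_acc, v_concat_prev || ' ' || v_type, v_type)
-- remember this row's values for the next row
v_concat_prev = v_concat
v_prev_acc    = Acc_num
-- output port (target column)
o_Transaction_type_out = v_concat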

Methods of Loading Data Warehouses

Various methods for extracting transactional data from operational sources have been used to populate data warehouses. These techniques vary mostly in the latency of data integration, from daily batches to continuous real-time integration. The capture of data from the sources is performed either through incremental queries that filter on a timestamp or flag, or through a CDC mechanism that detects changes as they happen. Architectures are further distinguished between pull and push operation: a pull operation polls at fixed intervals for new data, while in a push operation data is loaded into the target as soon as a change appears. A daily batch mechanism is most suitable if intra-day freshness is not required for the data, such as longer-term trends or data that is only calculated once daily, for example financial close information. Batch loads might be performed in a downtime window, if the business model doesn't require 24-hour availability of the data warehou
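As a small illustration of the first approach, a timestamp-based incremental pull might be a query along these lines (the table and column names are hypothetical):

-- pull only the rows changed since the last successful extract
SELECT *
FROM   orders
WHERE  last_update_ts > :last_extract_ts;

The :last_extract_ts value is recorded after each successful load, whereas a push-style CDC feed would instead deliver each captured change to the target as it happens.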

Informatica Powercenter Partitioning

The Informatica PowerCenter Partitioning Option increases the performance of PowerCenter through parallel data processing. This option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments. Introduction: With the Partitioning Option, you can execute optimal parallel sessions by dividing data processing into subsets that are run in parallel and spread among available CPUs in a multiprocessor system. When different processors share the computational load, large data volumes can be processed faster. When sourcing from and targeting relational databases, the Partitioning Option enables PowerCenter to automatically align its partitions with database table partitions to improve performance. Unlike approaches that require manual data partitioning, data integrity is automatically guaranteed because the parallel engine of PowerCenter dynamically realigns data partitions for set-oriented trans