
Showing posts from January, 2012

Concatenate contents of multiple files to a single file using shell script

#!/bin/ksh
# Author: Dhana Lakshmi                          Date: Jan 15 2012
# Shell script to copy the contents of multiple files (file1, file2, file3)
# into one single file (fileout)

# Empty the destination file before copying the source files
> fileout

# Copy the files, appending their contents one by one to fileout
cat file1 file2 file3 >> fileout

echo "files copy successful"

Save the above script as Appendfile.ksh and execute it with: sh Appendfile.ksh

Note: when all the source filenames start with a common prefix such as file, we can use cat file* instead of cat file1 file2 file3.

Concatenate attributes/fields from multiple records, grouped by a key column

Input data:

Acc_num   Transaction_Type_ID   Transaction_type
2156      1                     Cash deposit
2156      3                     Pin change
8463      2                     Cash Withdraw
8463      4                     Balance enquiry
8463      1                     Cash deposit

The output should be grouped by Acc_num, with the transaction details from the multiple records concatenated into one field:

Acc_num   Transaction_type_out
2156      1 Cash deposit 3 Pin change
8463      2 Cash Withdraw 4 Balance enquiry 1 Cash deposit

Solution:

Step 1: Sort the input records by Acc_num.
Step 2: Create a variable (v_type) in an Expression transformation to concatenate Transaction_type_id and Transaction_type: Transaction_type_id || Transaction_type.
Step 3: Create a variable holding the current value of v_type (v_type_curr) and another holding the previous value (v_type_prev). Loop through the records and, as long as the Acc_num stays the same, output Transaction_type_out (the target column) = v_type_prev || v_type_curr, so the running string grows in record order. A shell equivalent of this logic is sketched below.

Transformations to be used:
SQ/Sorter for step 1
Expression for ...
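Outside Informatica, the same sort-then-accumulate pass can be sketched in the shell. This is a minimal sketch, assuming a header-less, whitespace-delimited input file trans.txt (a hypothetical name) laid out like the input data above:

#!/bin/ksh
# Group-by concatenation in one pass; sort guarantees records for the
# same Acc_num arrive together, mirroring step 1 above.
# trans.txt columns: Acc_num Transaction_Type_ID Transaction_type
sort -n trans.txt | awk '
{
    key = $1
    $1 = ""                      # drop the key; $0 is now the detail text
    if (key == prev)
        out = out $0             # same account: append to the running string
    else {
        if (prev != "") print prev out
        prev = key               # new account: flush and restart
        out = $0
    }
}
END { if (prev != "") print prev out }
'

The awk variables prev and out play the same role as v_type_prev and the accumulating output port in the Expression transformation.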

Methods of Loading Data Warehouses

Various methods for extracting transactional data from operational sources have been used to populate data warehouses. These techniques vary mostly in the latency of data integration, from daily batches to continuous real-time integration. The capture of data from sources is performed either through incremental queries that filter on a timestamp or flag, or through a change data capture (CDC) mechanism that detects changes as they happen. Architectures are further distinguished by pull versus push operation: a pull operation polls at fixed intervals for new data, while in a push operation data is loaded into the target as soon as a change appears. A daily batch mechanism is most suitable if intra-day freshness is not required for the data, such as longer-term trends or data that is only calculated once daily, for example financial close information. Batch loads might be performed in a downtime window if the business model doesn't require 24-hour availability of the data warehouse ...
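As an illustration of the pull style, here is a minimal ksh sketch. The paths are hypothetical and a real implementation would run an incremental query against the source system rather than scan a directory; the point is the fixed-interval poll against a watermark left by the previous run:

#!/bin/ksh
SRC_DIR=/data/source          # assumed landing area for source extracts
WATERMARK=/data/.last_pull    # marker file recording the last successful pull

# initialise the watermark on the very first run
[ -f "$WATERMARK" ] || touch -t 197001010000 "$WATERMARK"

while true
do
    # pull: pick up only data that changed since the previous run
    find "$SRC_DIR" -type f -newer "$WATERMARK" |
    while read f
    do
        cat "$f" >> /data/staging/extract.dat   # append new data to staging
    done
    touch "$WATERMARK"        # advance the watermark
    sleep 300                 # fixed polling interval: five minutes
done

A push architecture inverts this shape: the source (or a CDC agent reading its logs) delivers each change to the target as it occurs, so no polling loop is needed.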

Informatica PowerCenter Partitioning

The Informatica PowerCenter Partitioning Option increases the performance of PowerCenter through parallel data processing. This option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.

Introduction: With the Partitioning Option, you can execute optimal parallel sessions by dividing data processing into subsets that are run in parallel and spread among the available CPUs in a multiprocessor system. When different processors share the computational load, large data volumes can be processed faster. When sourcing and targeting relational databases, the Partitioning Option enables PowerCenter to automatically align its partitions with database table partitions to improve performance. Unlike approaches that require manual data partitioning, data integrity is automatically guaranteed because the parallel engine of PowerCenter dynamically realigns data partitions for set-oriented transformations ...
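The underlying pattern, independent of PowerCenter's engine, is to split the data into partitions, run the same transformation on every partition concurrently, and merge the results. A minimal shell sketch of that divide-process-merge shape (input.dat and the tr transform are placeholders, not PowerCenter artifacts):

#!/bin/ksh
N=4                                   # number of partitions ~ available CPUs
total=$(wc -l < input.dat)
(( chunk = (total + N - 1) / N ))     # lines per partition, rounded up

split -l "$chunk" input.dat part_     # cut the input into N pieces

for p in part_*
do
    # run the same (placeholder) transformation on every partition in parallel
    tr '[:lower:]' '[:upper:]' < "$p" > "$p.out" &
done
wait                                  # block until every partition job finishes

cat part_*.out > output.dat           # merge partition outputs in order

PowerCenter performs this with threads inside a single session and, as noted above, dynamically realigns rows between partition points; the sketch only shows the overall shape of partition-parallel processing.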