
Difference between server jobs and parallel jobs in DataStage


Server job stages do not have a built-in partitioning and parallelism mechanism for extracting and loading data between stages.
To enhance speed and performance in server jobs, you can:
    - Enable inter-process row buffering through the DataStage Administrator. This helps stages exchange data as soon as it is available on the link.
    - Use the IPC stage, which lets one passive stage read data from another as soon as data is available. In other words, stages do not have to wait for the entire set of records to be read before passing data to the next stage (see the sketch after this list).
    - Use the Link Partitioner and Link Collector stages to achieve a certain degree of partitioning parallelism.
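As a rough analogy (not DataStage code; the row counts and names below are made up purely for illustration), the following Python sketch shows the effect that row buffering and the IPC stage aim for: the downstream stage starts consuming rows as soon as the upstream stage produces them, instead of waiting for the whole record set.

    # Illustrative analogy only -- not DataStage code.
    import threading
    import queue

    link = queue.Queue(maxsize=128)   # the buffered link between two stages
    SENTINEL = object()               # marks the end of the data

    def extract_stage():
        # Upstream passive stage: pushes rows onto the link as it reads them.
        for row_id in range(10):
            link.put({"id": row_id})
        link.put(SENTINEL)

    def load_stage():
        # Downstream stage: processes each row the moment it arrives,
        # rather than after the full record set has been read.
        while True:
            row = link.get()
            if row is SENTINEL:
                break
            print("loaded", row)

    producer = threading.Thread(target=extract_stage)
    consumer = threading.Thread(target=load_stage)
    producer.start(); consumer.start()
    producer.join(); consumer.join()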


All of the above features, which have to be explicitly set up in server jobs, are built into DataStage parallel jobs. The PX engine runs on a multiprocessor system and takes full advantage of the processing nodes defined in the configuration file. Both SMP and MPP architectures are supported by DataStage PX.
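For readers who have not seen one, a parallel configuration file (the file pointed to by the APT_CONFIG_FILE environment variable) typically looks something like the sketch below; the host name and resource paths are placeholders, not values from any real project.

    {
        node "node1"
        {
            fastname "etl_host"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
        node "node2"
        {
            fastname "etl_host"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
    }

Each node entry defines a logical processing node; adding or removing nodes in this file changes the degree of partitioning parallelism without touching the job design.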
PX takes advantage of both pipeline parallelism and partitioning parallelism.
      Pipeline parallelism means that as soon as data is available between stages (in pipes or links), it can be exchanged between them without waiting for the entire record set to be read.
      Partitioning parallelism means that the entire record set is partitioned into smaller sets and processed on different nodes (logical processors).
For example, if there are 100 records and 4 logical nodes, each node would process 25 records. This enhances loading speed to an amazing degree. Imagine situations where billions of records have to be loaded daily; this is where DataStage PX comes as a boon for the ETL process and surpasses other ETL tools in the market.
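As a rough analogy in plain Python (again, not DataStage code; the record and node counts simply mirror the example above), this is what partitioning parallelism does to the workload:

    # Illustrative analogy only -- not DataStage code.
    records = list(range(100))          # the 100 records from the example
    num_nodes = 4                       # the 4 logical nodes from the example

    # Round-robin (modulus-style) partitioning: record i goes to node i % num_nodes.
    partitions = {node: [] for node in range(num_nodes)}
    for i, rec in enumerate(records):
        partitions[i % num_nodes].append(rec)

    for node, recs in partitions.items():
        print(f"node {node}: {len(recs)} records")   # 25 records each

In a real parallel job each partition is handled by a separate player process on its own node, and DataStage offers several partitioning methods (round robin, hash, modulus, and so on) rather than the single scheme sketched here.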


A few more differences:
  • In parallel jobs we have the Data Set stage, which acts as intermediate data storage between linked jobs; it is the best storage option because it stores the data in DataStage's internal format.
  • In parallel jobs we can choose to display the generated OSH, which gives information about how the job works.
  • The parallel Transformer has no reference link, whereas in server jobs a reference link can be given to the Transformer. The parallel Transformer can use both BASIC and parallel functions.
  • Server jobs are executed by the DataStage server environment, but parallel jobs are executed under the control of the DataStage parallel runtime environment.
  • Server jobs are compiled into BASIC (interpreted pseudo code) and parallel jobs are compiled into OSH (Orchestrate shell script).
  • Debugging and Testing Stages are available only in the Parallel Extender.
  • Several processing stages are not included in server jobs, for example Join, CDC, Lookup, etc.
  • Among the file stages, the Hashed File stage is available only in server jobs, while Complex Flat File, Data Set and Lookup File Set are available only in parallel jobs.
  • The server Transformer supports BASIC transforms only, but the parallel Transformer supports both BASIC and parallel transforms.
  • The server Transformer is BASIC language compatible; the parallel Transformer is C++ language compatible.
  • Lookup on a sequential file is possible in parallel jobs.
  • In parallel jobs we can specify multiple file paths to fetch data from by using a file pattern (similar to the Folder stage in server jobs), while in server jobs we can specify only one file name per output link.
  • In server jobs we can simultaneously give an input link as well as an output link to a Sequential File stage, but in parallel jobs the output link on a target Sequential File stage is a reject link, that is, a link that collects records that fail to load into the sequential file for some reason.
  • There is also a file size restriction: sequential file size in server jobs is limited to 2 GB, while in parallel jobs there is no such limitation.
  • The parallel Sequential File stage has filter options too, where you can specify the file pattern.
