
Data virtualization

Data virtualization is the process of providing a data access interface that hides the technical aspects of the stored data, such as its location, storage structure, API, access language, and storage technology.
  • Analogous to the concept of views in databases
  • Data virtualization tools typically provide data integration, data federation, and data modeling capabilities
  • Requires more memory for caching
  • Can integrate several data marts or data warehouses through a single data virtualization layer (see the sketch after this list)
  • The concept and its software are a subset of data integration and are commonly used within business intelligence, service-oriented architecture data services, cloud computing, enterprise search, and master data management
  • Composite, Denodo, and Informatica are among the largest players in the data virtualization space
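
To make the view analogy concrete, here is a minimal sketch in Python. The class and method names (SqliteSource, RestLikeSource, VirtualView, fetch, query) are illustrative assumptions, not any vendor's API: two physical sources, an in-memory SQLite table and a plain Python list standing in for a remote system, are federated behind one logical view, so the consumer never sees where or how each record is stored.

```python
# Minimal data virtualization sketch: one logical view over two physical sources.
import sqlite3


class SqliteSource:
    """Adapter that hides SQLite-specific access behind a common fetch() interface."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
        self.conn.executemany(
            "INSERT INTO customers VALUES (?, ?, ?)",
            [(1, "Acme Corp", "EMEA"), (2, "Globex", "APAC")],
        )

    def fetch(self):
        cursor = self.conn.execute("SELECT id, name, region FROM customers")
        return [{"id": r[0], "name": r[1], "region": r[2]} for r in cursor]


class RestLikeSource:
    """Adapter standing in for a remote API or file-based source."""

    def fetch(self):
        return [{"id": 3, "name": "Initech", "region": "AMER"}]


class VirtualView:
    """Single logical view federating several sources, analogous to a database view."""

    def __init__(self, *sources):
        self.sources = sources

    def query(self, predicate=lambda row: True):
        # Federate at query time; nothing is physically consolidated.
        return [row for source in self.sources for row in source.fetch() if predicate(row)]


if __name__ == "__main__":
    customers = VirtualView(SqliteSource(), RestLikeSource())
    # The consumer queries one view; source location and storage technology stay hidden.
    for row in customers.query(lambda r: r["region"] != "APAC"):
        print(row)
```

Because the view federates at query time instead of materializing results, such a layer leans on in-memory caching for performance, which is why data virtualization tends to require more memory.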


References for definition:
http://www.b-eye-network.com/view/14815

