
Teradata Architecture


Symmetric multiprocessing (SMP) - A single node that contains multiple CPUs sharing a memory pool.

Massively parallel processing (MPP) - Multiple SMP nodes working together comprise a larger configuration. The nodes are connected using the BYNET, which allows multiple virtual processors on multiple system nodes to communicate with each other.

Shared-nothing architecture (MPP) - Each vproc (the Parsing Engines and the Access Module Processors, or AMPs, are virtual processors) is responsible for its own portion of the database and does not share components with other vprocs. Each AMP manages its own dedicated memory space and the data on its own vdisk; these are not shared with other AMPs. Each AMP uses system resources independently of the other AMPs, so they can all work in parallel for high overall system performance.
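
A minimal Teradata SQL sketch of the shared-nothing idea, assuming a hypothetical sales table (the table and column names are placeholders): the PRIMARY INDEX column is hashed to decide which AMP owns each row, and the built-in HASHROW/HASHBUCKET/HASHAMP functions show how the rows spread across the AMPs.

  -- Hypothetical table: rows are hashed on sale_id and distributed to the owning AMPs
  CREATE TABLE sales
  ( sale_id   INTEGER NOT NULL,
    store_id  INTEGER,
    sale_date DATE,
    amount    DECIMAL(10,2)
  )
  PRIMARY INDEX (sale_id);

  -- Count rows per AMP: an even spread is what makes shared-nothing parallelism effective
  SELECT HASHAMP(HASHBUCKET(HASHROW(sale_id))) AS amp_no,
         COUNT(*)                              AS row_count
  FROM   sales
  GROUP  BY 1
  ORDER  BY 1;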

A node is made up of various hardware and software components.

A clique is a set of Teradata nodes that share a common set of disk arrays. Cabling a subset of nodes to the same disk arrays creates a clique.

A disk array is a configuration of disk drives that utilizes specialized controllers to manage and distribute data and parity across the disks while providing fast access and data integrity.

RAID 5 - Data and parity protection striped across multiple disks

RAID 1 - Each disk has a physical mirror replicating its data


Teradata Storage Process
  • The Parsing Engine interprets the SQL command and converts the data record from the host into an AMP message
  • The BYNET distributes the row to the appropriate AMP
  • The AMP formats the row and writes it to its associated disks
  • The disk holds the row for subsequent access
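
A hedged illustration of that storage flow, reusing the hypothetical sales table from the sketch above: for a single-row INSERT, the PE parses the statement and hashes the primary index value, the BYNET routes the row to the owning AMP, and that AMP writes it to its vdisk.

  -- One-row insert: the PE hashes sale_id, the BYNET delivers the row, one AMP stores it
  INSERT INTO sales (sale_id, store_id, sale_date, amount)
  VALUES (1001, 42, DATE '2016-01-15', 199.95);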

Teradata Retrieval Process
  • The Parsing Engine dispatches a request to retrieve one or more rows
  • The BYNET ensures that appropriate AMP(s) are activated
  • The AMPs locate and retrieve the desired rows in parallel and sort, aggregate, or format them if needed
  • The BYNET returns the retrieved rows to the Parsing Engine
  • The Parsing Engine returns row(s) to requesting client application
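
Two hedged retrieval examples against the same hypothetical table: a lookup on the primary index value is a single-AMP operation, while an aggregate over a non-indexed column is an all-AMP operation in which every AMP works on its own rows in parallel and the BYNET merges the partial results.

  -- Single-AMP retrieval: the PE hashes 1001 and involves only the AMP that owns the row
  SELECT *
  FROM   sales
  WHERE  sale_id = 1001;

  -- All-AMP retrieval: each AMP scans and aggregates its own rows; the BYNET merges the answer set
  SELECT store_id,
         SUM(amount) AS total_amount
  FROM   sales
  GROUP  BY store_id;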

The BYNET is responsible for
  • Point-to-point communications between nodes and virtual processors
  • Merging answer sets back to the PE
  • Making Teradata parallelism possible

The Parsing Engine is responsible for
  • Managing individual sessions (up to 120 per PE)
  • Parsing and optimizing SQL requests
  • Dispatching the optimized plan to the AMPs
  • Sending the answer set response back to the requesting client
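
A hedged way to watch the PE's parsing and optimizing work is EXPLAIN, which returns the optimized plan as text instead of dispatching it for execution (the query reuses the hypothetical sales table above).

  -- The PE parses and optimizes the request; EXPLAIN returns the plan it would dispatch to the AMPs
  EXPLAIN
  SELECT store_id, SUM(amount)
  FROM   sales
  GROUP  BY store_id;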

The AMP is responsible for
  • Storing and retrieving rows to and from the disks
  • Lock management
  • Sorting rows and aggregating columns
  • Join processing
  • Output conversions and formatting
  • Creating answer sets for clients
  • Disk space management and accounting
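
Because each AMP accounts for its own disk space, per-AMP usage can be inspected through the DBC.DiskSpaceV system view (DBC.DiskSpace on older releases), which returns one row per AMP vproc per database and account; the database name below is a placeholder.

  -- Per-AMP (vproc) permanent space accounting for one database
  SELECT Vproc,
         SUM(MaxPerm)     AS max_perm_bytes,
         SUM(CurrentPerm) AS current_perm_bytes
  FROM   DBC.DiskSpaceV
  WHERE  DatabaseName = 'retail_db'
  GROUP  BY Vproc
  ORDER  BY Vproc;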
