
AMP


DEFINITION


AMP, an acronym for "Access Module Processor," is the type of vproc (virtual processor) used to manage the database, handle file tasks, and manipulate the disk subsystem in the multi-tasking and possibly parallel-processing environment of the Teradata Database.


OVERVIEW


In reality, each AMP is an instance of the database management software responsible for accessing and manipulating data. Each AMP is assigned a portion of the database to manage and a portion of the physical disk space on which to keep its set of database tables. Usually, an AMP obtains its portion of the disk space by being associated with a virtual disk (vdisk). It handles its disk reads and writes through its file system software, which converts AMP steps (i.e., the steps sent from the Parsing Engines, or PEs) into physical data block requests. The AMPs access and manipulate the data needed to complete request processing. There may be multiple AMPs on one node, and communication among the AMPs is handled by the BYNET.
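
To make that division of labor concrete, here is a minimal sketch (plain Python, not Teradata code; every class and method name is a hypothetical illustration) of an AMP that owns its own vdisk and translates an incoming step into block reads against only its own data:

class VDisk:
    """Stands in for the virtual disk (vdisk) an AMP is associated with."""
    def __init__(self):
        self.blocks = {}                     # block_id -> list of rows

    def read_block(self, block_id):
        return self.blocks.get(block_id, [])


class AMP:
    """One software instance of the database manager."""
    def __init__(self, amp_id):
        self.amp_id = amp_id
        self.vdisk = VDisk()                 # each AMP manages only its own disk space

    def execute_step(self, predicate):
        """File-system layer: turn a step from a PE into physical
        block requests, then evaluate it against this AMP's rows only."""
        result = []
        for block_id in self.vdisk.blocks:
            result.extend(row for row in self.vdisk.read_block(block_id)
                          if predicate(row))
        return result


# A PE would dispatch the same step to every AMP; each answers for its own share.
amps = [AMP(i) for i in range(4)]
answers = [amp.execute_step(lambda row: row > 100) for amp in amps]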

The AMP vproc was introduced with Teradata V2 to replace its dedicated physical predecessor on the DBC 1012 systems. In Teradata V1, the Access Module Processor (AMP) was the physical processing unit for all Teradata database functions. Each AMP contained its own microprocessor, disk drive, file system, database software (Database Manager), Teradata Operating System (TOS), and YNET interface. In that sense, each AMP was a node.

In Teradata V2, AMPs became software entities, and thus more flexible units that "deliver basic query parallelism to all work in the system." The number of AMPs (2 - 20 per node) is defined before the database is loaded. The system partitions database tables across all the defined AMPs via hash functions to enable subquery-level parallel processing. In practice, all database operations run in parallel across all the AMPs, with the related data rows processed simultaneously but independently.
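
A minimal sketch of that hash partitioning, assuming a generic hash (Python's MD5 here, which is not Teradata's actual row-hash algorithm) and a node configured with four AMPs:

import hashlib

NUM_AMPS = 4                        # illustrative; the article cites 2 - 20 per node

def owning_amp(primary_index_value):
    """Hash the primary-index value; the hash bucket picks the owning AMP."""
    digest = hashlib.md5(str(primary_index_value).encode()).hexdigest()
    return int(digest, 16) % NUM_AMPS

for row_key in (1001, 1002, 1003, 1004, 1005):
    print(f"row {row_key} -> AMP {owning_amp(row_key)}")

Because the hash spreads rows roughly evenly, a full-table operation can run on all AMPs at once, each AMP working only over its own share.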


FUNCTIONS

The functions of an AMP can be classified as follows:

   1. BYNET interface, or Boardless BYNET interface;
   2. Database management:
         1. Locking;
         2. Joining;
         3. Sorting;
         4. Aggregation (see the sketch after this list);
         5. Output data conversion;
         6. Disk space management;
         7. Accounting;
         8. Journaling;
   3. File-subsystem management;
   4. Disk-subsystem management.
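
As an illustration of one item above, aggregation, here is a sketch of the two-phase pattern, assuming three AMPs and a toy color column: each AMP aggregates its own rows locally, and the partial results are then merged (in a real system, over the BYNET):

from collections import Counter

amp_rows = {                         # toy rows as distributed across 3 AMPs
    0: ["red", "blue", "red"],
    1: ["blue", "blue"],
    2: ["red", "green"],
}

# Phase 1: every AMP builds a local partial aggregate, independently and in parallel.
partials = [Counter(rows) for rows in amp_rows.values()]

# Phase 2: the partial aggregates are merged into the final result.
final = sum(partials, Counter())
print(final)                         # Counter({'red': 3, 'blue': 3, 'green': 1})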


SIZE LIMITS

AMP SIZE LIMITS FOR TERADATA DATABASE

TERADATA DATABASE RELEASE     MAX CYLINDERS     MAX SIZE (BASE 2)     MAX SIZE (BASE 10)
V2R6.2.0.0 and up             700,000           1.26 TB               1.39 TB
V2R5.0.0.0 - V2R6.1.x.x       600,000           1.08 TB               1.19 TB
V2R4.0.1.0 - V2R4.1.3.x       700,000           1.26 TB               1.39 TB
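
The base-2 and base-10 figures in the table can be checked with a short calculation, assuming Teradata's cylinder size of 3,872 sectors of 512 bytes (1,936 KB); that cylinder size is an assumption not stated in the table above:

SECTOR_BYTES  = 512
CYL_SECTORS   = 3_872                          # assumed Teradata cylinder geometry
CYL_BYTES     = CYL_SECTORS * SECTOR_BYTES     # 1,982,464 bytes per cylinder

max_cylinders = 700_000                        # "V2R6.2.0.0 and up" row of the table
total_bytes   = max_cylinders * CYL_BYTES

print(round(total_bytes / 2**40, 2))           # 1.26 -> MAX SIZE (BASE 2)
print(round(total_bytes / 10**12, 2))          # 1.39 -> MAX SIZE (BASE 10)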
