
# DataX Engine

# I. Introduction to DataX

DataX is an open-source project released by Alibaba (for details, visit the DataX GitHub homepage). It is an efficient offline data synchronization tool, commonly used for data synchronization between heterogeneous data sources.

DataX uses a Framework + plugin architecture. Reading from and writing to data sources correspond to Reader and Writer plug-ins respectively; each data source has a corresponding Reader or Writer. DataX provides rich Reader and Writer support by default to adapt to a variety of mainstream data sources. The Framework connects Readers and Writers and is responsible for the core processes of the synchronization task, such as data transformation.

A DataX data synchronization task is controlled mainly through a configuration file. The most important parts are the Reader and Writer configurations, which specify, respectively, how to extract data from the source and how to write the extracted data to the destination. Synchronization between heterogeneous data sources is accomplished by using the Reader and Writer of the corresponding data sources in the configuration file.

In ta-tool, we have integrated the DataX engine and written plug-ins for the TA cluster (that is, a Reader and a Writer for the TA cluster). With these plug-ins, the TA cluster can be used as a data source or destination for DataX.

Through the DataX engine in ta-tool, you can complete the following data synchronization:

  1. To import data from other databases into the TA cluster, use an existing DataX Reader plug-in together with the TA Writer
  2. To export data from the TA cluster to other databases, use the TA Reader together with an existing DataX Writer plug-in

# II. Instructions for DataX Engine Use

If you need to use the DataX engine in ta-tool for multi-data-source synchronization tasks, first **write the Config file of the DataX task** on the TA cluster, then execute **the DataX command in the secondary development component**, which reads the Config file and performs the data synchronization task.

# 2.1 Sample Configuration File

The DataX task Config file must be a JSON file. A JSON configuration template is as follows:

```json
{
  "job": {
    "content": [
      {
        "reader": {
          "name": "streamreader",
          "parameter": {
            "sliceRecordCount": 10,
            "column": [
              {
                "type": "long",
                "value": "10"
              },
              {
                "type": "string",
                "value": "hello,hello,world-DataX"
              }
            ]
          }
        },
        "writer": {
          "name": "streamwriter",
          "parameter": {
            "encoding": "UTF-8",
            "print": true
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 5
      }
    }
  }
}
```

The entire configuration file is a JSON object. The outermost element is 'job', which contains two elements: 'content' and 'setting'. The elements within 'content' hold the Reader and Writer information; the Reader and Writer for TA clusters are described later in this article. 'channel' under 'speed' in 'setting' is the number of tasks executed simultaneously.

The main parts of the configuration file that need to be configured are the 'reader' and 'writer' elements within 'content', which configure the Reader plug-in for reading data and the Writer plug-in for writing data, respectively. For the configuration of DataX's preset Reader and Writer plug-ins, see the Support Data Channels section of the DataX documentation.
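For instance, a job that reads from MySQL with DataX's preset mysqlreader and prints the rows with streamwriter could look like the sketch below (the username, password, table, and JDBC URL are placeholders, not real values):

```json
{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "example_user",
            "password": "example_password",
            "column": ["id", "name"],
            "connection": [
              {
                "table": ["example_table"],
                "jdbcUrl": ["jdbc:mysql://127.0.0.1:3306/example_db"]
              }
            ]
          }
        },
        "writer": {
          "name": "streamwriter",
          "parameter": {
            "print": true
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1
      }
    }
  }
}
```

In a real synchronization task, streamwriter would be replaced by the Writer of the target data source, such as the TA Writer described below.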

# 2.2 Execute the DataX Command

After you have written the configuration file, you can execute the following command to read the configuration file and start the data synchronization task.

```
ta-tool datax_engine -conf <configPath> [--date <date>]
```

The `-conf` parameter is the path where the configuration file is located; the `--date` parameter is optional.
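Before running the command, it can help to sanity-check that the Config file has the structure described in section 2.1. The following is a minimal sketch in Python (not part of ta-tool; the checks mirror the template above):

```python
import json

def validate_datax_config(text):
    """Check that a DataX config JSON has the basic structure DataX expects:
    job -> content (a non-empty list of reader/writer pairs) and
    job -> setting -> speed -> channel (a positive integer).
    Returns a list of error messages; an empty list means the shape is OK."""
    config = json.loads(text)
    job = config["job"]
    errors = []
    content = job.get("content")
    if not isinstance(content, list) or not content:
        errors.append("job.content must be a non-empty list")
    else:
        for i, item in enumerate(content):
            for role in ("reader", "writer"):
                plugin = item.get(role)
                if not plugin or "name" not in plugin:
                    errors.append(f"content[{i}].{role} needs a 'name'")
    channel = job.get("setting", {}).get("speed", {}).get("channel")
    if not isinstance(channel, int) or channel < 1:
        errors.append("job.setting.speed.channel must be a positive integer")
    return errors

sample = """
{"job": {"content": [{"reader": {"name": "streamreader", "parameter": {}},
                      "writer": {"name": "streamwriter", "parameter": {}}}],
         "setting": {"speed": {"channel": 5}}}}
"""
print(validate_datax_config(sample))  # → []
```

This only checks the overall job shape; the parameters inside each plug-in are validated by DataX itself when the task starts.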

# III. Description of DataX Plug-in for TA Cluster

# 3.1 Plug-ins Used Within the Cluster

| Type | Data source | Reader (read) | Writer (write) |
| --- | --- | --- | --- |
| TA system | TA | √ | √ |
| Custom table | TA | | √ |
| Json text | TA | | √ |

# 3.2 Plug-ins Used Outside the Cluster

| Type | Data source | Reader (read) | Writer (write) |
| --- | --- | --- | --- |
| TA system | TA | | √ |

# 3.3 DataX Native Plug-ins

| Type | Data source | Reader (read) | Writer (write) |
| --- | --- | --- | --- |
| RDBMS relational database | MySQL | √ | √ |
| RDBMS relational database | Oracle | √ | √ |
| RDBMS relational database | SQLServer | √ | √ |
| RDBMS relational database | PostgreSQL | √ | √ |
| RDBMS relational database | DRDS | √ | √ |
| RDBMS relational database | General-purpose RDBMS (supports all relational databases) | √ | √ |
| Alibaba Cloud data storage | ODPS | √ | √ |
| Alibaba Cloud data storage | ADS | | √ |
| Alibaba Cloud data storage | OSS | √ | √ |
| Alibaba Cloud data storage | OCS | √ | √ |
| NoSQL data storage | OTS | √ | √ |
| NoSQL data storage | Hbase0.94 | √ | √ |
| NoSQL data storage | Hbase1.1 | √ | √ |
| NoSQL data storage | Phoenix4.x | √ | √ |
| NoSQL data storage | Phoenix5.x | √ | √ |
| NoSQL data storage | MongoDB | √ | √ |
| NoSQL data storage | Hive | √ | √ |
| NoSQL data storage | Cassandra | √ | √ |
| Unstructured data storage | TxtFile | √ | √ |
| Unstructured data storage | FTP | √ | √ |
| Unstructured data storage | HDFS | √ | √ |
| Unstructured data storage | Elasticsearch | √ | √ |
| Time-series database | OpenTSDB | √ | |
| Time-series database | TSDB | √ | √ |