# TaDataWriter Plug-in
# Introduction
TaDataWriter enables DataX to transfer data to a TA cluster; the data is sent to the TA receiver.
# Functions and Limitations
TaDataWriter converts data from the DataX protocol into the internal data format of the TA cluster. TaDataWriter has the following functions:
- Supports writing to TA clusters, and only to TA clusters.
- Supports data compression; the available formats are gzip, lzo, lz4, and snappy.
- Supports multi-threaded transmission.
- Runs on TA nodes, and only on TA nodes.
# Function Description
# 3.1 Sample configuration
```json
{
  "job": {
    "setting": {
      "speed": {
        "channel": 1
      }
    },
    "content": [
      {
        "reader": {
          "name": "streamreader",
          "parameter": {
            "column": [
              {
                "value": "ABCDEFG-123-abc",
                "type": "string"
              },
              {
                "value": "F53A58ED-E5DA-4F18-B082-7E1228746E88",
                "type": "string"
              },
              {
                "value": "login",
                "type": "string"
              },
              {
                "value": "2020-01-01 01:01:01",
                "type": "date"
              },
              {
                "value": "abcdefg",
                "type": "string"
              },
              {
                "value": "2019-08-08 08:08:08",
                "type": "date"
              },
              {
                "value": 123456,
                "type": "long"
              },
              {
                "value": true,
                "type": "bool"
              }
            ],
            "sliceRecordCount": 1000
          }
        },
        "writer": {
          "name": "ta-data-writer",
          "parameter": {
            "type": "track",
            "appid": "34c703a885014208a737911748a7b51c",
            "column": [
              {
                "index": "0",
                "colTargetName": "#account_id",
                "type": "string"
              },
              {
                "index": "1",
                "colTargetName": "#distinct_id"
              },
              {
                "index": "2",
                "colTargetName": "#event_name"
              },
              {
                "index": "3",
                "colTargetName": "#time",
                "type": "date",
                "dateFormat": "yyyy-MM-dd HH:mm:ss.SSS"
              },
              {
                "index": "4",
                "colTargetName": "testString",
                "type": "string"
              },
              {
                "index": "5",
                "colTargetName": "testDate",
                "type": "date",
                "dateFormat": "yyyy-MM-dd HH:mm:ss.SSS"
              },
              {
                "index": "6",
                "colTargetName": "testLong",
                "type": "number"
              },
              {
                "index": "7",
                "colTargetName": "testBoolean",
                "type": "boolean"
              },
              {
                "colTargetName": "add_clo",
                "value": "addFlag",
                "type": "string"
              }
            ]
          }
        }
      }
    ]
  }
}
```
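A job file like the one above can be submitted with DataX's standard launcher, e.g. `python {DATAX_HOME}/bin/datax.py ta_job.json`, where `ta_job.json` is a placeholder for your job file path.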
# 3.2 Parameter description
- type
  - Description: the type of data to write: `user_set` or `track`.
  - Required: Yes
  - Default value: none
 
- appid
  - Description: the APP ID of the target project.
  - Required: Yes
  - Default value: none
 
- thread
  - Description: the number of sending threads.
  - Required: No
  - Default value: 3
 
- compress
  - Description: the compression format for transmitted data. If not set, data is sent uncompressed. Supported formats are gzip, lzo, lz4, and snappy.
  - Required: No
  - Default value: no compression
 
- connType
  - Description: how data is ingested inside the cluster: sent to the receiver over HTTP, or written directly to Kafka. A combined example that sets this parameter appears after this list.
  - Required: No
  - Default value: http
 
- column
  - Description: the list of fields to read. `type` specifies the data type; `index` specifies the corresponding `reader` column (starting from 0); `value` declares the column a constant, which is not read from the `reader` but generated automatically from `value`. The user can specify the column field information, configured as follows:
```json
[
  {
    "type": "Number",
    "colTargetName": "test_col", //generate column names corresponding to data
    "index": 0 //transfer the first column from reader to dataX to get the Number field
  },
  {
    "type": "string",
    "value": "testvalue",
    "colTargetName": "test_col" //generate the string field of testvalue from TaDataWriter as the current field
  },
  {
    "index": 0,
    "type": "date",
    "colTargetName": "testDate",
    "dateFormat": "yyyy-MM-dd HH:mm:ss.SSS"
  }
]
```
  - For user-specified column information, one of `index`/`value` must be set, while `type` is optional. When the type is `date`, `dateFormat` can also be set, and is likewise optional.
  - Required: Yes
  - Default value: all columns are read using the reader's type
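The sample in 3.1 leaves the optional parameters at their defaults. Below is a minimal sketch of a writer block that sets `thread`, `compress`, and `connType` explicitly; the `appid` value is a placeholder, and the column list is shortened to a single entry:

```json
{
  "name": "ta-data-writer",
  "parameter": {
    "type": "track",
    "appid": "your-project-appid",
    "thread": 3,
    "compress": "gzip",
    "connType": "http",
    "column": [
      {
        "index": "0",
        "colTargetName": "#account_id",
        "type": "string"
      }
    ]
  }
}
```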
 
# 3.3 Type conversion
TaDataWriter maps DataX internal types to TA data types as follows:
| DataX internal type | TaDataWriter data type | 
|---|---|
| Int | Number | 
| Long | Number | 
| Double | Number | 
| String | String | 
| Boolean | Boolean | 
| Date | Date | 
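As an illustration of these mappings, one row produced by the sample job in 3.1 would arrive in the TA cluster as an event record along the lines of the sketch below (this assumes TA's standard `track` data format; the exact internal representation may differ):

```json
{
  "#account_id": "ABCDEFG-123-abc",
  "#distinct_id": "F53A58ED-E5DA-4F18-B082-7E1228746E88",
  "#type": "track",
  "#event_name": "login",
  "#time": "2020-01-01 01:01:01.000",
  "properties": {
    "testString": "abcdefg",
    "testDate": "2019-08-08 08:08:08.000",
    "testLong": 123456,
    "testBoolean": true,
    "add_clo": "addFlag"
  }
}
```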
