POST /connectors

Creates a new connector, returning the current connector information if successful.

Description

A POST request to this endpoint creates a new connector in distributed mode. The connector name and configuration are supplied in the JSON request body.

The following table describes the parameters used to create a new connector, along with the task fields returned in the response.

Table 1. Parameters
Parameter                     Description
name (string)                 Name of the connector to create.
config (map)                  Configuration parameters for the connector; all values should be strings. See HDFS Connector and JDBC Connector for configuration options.
tasks (array)                 List of active tasks generated by the connector (returned in the response).
tasks[i].connector (string)   Name of the connector that the task belongs to (returned in the response).
tasks[i].task (int)           Task ID within the connector (returned in the response).

Syntax

POST http://<host>:8083/connectors

The connector name and configuration are supplied in the JSON request body (see the request example below), not as query string parameters.
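The endpoint can be called from any HTTP client. The following is a minimal sketch using Python's standard library; the worker address connect.example.com:8083 and the connector settings are taken from the examples in this section, so substitute your own values.

import json
import urllib.request

# Hypothetical Connect worker address, matching the examples below.
CONNECT_URL = "http://connect.example.com:8083/connectors"

# The request body carries the connector name and its config map;
# all config values are strings.
body = {
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "flush.size": "100",
    },
}

request = urllib.request.Request(
    CONNECT_URL,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    connector = json.load(response)   # HTTP 201 Created on success
    print(connector["name"])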

Request Example

POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json
{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    }
}

Response Example

The response JSON object is in the following form:
  • name (string) – Name of the created connector.
  • config (map) – Configuration parameters for the connector. All values should be strings.
  • tasks (array) – List of active tasks generated by the connector.
HTTP/1.1 201 Created
Content-Type: application/json
{
    "name": "hdfs-sink-connector",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "10",
        "topics": "test-topic",
        "hdfs.url": "hdfs://fakehost:9000",
        "hadoop.conf.dir": "/opt/hadoop/conf",
        "hadoop.home": "/opt/hadoop",
        "flush.size": "100",
        "rotate.interval.ms": "1000"
    },
    "tasks": [
        { "connector": "hdfs-sink-connector", "task": 1 },
        { "connector": "hdfs-sink-connector", "task": 2 },
        { "connector": "hdfs-sink-connector", "task": 3 }
    ]
}
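On success the worker replies 201 Created with a body in the form above; a failed request surfaces as an HTTP error status (for example, the worker can respond with 409 Conflict while a cluster rebalance is in progress). A short sketch of handling both cases, continuing from the request object built in the earlier example:

import json
import urllib.error
import urllib.request

# `request` is the urllib.request.Request object built in the sketch above.
try:
    with urllib.request.urlopen(request) as response:
        connector = json.load(response)   # HTTP 201 Created on success
except urllib.error.HTTPError as err:
    # Non-2xx statuses raise HTTPError; inspect the code and reason
    # to diagnose the failure.
    print(f"create failed: {err.code} {err.reason}")
    raise
else:
    # Echo the scheduled tasks from the `tasks` array of the response.
    for task in connector["tasks"]:
        print(f'{task["connector"]} task {task["task"]}')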