
Connector Configuration

Configure sink and source connector instances with schemas, transforms, and plugin settings

After activating a connector, configure each instance through the Configuration system. Each instance has its own versioned configuration: you can create versions, activate a specific one, and roll back.

Sink Configuration

| Setting | Description |
| --- | --- |
| `enabled` | Whether the connector instance is active |
| `streams` | Which Iggy streams and topics to consume from |
| `schema` | Message format: JSON (default), Raw, Text, Protocol Buffers, or FlatBuffers |
| `batch_length` | Number of messages to batch before sending |
| `poll_interval` | How often to poll for new messages |
| `consumer_group` | Optional consumer group for coordinated consumption |
| `plugin_config` | Plugin-specific settings (connection strings, credentials, table names, etc.) |
| `transforms` | Optional data transformations applied before sending |

Source Configuration

| Setting | Description |
| --- | --- |
| `enabled` | Whether the connector instance is active |
| `streams` | Which Iggy stream and topic to produce into |
| `schema` | Message format: JSON (default), Raw, Text, Protocol Buffers, or FlatBuffers |
| `batch_length` | Number of messages to batch before producing |
| `linger_time` | Maximum time to wait before flushing a batch |
| `plugin_config` | Plugin-specific settings (source connection, polling interval, etc.) |
| `transforms` | Optional data transformations applied before producing |

Data Transforms

Apply field-level transformations to messages as they flow through connectors:

| Transform | Description |
| --- | --- |
| `AddFields` | Add new fields to messages |
| `DeleteFields` | Remove fields from messages |
| `FilterFields` | Keep only specified fields |
| `UpdateFields` | Modify existing field values |
| `ProtoConvert` | Convert to/from Protocol Buffers |
| `FlatBufferConvert` | Convert to/from FlatBuffers |

Custom transforms can be built by implementing the Transform trait in Rust.
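As a rough sketch of what a custom transform could look like, the snippet below defines a simplified stand-in trait (single message, string fields) and a transform that drops messages missing a required field. The trait shown here is illustrative only; the connector runtime's actual `Transform` trait signature may differ.

```rust
use std::collections::HashMap;

// Simplified stand-in for the runtime's Transform trait (illustrative,
// not the real signature): returning None drops the message.
trait Transform {
    fn transform(&self, message: HashMap<String, String>) -> Option<HashMap<String, String>>;
}

// Example custom transform: pass a message through only if it contains
// a required field, otherwise filter it out.
struct RequireField {
    field: String,
}

impl Transform for RequireField {
    fn transform(&self, message: HashMap<String, String>) -> Option<HashMap<String, String>> {
        if message.contains_key(&self.field) {
            Some(message)
        } else {
            None
        }
    }
}

fn main() {
    let t = RequireField { field: "order_id".to_string() };

    let msg = HashMap::from([("order_id".to_string(), "42".to_string())]);
    assert!(t.transform(msg).is_some());

    // A message without the required field is dropped.
    assert!(t.transform(HashMap::new()).is_none());
}
```

A real transform would typically be registered with the connector runtime and applied to each batch; this sketch only demonstrates the filtering idea.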

Examples

PostgreSQL Sink

{
  "name": "orders-to-postgres",
  "values": {
    "enabled": true,
    "streams": [
      {
        "stream": "orders",
        "topics": ["completed", "refunded"],
        "schema": "json",
        "batch_length": 100,
        "poll_interval": "1s",
        "consumer_group": "pg-sink-orders"
      }
    ],
    "plugin_config": {
      "connection_string": "postgres://user:pass@host:5432/orders",
      "table": "order_events"
    }
  },
  "activate": true
}

Random Source (Development)

{
  "name": "test-data-generator",
  "values": {
    "enabled": true,
    "streams": [
      {
        "stream": "test_stream",
        "topic": "test_topic",
        "schema": "json",
        "batch_length": 1000,
        "linger_time": "5ms"
      }
    ],
    "plugin_config": {
      "interval": "3000ms",
      "max_count": 1000000,
      "message_range": [1, 5],
      "payload_size": 200
    }
  },
  "activate": true
}

Plugin Schema Validation

Each connector plugin defines its own configuration schema with typed fields, defaults, and secret markers. When you create or update a connector config:

  • Only known plugin_config keys are accepted — unknown keys are rejected
  • Field types are validated (string, int, boolean, duration, array, etc.)
  • Fields marked as secret (e.g. connection strings, passwords) are masked with *** in API responses

Use the Get Config Schema endpoint to discover all available plugin fields before creating a config.
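The validation and masking rules above can be illustrated with a self-contained sketch. All types and function names below are invented for illustration and do not mirror the supervisor's internals; the behavior (reject unknown keys, check field kinds, mask secrets) follows the description above.

```rust
use std::collections::HashMap;

// Hypothetical, simplified model of one plugin schema field.
#[derive(Clone)]
struct FieldSpec {
    kind: &'static str, // "string", "int", "boolean", ...
    secret: bool,
}

#[derive(Clone, Debug, PartialEq)]
enum Value {
    Str(String),
    Int(i64),
    Bool(bool),
}

fn kind_of(v: &Value) -> &'static str {
    match v {
        Value::Str(_) => "string",
        Value::Int(_) => "int",
        Value::Bool(_) => "boolean",
    }
}

// Reject unknown plugin_config keys and type mismatches.
fn validate(schema: &HashMap<&str, FieldSpec>, config: &HashMap<&str, Value>) -> Result<(), String> {
    for (key, value) in config {
        let spec = schema
            .get(key)
            .ok_or_else(|| format!("unknown plugin_config key: {key}"))?;
        if spec.kind != kind_of(value) {
            return Err(format!("field {key}: expected {}, got {}", spec.kind, kind_of(value)));
        }
    }
    Ok(())
}

// Mask secret fields with *** before returning a config in an API response.
fn mask(schema: &HashMap<&str, FieldSpec>, config: &mut HashMap<&str, Value>) {
    for (key, value) in config.iter_mut() {
        if schema.get(key).map_or(false, |s| s.secret) {
            *value = Value::Str("***".into());
        }
    }
}

fn main() {
    let schema = HashMap::from([
        ("connection_string", FieldSpec { kind: "string", secret: true }),
        ("table", FieldSpec { kind: "string", secret: false }),
    ]);
    let mut config = HashMap::from([
        ("connection_string", Value::Str("postgres://user:pass@host:5432/orders".into())),
        ("table", Value::Str("order_events".into())),
    ]);
    assert!(validate(&schema, &config).is_ok());

    // Unknown keys are rejected.
    let mut bad = config.clone();
    bad.insert("nope", Value::Bool(true));
    assert!(validate(&schema, &bad).is_err());

    // Secrets are masked in responses.
    mask(&schema, &mut config);
    assert_eq!(config["connection_string"], Value::Str("***".into()));
    assert_eq!(config["table"], Value::Str("order_events".into()));
}
```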

Configuration Flow

  1. Get the schema to discover available fields, defaults, and validation rules
  2. Create a config with your values — a new version is created automatically
  3. Activate the config to apply it to all nodes — or pass "activate": true during creation

Each connector instance has its own independent versioned config. You can create multiple versions, activate any one, and roll back.

Secret Masking

Fields marked as secrets (passwords, connection strings, credentials) are masked with *** in API responses. When updating a config, masked values are preserved — you only need to send fields you want to change.
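For instance, assuming an update accepts the same body shape as Create, a follow-up version of the PostgreSQL sink above could send only the changed table name and omit the masked connection string, which is carried over from the previous version (the new table value here is purely illustrative):

```json
{
  "name": "orders-to-postgres",
  "values": {
    "plugin_config": {
      "table": "order_events_v2"
    }
  },
  "activate": true
}
```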


API Reference

The config kind for connectors follows the format connector:{type}:{key} — for example connector:sink:postgres or connector:source:random.

Get Config Schema

Returns the available fields, types, defaults, and validation rules for a connector. The response uses the same enriched section format as Iggy config schemas.

curl {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/schema \
  -H "ld-api-key: YOUR_API_KEY"
{
  "sink": {
    "name": "Sink",
    "description": "Connector sink pipeline settings.",
    "schema": [...]
  },
  "plugin_config": {
    "name": "Plugin Config",
    "description": "Connector plugin-specific settings.",
    "schema": [
      {
        "key": "connection_string",
        "name": "Connection String",
        "description": "PostgreSQL connection string",
        "default_value": "",
        "kind": "string",
        "editable": true,
        "secret": true,
        "requirements": [],
        "rules": []
      }
    ]
  }
}

Sections include sink or source (base connector fields like enabled, streams, batch settings) and plugin_config (plugin-specific fields like connection strings, table names). Fields marked "secret": true are masked with *** in config responses.

Create a Config

curl -X POST {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres \
  -H "ld-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "orders-to-postgres",
    "values": {
      "enabled": true,
      "streams": [
        {
          "stream": "orders",
          "topics": ["completed", "refunded"],
          "schema": "json",
          "batch_length": 100,
          "poll_interval": "1s",
          "consumer_group": "pg-sink-orders"
        }
      ],
      "plugin_config": {
        "connection_string": "postgres://user:pass@host:5432/orders",
        "table": "order_events"
      },
      "transforms": {}
    },
    "activate": true
  }'
  • name — optional, defaults to the connector kind
  • values — config fields validated against the schema
  • activate — set to true to immediately promote this version as primary

Returns 201 Created with ld-config header containing the new config ID.

If a config with the same name already exists, a new version is created automatically.

Get Active Config

curl {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/primary \
  -H "ld-api-key: YOUR_API_KEY"
{
  "id": 1,
  "kind": "connector:sink:postgres",
  "name": "orders-to-postgres",
  "primary": true,
  "initialized": true,
  "version": 2,
  "created_at": "2026-03-16T12:00:00Z",
  "updated_at": "2026-03-16T12:05:00Z",
  "values": {
    "enabled": true,
    "streams": [...],
    "plugin_config": {
      "connection_string": "***",
      "table": "order_events"
    },
    "transforms": {}
  }
}

List Config Versions

curl {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/orders-to-postgres/versions \
  -H "ld-api-key: YOUR_API_KEY"

Get a Specific Version

curl {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/orders-to-postgres/versions/2 \
  -H "ld-api-key: YOUR_API_KEY"

Activate a Specific Version

Promotes a config version to primary and triggers reconfiguration on all nodes.

curl -X PUT {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/orders-to-postgres/activate/2 \
  -H "ld-api-key: YOUR_API_KEY"

Returns 204 No Content.

Delete a Config

curl -X DELETE {supervisor_url}/deployments/{deployment_id}/configs/connector:sink:postgres/{config_id} \
  -H "ld-api-key: YOUR_API_KEY"

Required permissions: deployment:config:manage (create, activate, delete) or deployment:config:read (view, schema)
