# Core Concepts

Before diving into advanced usage, it's helpful to understand the building blocks of Acme.

## Key abstractions

```mermaid
graph TB
    subgraph Pipeline
        S[Source] --> T1[Transform]
        T1 --> T2[Transform]
        T2 --> D[Destination]
    end

    subgraph Scheduling
        SC[Scheduler] --> Pipeline
    end

    subgraph Monitoring
        Pipeline --> E[Events]
        E --> A[Alerts]
    end
```
| Concept | Description | Learn more |
| --- | --- | --- |
| Pipeline | A complete data workflow: extract → transform → load | Pipelines |
| Connector | A source or destination adapter (PostgreSQL, S3, etc.) | Connectors |
| Transform | A data manipulation step (filter, map, aggregate, custom) | Transforms |
| Scheduler | Controls when and how often pipelines run | Scheduler API |
| Event | Metadata emitted during pipeline execution | Events API |
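To make the relationships between these abstractions concrete, here is a minimal sketch of how a pipeline might compose a source, transforms, and a destination while emitting events. All names (`Pipeline`, the transform callables, the event strings) are illustrative assumptions, not Acme's actual API.

```python
# Hypothetical sketch of how the abstractions compose: extract -> transform
# -> load, with event metadata emitted during execution. Acme's real API
# may differ; names here are illustrative only.
from typing import Callable, Iterable, List

Transform = Callable[[Iterable[dict]], Iterable[dict]]

class Pipeline:
    def __init__(self, source: Callable[[], Iterable[dict]],
                 transforms: List[Transform],
                 destination: Callable[[Iterable[dict]], None]) -> None:
        self.source = source
        self.transforms = transforms
        self.destination = destination
        self.events: List[str] = []  # metadata emitted during execution

    def run(self) -> None:
        self.events.append("pipeline.start")
        records: Iterable[dict] = self.source()
        for transform in self.transforms:  # each step wraps the previous one
            records = transform(records)
        self.destination(list(records))
        self.events.append("pipeline.end")

# Usage: a filter transform followed by a map transform, loading into an
# in-memory destination.
out: list = []
p = Pipeline(
    source=lambda: [{"x": 1}, {"x": 2}, {"x": 3}],
    transforms=[
        lambda rs: (r for r in rs if r["x"] > 1),     # filter
        lambda rs: ({"x": r["x"] * 10} for r in rs),  # map
    ],
    destination=out.extend,
)
p.run()
# out is now [{"x": 20}, {"x": 30}]
```

The point of the sketch is the shape, not the details: a pipeline is just a source feeding a chain of transforms into a destination, with events observable from the outside.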

## Design principles

Acme is built around a few core beliefs:

  1. Configuration over code — Most pipelines don't need custom code. YAML should be enough for 80% of use cases.
  2. Incremental by default — Pipelines track their last run and only process new data.
  3. Fail loudly — When something breaks, you should know immediately. See Error Handling.
  4. Testable — Every pipeline can be tested locally before deployment. See Testing.
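The "incremental by default" principle can be sketched with a watermark: the pipeline remembers the newest record it has seen and skips everything at or below that mark on the next run. The dict-based state store and `run_incremental` helper below are assumptions for illustration, not Acme's actual mechanism.

```python
# Illustrative sketch of incremental processing: track the last-seen key as
# a watermark and only process records newer than it. Acme's real state
# tracking may work differently; this shows the principle only.
from typing import Dict, List

def run_incremental(records: List[dict], state: Dict[str, int],
                    key: str = "id") -> List[dict]:
    """Return only the records newer than the stored watermark,
    then advance the watermark."""
    watermark = state.get("last_seen", 0)
    new = [r for r in records if r[key] > watermark]
    if new:
        state["last_seen"] = max(r[key] for r in new)
    return new

state: Dict[str, int] = {}
batch1 = [{"id": 1}, {"id": 2}]
batch2 = [{"id": 1}, {"id": 2}, {"id": 3}]  # overlaps the first batch

first = run_incremental(batch1, state)   # processes ids 1 and 2
second = run_incremental(batch2, state)  # only id 3 is new
```

Because the watermark persists between runs, re-delivered records in `batch2` are ignored and only the genuinely new record is processed.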

## Architecture deep dive

For a complete overview of how Acme processes data internally, see Architecture.
