# Core Concepts
Introduction to Acme's key abstractions — pipelines, connectors, transforms, schedulers, and events — and the design principles behind them.
Before diving into advanced usage, it's helpful to understand the building blocks of Acme.
## Key abstractions
```mermaid
graph TB
  subgraph Pipeline
    S[Source] --> T1[Transform]
    T1 --> T2[Transform]
    T2 --> D[Destination]
  end
  subgraph Scheduling
    SC[Scheduler] --> Pipeline
  end
  subgraph Monitoring
    Pipeline --> E[Events]
    E --> A[Alerts]
  end
```
| Concept | Description | Learn more |
|---|---|---|
| Pipeline | A complete data workflow: extract → transform → load | Pipelines |
| Connector | A source or destination adapter (PostgreSQL, S3, etc.) | Connectors |
| Transform | A data manipulation step (filter, map, aggregate, custom) | Transforms |
| Scheduler | Controls when and how often pipelines run | Scheduler API |
| Event | Metadata emitted during pipeline execution | Events API |
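These abstractions typically come together in a single pipeline definition. The sketch below is a hypothetical YAML configuration to show how the pieces relate; the field names (`source`, `transforms`, `destination`, `schedule`) and connector options are illustrative assumptions, not confirmed Acme syntax:

```yaml
# Hypothetical pipeline definition; all field names are illustrative.
name: orders_to_warehouse
source:
  connector: postgresql        # a source Connector
  table: orders
transforms:                    # ordered Transform steps
  - filter: "status = 'complete'"
  - map:
      total_usd: "amount_cents / 100"
destination:
  connector: s3                # a destination Connector
  bucket: acme-warehouse
schedule:
  cron: "0 * * * *"            # the Scheduler runs this pipeline hourly
```

Read top to bottom, the file mirrors the diagram above: data flows from the source, through each transform in order, into the destination, on the cadence the scheduler dictates.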
## Design principles
Acme is built around a few core beliefs:
- Configuration over code — Most pipelines don't need custom code. YAML should be enough for 80% of use cases.
- Incremental by default — Pipelines track their last run and only process new data.
- Fail loudly — When something breaks, you should know immediately. See Error Handling.
- Testable — Every pipeline can be tested locally before deployment. See Testing.
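Two of these principles, incremental runs and loud failures, often surface as a few lines of configuration rather than code. The fragment below is a hypothetical sketch; `incremental`, `cursor`, and `on_failure` are assumed key names for illustration, not confirmed Acme syntax:

```yaml
# Hypothetical fragment; keys are assumed for illustration.
incremental:
  cursor: updated_at      # only rows newer than the last run's cursor are processed
on_failure:
  alert: pagerduty        # fail loudly: notify immediately
  retry: 0                # rather than retrying silently
```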
## Architecture deep dive
For a complete overview of how Acme processes data internally, see Architecture.