# pin

Local pipeline tool built on the Docker Golang API. Run pipelines locally or as a daemon with real-time monitoring.

Pin can run as a long-running daemon service with SSE (Server-Sent Events) support for real-time pipeline monitoring and HTTP-triggered execution.
```sh
# Start daemon mode
pin apply --daemon

# Trigger pipeline from another terminal
curl -X POST -H "Content-Type: application/yaml" \
  --data-binary @pipeline.yaml \
  http://localhost:8081/trigger

# Monitor real-time events
curl -N http://localhost:8081/events
```
| Endpoint | Method | Description |
|---|---|---|
| `/events` | GET | Server-Sent Events stream for real-time updates |
| `/health` | GET | Health check and connected client count |
| `/trigger` | POST | Trigger pipeline execution with YAML config |
| `/` | GET | API information and available endpoints |
The daemon broadcasts various events during pipeline execution:
```javascript
// Connect to the event stream
const eventSource = new EventSource("http://localhost:8081/events");

eventSource.onmessage = function (event) {
  const data = JSON.parse(event.data);
  console.log(`[${data.level}] ${data.message}`);
};

// Example events received:
// {"level":"info","message":"Pipeline execution started","job":"build"}
// {"level":"info","message":"Container started","job":"build"}
// {"level":"success","message":"Job completed successfully","job":"build"}
```
```sh
# Run daemon with a specific pipeline
pin apply --daemon -f production.yaml

# Run daemon without an initial pipeline (HTTP-only mode)
pin apply --daemon

# Monitor from a remote machine
curl -N http://your-server:8081/events

# Trigger deployments via the API
curl -X POST -H "Content-Type: application/yaml" \
  --data-binary @deployment.yaml \
  http://your-server:8081/trigger
```
You can download the latest release from the GitHub releases page.
Clone the repository:

```sh
git clone https://github.com/muhammedikinci/pin
```

Download dependencies:

```sh
go mod download
```

Build the executable:

```sh
go build -o pin ./cmd/cli/.
```

Or run directly:

```sh
go run ./cmd/cli/. apply -n "test" -f ./testdata/test.yaml
```
Pin includes built-in YAML validation that catches configuration errors before pipeline execution. Among other checks, it verifies that every job specifies either an image or a dockerfile.

```sh
# Valid configuration passes validation
$ pin apply -f pipeline.yaml
Pipeline validation successful
β build Starting...

# Invalid configuration shows helpful errors
$ pin apply -f invalid.yaml
Pipeline validation failed: validation error in job 'build': either 'image' or 'dockerfile' must be specified
```
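The quoted rule can be sketched as a standalone check. Illustrative only, not pin's actual validator; the `Job` struct here holds just the two relevant fields:

```go
package main

import "fmt"

// Job holds only the fields relevant to this check; the real pin
// configuration has many more.
type Job struct {
	Image      string
	Dockerfile string
}

// validateJob enforces that either image or dockerfile is set,
// producing the same error message shown above.
func validateJob(name string, j Job) error {
	if j.Image == "" && j.Dockerfile == "" {
		return fmt.Errorf("validation error in job '%s': either 'image' or 'dockerfile' must be specified", name)
	}
	return nil
}
```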
```yaml
workflow:
  - run

logsWithTime: true

# Optional: specify a custom Docker host
docker:
  host: "tcp://localhost:2375"

run:
  image: golang:alpine3.15
  copyFiles: true
  soloExecution: true
  script:
    - go mod download
    - go run .
    - ls
  port:
    - 8082:8080
```
You can define additional jobs like the `run` job above; to include a job in the pipeline, add its name to `workflow`.
### docker

Configure Docker daemon connection settings.

default: system default (usually `unix:///var/run/docker.sock` on Linux/macOS)

Specify a custom Docker host to connect to a different Docker daemon. This is useful for remote daemons, Docker-in-Docker setups, or non-default socket locations:
```yaml
# TCP connection to a remote Docker daemon
docker:
  host: "tcp://192.168.1.100:2375"

# TCP connection with TLS (secure)
docker:
  host: "tcp://docker.example.com:2376"

# Unix socket (Linux/macOS default)
docker:
  host: "unix:///var/run/docker.sock"

# Windows named pipe
docker:
  host: "npipe://./pipe/docker_engine"

# SSH connection to a remote host
docker:
  host: "ssh://user@docker-host"
```
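A small sketch that sanity-checks a `docker.host` value against the schemes listed above (illustrative only; pin itself hands the host string to the Docker client library, whose parsing may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// validateDockerHost checks that a docker.host value uses one of the
// connection schemes shown in the examples above.
func validateDockerHost(host string) error {
	u, err := url.Parse(host)
	if err != nil {
		return err
	}
	switch u.Scheme {
	case "tcp", "unix", "npipe", "ssh":
		return nil
	}
	return fmt.Errorf("unsupported docker host scheme %q in %q", u.Scheme, host)
}
```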
```yaml
# Connect to local Docker Desktop
workflow:
  - build

docker:
  host: "tcp://localhost:2375"

build:
  image: golang:alpine
  script:
    - go build .
```

```yaml
# Connect to a remote Docker daemon
workflow:
  - deploy

docker:
  host: "tcp://production-docker:2375"

deploy:
  image: alpine:latest
  script:
    - echo "Deploying to remote Docker"
```
### copyFiles

default: false

If you want to copy all project files into the Docker container, set this option to true.
### soloExecution

default: false

When you add multiple commands to the script field, they run in the container as a single shell script. If soloExecution is set to true, each command runs in its own shell script.

soloExecution: false

```sh
# shell#1
cd cmd
ls
```

soloExecution: true

```sh
# shell#1
cd cmd

# shell#2
ls
```

If you want to see all files in the cmd folder, either set soloExecution to false or combine the commands:

```sh
# shell#1
cd cmd && ls
```
### logsWithTime

default: false

logsWithTime => true

```
β 2022/05/08 11:36:30 Image is available
β 2022/05/08 11:36:30 Start creating container
β 2022/05/08 11:36:33 Starting the container
β 2022/05/08 11:36:35 Execute command: ls -a
```

logsWithTime => false

```
β Image is available
β Start creating container
β Starting the container
β Execute command: ls -a
```
### port

default: empty mapping

You can use this feature to forward ports from the container to your machine, with flexible host and port configuration. Two string formats are supported: `"hostPort:containerPort"` and `"hostIP:hostPort:containerPort"`.

```yaml
# Standard port mapping (binds to all interfaces)
port: "8080:80"

# Multiple ports with different configurations
port:
  - "8082:8080"               # Standard format
  - "127.0.0.1:8083:8080"     # Bind only to localhost
  - "192.168.1.100:8084:8080" # Bind to a specific IP address

# Mix of standard and custom host formats
run:
  image: nginx:alpine
  port:
    - "8080:80"           # Available on all network interfaces
    - "127.0.0.1:8081:80" # Only accessible from localhost
    - "0.0.0.0:8082:80"   # Explicitly bind to all interfaces
```
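The two accepted string formats can be decomposed as sketched below (an illustration; pin's internal representation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// PortMapping is an illustrative decomposition of the two port
// string formats shown above.
type PortMapping struct {
	HostIP        string // empty means all interfaces
	HostPort      string
	ContainerPort string
}

// parsePort accepts "hostPort:containerPort" or
// "hostIP:hostPort:containerPort".
func parsePort(spec string) (PortMapping, error) {
	parts := strings.Split(spec, ":")
	switch len(parts) {
	case 2:
		return PortMapping{HostPort: parts[0], ContainerPort: parts[1]}, nil
	case 3:
		return PortMapping{HostIP: parts[0], HostPort: parts[1], ContainerPort: parts[2]}, nil
	}
	return PortMapping{}, fmt.Errorf("invalid port mapping %q", spec)
}
```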
Use a localhost-only binding (`127.0.0.1:8080:80`) to restrict access to the local machine, or a specific interface IP (`192.168.1.100:8080:80`) to expose the port on one network only.

### copyIgnore

default: empty mapping
You can use this feature to exclude specific files in your project from being copied to the container.

Sample configuration:
```yaml
run:
  image: node:current-alpine3.15
  copyFiles: true
  soloExecution: true
  port:
    - 8080:8080
  copyIgnore:
    - server.js
    - props
    - README.md
    - helper/.*/.py
```
Actual folder structure in the project:

```
index.js
server.js
README.md
helper:
  - test.py
  - mock:
    - test2.py
  - api:
    - index.js
  - props:
    - index.js
```

Folder structure in the container:

```
index.js
helper:
  - mock (empty)
  - api:
    - index.js
```
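One way to implement such ignore matching is to treat each entry as a regular expression tested against the relative path and against each path segment, which reproduces the behaviour shown above (e.g. `props` excludes the whole props folder). This is a sketch of the idea, not pin's actual matcher:

```go
package main

import (
	"regexp"
	"strings"
)

// ignored reports whether a slash-separated relative path matches any
// copyIgnore entry. Each entry is compiled as an anchored regular
// expression and tested against the full path and every segment.
func ignored(path string, patterns []string) bool {
	for _, p := range patterns {
		re, err := regexp.Compile("^" + p + "$")
		if err != nil {
			continue // skip invalid patterns in this sketch
		}
		if re.MatchString(path) {
			return true
		}
		for _, seg := range strings.Split(path, "/") {
			if re.MatchString(seg) {
				return true // e.g. "props" matches props/index.js
			}
		}
	}
	return false
}
```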
### parallel

default: false

If you want to run a job in parallel, add the parallel field to it; the job must still be listed in workflow (its position doesn't matter).

```yaml
workflow:
  - testStage
  - parallelJob
  - run

parallelJob:
  image: node:current-alpine3.15
  copyFiles: true
  soloExecution: true
  parallel: true
  script:
    - ls -a
```
### env

You can specify environment variables for your jobs in the YAML configuration. These variables will be available inside the container during job execution.
Example:
```yaml
workflow:
  - run

run:
  image: golang:alpine3.15
  copyFiles: true
  soloExecution: true
  script:
    - go mod download
    - go run .
    - echo "Environment variables:"
    - echo "MY_VAR: $MY_VAR"
    - echo "ANOTHER_VAR: $ANOTHER_VAR"
  port:
    - 8082:8080
  env:
    - MY_VAR=value
    - ANOTHER_VAR=another_value
```
In this example, the environment variables MY_VAR and ANOTHER_VAR are set and printed during job execution.
### retry

Pin supports automatic job retries with configurable parameters for handling transient failures.

default: no retry (attempts: 1)

Configure automatic retry behavior for jobs that fail due to temporary issues such as network problems, resource constraints, or external service unavailability.
```yaml
retry:
  attempts: 3  # Number of attempts (1-10, default: 1)
  delay: 5     # Initial delay in seconds (0-300, default: 1)
  backoff: 2.0 # Exponential backoff multiplier (0.1-10.0, default: 1.0)
```
```yaml
# Simple retry - 3 attempts with 2 second delays
workflow:
  - unstable-service

unstable-service:
  image: alpine:latest
  retry:
    attempts: 3
    delay: 2
  script:
    - echo "Attempting to connect to service..."
    - curl https://unstable-api.example.com/health
```
```yaml
# Advanced retry with exponential backoff
workflow:
  - network-dependent

network-dependent:
  image: alpine:latest
  retry:
    attempts: 5  # Try 5 times total
    delay: 1     # Start with 1 second delay
    backoff: 2.0 # Double delay each retry (1s, 2s, 4s, 8s)
  script:
    - wget https://external-resource.com/data.zip
```
With backoff: 1.0, delays remain constant; with backoff > 1.0, delays increase exponentially.

### condition

You can specify conditions for job execution using the condition field. Jobs will only run if the condition evaluates to true.
Example:
```yaml
workflow:
  - build
  - test
  - deploy

build:
  image: golang:alpine3.15
  copyFiles: true
  script:
    - go build -o app .

test:
  image: golang:alpine3.15
  copyFiles: true
  script:
    - go test ./...

deploy:
  image: alpine:latest
  condition: $BRANCH == "main"
  script:
    - echo "Deploying to production..."
    - ./deploy.sh
```
Supported condition syntax:

- `$VAR == "value"` - Check if variable equals value
- `$VAR != "value"` - Check if variable does not equal value
- `$VAR1 == "value1" && $VAR2 == "value2"` - Both conditions must be true
- `$VAR1 == "value1" || $VAR2 == "value2"` - At least one condition must be true
- `$VAR` - Check if variable exists and is not empty/false/0

```yaml
# Run only on main branch
deploy:
  condition: $BRANCH == "main"
```
```yaml
# Run on main or develop branch
deploy:
  condition: $BRANCH == "main" || $BRANCH == "develop"

# Run only when both conditions are met
deploy:
  condition: $BRANCH == "main" && $DEPLOY == "true"

# Run when variable exists
cleanup:
  condition: $CLEANUP_ENABLED

# Run when environment is not test
deploy:
  condition: $ENV != "test"
```
You can set environment variables before running pin:
```sh
BRANCH=main pin apply -f pipeline.yaml
```
### dockerfile

You can use a custom Dockerfile to build your own image for the job instead of pulling a pre-built image.
Example:
```yaml
workflow:
  - custom-build

custom-build:
  dockerfile: "./Dockerfile"
  copyFiles: true
  script:
    - echo "Hello from custom Docker image!"
    - ls -la
```
The built image is tagged as `<job-name>-custom:latest`. Example Dockerfile:

```dockerfile
FROM alpine:latest

RUN apk add --no-cache \
    bash \
    curl \
    git \
    make

WORKDIR /app
USER nobody

CMD ["/bin/bash"]
```
Note: When using dockerfile, you don't need to specify the image field. Pin will use the built image automatically.
To run the tests:

```sh
go test ./...
```
For comprehensive documentation, examples, and guides, see the project repository.
Contributions are welcome! Please feel free to submit a Pull Request.
Muhammed İkinci - muhammedikinci@outlook.com