# Quickstart
Get the Control Plane running on your laptop in about ten minutes. This guide sets up a local sandbox using Kind — a lightweight Kubernetes cluster that runs inside Docker — so you can explore the platform without touching a real environment.
> **Note:** This setup is intended for trying out the Control Plane. It is not suitable for production use. For production deployments, see Installation.
## What you will get
By the end of this guide you will have:
- A Kind cluster with all Control Plane controllers running
- A sample environment with two zones
- A team with a dedicated namespace, identity client, and gateway consumer
- A sample API exposure and subscription
## Prerequisites
Make sure the following tools are installed on your machine:
| Tool | Purpose |
|---|---|
| Docker | Container runtime for Kind |
| kubectl | Kubernetes CLI |
| Kind | Local Kubernetes cluster |
| ko | Build Go container images |
| Helm | Install dependencies |
| Go | Required by ko to build images |
## Step 0 — Clone the repository

If you have not already done so, clone the Control Plane repository and change into the repository root. All subsequent commands are run from there:
```shell
git clone https://github.com/telekom/controlplane.git
cd controlplane
```
## Step 1 — Run the setup script
Run the local setup script from the repository root:

```shell
./hack/local-setup.sh
```
This single command:
- Creates a Kind cluster named `controlplane`
- Installs cert-manager, trust-manager, and Prometheus Operator CRDs
- Builds all controller images and loads them into the cluster
- Deploys the full Control Plane
The script is idempotent — if the cluster already exists, it skips creation and moves on.
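The skip-if-exists behavior can be sketched as follows. This is a minimal illustration, not the script's actual code; see `hack/local-setup.sh` for the real logic:

```shell
# Sketch of an idempotency check: skip cluster creation when a Kind
# cluster with the expected name is already registered.
# Illustrative only -- hack/local-setup.sh may do this differently.
cluster_exists() {
  # Succeeds if `kind get clusters` lists the given name.
  kind get clusters 2>/dev/null | grep -qx "$1"
}

if cluster_exists "controlplane"; then
  echo "cluster already exists, skipping creation"
else
  echo "would run: kind create cluster --name controlplane"
fi
```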
After making code changes, you do not need to re-run the full setup. Use the `--build-only` flag to rebuild and redeploy:
```shell
# Rebuild all controllers
./hack/local-setup.sh --build-only

# Rebuild a single controller
./hack/local-setup.sh --build-only --only gateway

# Rebuild specific controllers
./hack/local-setup.sh --build-only --only gateway,rover
```
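For illustration, splitting a comma-separated `--only` value into individual controller names might look like this. It is a sketch only; the script's actual argument parsing may differ:

```shell
# Turn a comma-separated --only value into one controller name per word.
# Illustrative only; see hack/local-setup.sh for the real parsing.
only="gateway,rover"
for name in $(printf '%s' "$only" | tr ',' ' '); do
  echo "rebuilding controller: $name"
done
```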
## Step 2 — Apply sample resources
Once the controllers are running, you will apply the bundled sample resources to create an environment, a team, and an API exposure. Before running the commands below, fill in the required placeholder values.
The files under `install/overlays/local/resources/admin` (especially the zone definitions) contain placeholder values such as:
- Identity provider URLs and admin credentials
- Gateway URLs and admin client secrets
- Redis host and password
Before running `kubectl apply`, copy the zone example files and then replace the placeholders in your local copies:

```shell
cp install/overlays/local/resources/admin/zones/dataplane1.example.yaml \
   install/overlays/local/resources/admin/zones/dataplane1.yaml
cp install/overlays/local/resources/admin/zones/dataplane2.example.yaml \
   install/overlays/local/resources/admin/zones/dataplane2.yaml
```
The copied `dataplane1.yaml` and `dataplane2.yaml` files are gitignored to help prevent accidental secret commits.

For details, see `install/overlays/local/README.md`.
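Replacing the placeholders can be scripted with `sed`. The token names below are hypothetical, chosen only to illustrate the pattern; open the `.example.yaml` files to see the actual placeholders you need to replace:

```shell
# Sketch: fill in placeholder tokens in a copied zone file.
# <IDENTITY_PROVIDER_URL> and <REDIS_HOST> are assumed token names for
# illustration -- check the .example.yaml files for the real ones.
fill_placeholders() {
  local file="$1"
  sed -i \
    -e 's|<IDENTITY_PROVIDER_URL>|https://idp.local.test|g' \
    -e 's|<REDIS_HOST>|redis.local.test|g' \
    "$file"
}

# Usage (after copying the example file):
# fill_placeholders install/overlays/local/resources/admin/zones/dataplane1.yaml
```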
For background on required zone infrastructure, see Installation → Zone infrastructure.
```shell
# Create the environment and zones
kubectl apply -k install/overlays/local/resources/admin

# Create a group and team
kubectl apply -k install/overlays/local/resources/org

# Create a sample API exposure and subscription
kubectl apply -k install/overlays/local/resources/rover
```
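The three apply steps appear to build on one another (environment and zones first, then the team, then the exposure), so a later apply may fail if an earlier resource has not reconciled yet; re-running it usually suffices. A small retry wrapper is one way to sketch that (a local convenience, not part of the repository):

```shell
# Apply a kustomize overlay, retrying a few times to ride out
# not-yet-reconciled dependencies. Illustrative convenience only.
apply_with_retry() {
  overlay="$1"
  attempts="${2:-3}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    kubectl apply -k "$overlay" && return 0
    i=$((i + 1))
    sleep 5
  done
  return 1
}

# Usage:
# apply_with_retry install/overlays/local/resources/org
```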
## Step 3 — Verify

Check that all controllers are running:

```shell
kubectl get pods -n controlplane-system
```
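Instead of polling `kubectl get pods` by hand, you can block until every controller pod reports Ready using the standard `kubectl wait` command. The five-minute timeout here is just a reasonable default, not something the platform prescribes:

```shell
# Block until all pods in the controller namespace are Ready,
# or give up after five minutes.
wait_for_controllers() {
  kubectl wait --for=condition=Ready pods --all \
    -n controlplane-system --timeout=300s
}

# Usage:
# wait_for_controllers
```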
All pods should show `Running` status. Then verify that the sample resources were created:
```shell
# Environment
kubectl get environments -n controlplane

# Zones
kubectl get zones -n controlplane

# Team (and its auto-provisioned namespace)
kubectl get teams -n controlplane
kubectl get namespaces | grep controlplane--
```
You should see the namespace `controlplane--phoenix--firebirds` — this was automatically created when the team resource was applied.
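The namespace name follows the pattern `controlplane--<group>--<team>`, inferred from the sample resources (`phoenix` is the group, `firebirds` the team). A trivial sketch of the derivation:

```shell
# Derive a team namespace from the group and team names.
# Pattern inferred from the samples: controlplane--<group>--<team>.
group="phoenix"
team="firebirds"
namespace="controlplane--${group}--${team}"
echo "$namespace"   # -> controlplane--phoenix--firebirds
```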
## Explore the platform
With the platform running, here are a few things to try:
### Inspect the Rover resources

```shell
kubectl get rovers -n controlplane--phoenix--firebirds
kubectl get apispecifications -n controlplane--phoenix--firebirds
```
### Connect to the Secret Manager

```shell
kubectl port-forward -n controlplane-system svc/secret-manager 8443:8443
```
### Connect to the File Manager

```shell
kubectl port-forward -n controlplane-system svc/file-manager 8444:8443
```
## Clean up

To remove the local cluster entirely:

```shell
kind delete cluster --name controlplane
```
## Next steps
- Installation — Deploy the Control Plane on a production cluster
- First Steps — Bootstrap your first environment, zones, and teams
- User Journey: Onboarding — Start using the platform as an application team