Quickstart

Get the Control Plane running on your laptop in about ten minutes. This guide sets up a local sandbox using Kind — a lightweight Kubernetes cluster that runs inside Docker — so you can explore the platform without touching a real environment.

Evaluation only

This setup is intended for trying out the Control Plane. It is not suitable for production use. For production deployments, see Installation.

What you will get

By the end of this guide you will have:

  • A Kind cluster with all Control Plane controllers running
  • A sample environment with two zones
  • A team with a dedicated namespace, identity client, and gateway consumer
  • A sample API exposure and subscription

Prerequisites

Make sure the following tools are installed on your machine:

Tool      Purpose
Docker    Container runtime for Kind
kubectl   Kubernetes CLI
Kind      Local Kubernetes cluster
ko        Build Go container images
Helm      Install dependencies
Go        Required by ko to build images
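As a quick sanity check before starting, you can confirm that each tool is on your PATH. The snippet below is a convenience sketch; the binary names (docker, kubectl, kind, ko, helm, go) are assumed from the table above, so adjust them if your installation differs.

```shell
# Report which prerequisite binaries are available on PATH.
missing=0
for tool in docker kubectl kind ko helm go; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
    missing=1
  fi
done
```

If any tool reports MISSING, install it before continuing.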

Step 0 — Clone the repository

This quickstart assumes you already cloned the Control Plane repository and are running commands from the repository root:

git clone https://github.com/telekom/controlplane.git
cd controlplane

Step 1 — Run the setup script

Run the local setup script from the repository root:

./hack/local-setup.sh

This single command:

  1. Creates a Kind cluster named controlplane
  2. Installs cert-manager, trust-manager, and Prometheus Operator CRDs
  3. Builds all controller images and loads them into the cluster
  4. Deploys the full Control Plane

The script is idempotent — if the cluster already exists, it skips creation and moves on.

Selective rebuilds

After making code changes, you do not need to re-run the full setup. Use the --build-only flag to rebuild and redeploy:

# Rebuild all controllers
./hack/local-setup.sh --build-only

# Rebuild a single controller
./hack/local-setup.sh --build-only --only gateway

# Rebuild specific controllers
./hack/local-setup.sh --build-only --only gateway,rover

Step 2 — Apply sample resources

Once the controllers are running, apply the bundled sample resources to create an environment, a team, and an API exposure:

Update placeholder values before applying admin resources

The files under install/overlays/local/resources/admin (especially the zone definitions) contain placeholder values such as:

  • Identity provider URLs and admin credentials
  • Gateway URLs and admin client secrets
  • Redis host and password

Before running kubectl apply, copy the zone example files and then replace placeholders in your local copies:

cp install/overlays/local/resources/admin/zones/dataplane1.example.yaml install/overlays/local/resources/admin/zones/dataplane1.yaml
cp install/overlays/local/resources/admin/zones/dataplane2.example.yaml install/overlays/local/resources/admin/zones/dataplane2.yaml
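Once copied, the placeholders can be filled in with any editor, or scripted with sed. The snippet below is an illustrative sketch run against a temporary file; the token name `<REDIS_PASSWORD>` and the YAML key are assumptions, so check the .example.yaml files for the actual placeholder tokens before scripting against your copies.

```shell
# Illustrative placeholder substitution (token and key names are assumptions,
# not taken from the real zone files).
file=$(mktemp)
printf 'redisPassword: <REDIS_PASSWORD>\n' > "$file"
sed -i 's|<REDIS_PASSWORD>|my-local-password|g' "$file"
cat "$file"   # prints: redisPassword: my-local-password
```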

The copied dataplane1.yaml and dataplane2.yaml files are gitignored to help prevent accidental secret commits.

For details, see install/overlays/local/README.md.

For background on required zone infrastructure, see Installation → Zone infrastructure.

# Create the environment and zones
kubectl apply -k install/overlays/local/resources/admin

# Create a group and team
kubectl apply -k install/overlays/local/resources/org

# Create a sample API exposure and subscription
kubectl apply -k install/overlays/local/resources/rover

Step 3 — Verify

Check that all controllers are running:

kubectl get pods -n controlplane-system

All pods should show Running status. Then verify the sample resources were created:

# Environment
kubectl get environments -n controlplane

# Zones
kubectl get zones -n controlplane

# Team (and its auto-provisioned namespace)
kubectl get teams -n controlplane
kubectl get namespaces | grep controlplane--

You should see the namespace controlplane--phoenix--firebirds — this was automatically created when the team resource was applied.
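The namespace name appears to follow a `<environment>--<group>--<team>` pattern, assembled from the sample resources. A minimal sketch of that assumption, using the names from this guide:

```shell
# Assemble the team namespace name (pattern is an assumption inferred from the
# sample: <environment>--<group>--<team>).
env=controlplane
group=phoenix
team=firebirds
ns="${env}--${group}--${team}"
echo "$ns"   # prints controlplane--phoenix--firebirds
```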

Explore the platform

With the platform running, here are a few things to try:

Inspect the Rover resources

kubectl get rovers -n controlplane--phoenix--firebirds
kubectl get apispecifications -n controlplane--phoenix--firebirds

Connect to the Secret Manager

kubectl port-forward -n controlplane-system svc/secret-manager 8443:8443

Connect to the File Manager

kubectl port-forward -n controlplane-system svc/file-manager 8444:8443

Clean up

To remove the local cluster entirely:

kind delete cluster --name controlplane

Next steps