Quickstart Guide
Set up a complete Control Plane environment in minutes
This guide will help you quickly set up a development environment and deploy the Control Plane on a local Kubernetes cluster.
What you'll accomplish
By the end of this guide, you'll have:
- A running local Kubernetes cluster
- The Control Plane components installed and running
- Sample resources deployed to verify functionality
Prerequisites
Before starting, ensure you have the following tools installed:
🔄 kind
Kubernetes in Docker, used to run a lightweight local cluster.
🎮 kubectl
Kubernetes command-line tool for cluster management.
🔑 GitHub CLI
Used for authentication with GitHub repositories.
All tools can be installed by following their official documentation, or via a package manager as sketched below.
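On macOS (or Linux with Homebrew), for example, all three prerequisites can be installed in one go. This is a sketch, not the only route, and assumes the current Homebrew formula names:
# Install kind, kubectl, and the GitHub CLI via Homebrew
brew install kind kubectl gh
# Log in once so `gh auth token` works in the setup commands below
gh auth login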
Setup Process
1️⃣ Step 1: Prepare Local Kind Cluster
2️⃣ Step 2: Install Control Plane
3️⃣ Step 3: Install Sample Resources
Step 1: Prepare Local Kind Cluster
First, we'll create a kind cluster and prepare it for the Control Plane installation.
# [optional] Clone the controlplane repository
git clone --branch main https://github.com/telekom/controlplane.git
cd controlplane
# Create a local kind cluster
kind create cluster
# Create the namespace for the Control Plane components
kubectl create namespace controlplane-system
# Export a GitHub token for authentication (requires a prior `gh auth login`)
export GITHUB_TOKEN=$(gh auth token)
# Install dependencies: cert-manager, trust-manager, and the monitoring CRDs
bash ./install.sh --with-cert-manager --with-trust-manager --with-monitoring-crds
What happens in this step
We create a Kubernetes cluster with kind, set up the required namespace, configure GitHub authentication, and install the dependencies: cert-manager and trust-manager for certificate handling, plus the monitoring CRDs.
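Before moving on, it's worth a quick sanity check that everything from this step is in place. The commands below are a sketch; the cert-manager namespace name assumes its default installation layout:
# Confirm the kind cluster is reachable (kind's default context is kind-kind)
kubectl cluster-info --context kind-kind
# Confirm the Control Plane namespace exists
kubectl get namespace controlplane-system
# Confirm cert-manager is running (assumes the default cert-manager namespace)
kubectl get pods -n cert-manager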
Step 2: Install Control Plane
Now that we have a Kubernetes cluster with the necessary dependencies, we can install the Control Plane components. The Control Plane consists of multiple controllers and custom resources that work together to manage workloads across Kubernetes clusters.
# Navigate to the installation directory
cd install/local
# Apply the Kustomization to install controlplane components
kubectl apply -k .
# Verify the installation by checking the controller-manager pods
kubectl get pods -A -l control-plane=controller-manager
You should see the Control Plane controller pods running. The controllers are responsible for reconciling the custom resources and managing the platform.
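If the pods are still starting, you can block until the controller deployment reports itself available. The deployment name below is taken from the troubleshooting section of this guide; adjust it if your installation differs:
# Wait up to five minutes for the controller-manager to become available
kubectl wait --for=condition=Available -n controlplane deploy/controlplane-controller-manager --timeout=300s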
Developer mode
If you want to develop and test your own controllers against the Control Plane, please refer to the Installation Guide and the section "Development: Testing Local Controllers" for detailed instructions.
Step 3: Install Sample Resources
To verify that the Control Plane is working correctly and to understand its functionality, we'll deploy some sample resources. The Control Plane uses a hierarchical resource model with admin, organization, and rover resources; the three layers are summarized below, followed by a quick way to list them.
Admin Resources
Define infrastructure foundation, including zones, clusters, and platform configurations.
Organization Resources
Represent teams and projects that use the platform with ownership and permissions.
Rover Resources
Represent the actual workloads that run on the platform, scheduled based on requirements.
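To see which of these resource kinds your cluster now serves, you can query the API server directly. This is a generic kubectl check; the grep pattern simply matches the kind names used in this guide:
# List the Control Plane resource kinds registered with the API server
kubectl api-resources | grep -iE 'zone|team|rover'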
Admin Custom Resources
Admin resources define the infrastructure foundation, including zones, clusters, and platform-level configurations. These resources are typically managed by platform administrators.
kubectl apply -k resources/admin
# Verify that the zone becomes Ready
kubectl wait --for=condition=Ready -n controlplane zones/dataplane1
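If the wait times out, the zone's status conditions and events usually show why. This uses the same names as the commands above:
# Inspect the zone's status conditions and recent events
kubectl describe zones/dataplane1 -n controlplane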
Organization Custom Resources
Organization resources represent teams and projects that use the platform. These resources define ownership, permissions, and organizational boundaries.
kubectl apply -k resources/org
# Verify that the team becomes Ready
kubectl wait --for=condition=Ready -n controlplane teams/phoenix--firebirds
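Once the team is Ready, a dedicated namespace is provisioned for its workloads; the name below matches the one listed in the verification section:
# The team's workload namespace should now exist
kubectl get namespace controlplane--phoenix--firebirds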
Rover Custom Resources
Rover resources represent the actual workloads that run on the platform. Rovers are deployable units that can be scheduled on different clusters based on requirements and zone capabilities.
kubectl apply -k resources/rover
# Verify the rover is ready
kubectl wait --for=condition=Ready -n controlplane--phoenix--firebirds rovers/rover-echo-v1
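To see what the rover actually deployed, describe it; the status conditions and events show scheduling decisions and any errors. Names are the same as in the commands above:
# Inspect the rover's status conditions and events
kubectl describe rovers/rover-echo-v1 -n controlplane--phoenix--firebirds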
Verification & Troubleshooting
Expected Results
After successful installation, you should see:
✅ Controller Pods
Running in the controlplane namespace
✅ Custom Resources
All resources in Ready state
✅ Team Namespace
Created for the sample team (controlplane--phoenix--firebirds)
Checking Resource Status
# Check all controlplane resources
kubectl get zones,teams,rovers -A
# Check that rover workloads are deployed
kubectl get pods -n controlplane--phoenix--firebirds
Common Issues
If you encounter issues during the installation, here are some common troubleshooting steps:
Controller Logs
Check the controller logs:
kubectl logs -n controlplane deploy/controlplane-controller-manager -c manager
CRD Verification
Verify that all custom resource definitions are installed:
kubectl get crds | grep controlplane
Namespace Check
Ensure the correct namespace is created for your resources:
kubectl get ns | grep controlplane
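If none of the checks above reveals the problem, recent cluster events often do. This is a general-purpose Kubernetes check, not specific to the Control Plane:
# Show the 20 most recent events across all namespaces, newest last
kubectl get events -A --sort-by=.lastTimestamp | tail -n 20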
Next Steps
What's next?
Now that you have a working Control Plane installation, you can explore its capabilities and deploy your own workloads.
Explore the architecture
Learn about the Control Plane components and how they work together
Deploy your own workloads
Create custom Rover resources for your applications (see the sketch after this list)
Understand resource relationships
See how admin, organization, and rover resources interact
Dive deeper
Check out the detailed installation guide for more advanced options
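A practical way to start on your own Rover is to export the sample as a template and explore the schema with kubectl. This sketch uses standard kubectl features and assumes the Rover CRD publishes an OpenAPI schema (modern CRDs do); my-rover.yaml is just an illustrative file name:
# Export the sample rover as a starting point for your own manifest
kubectl get rovers/rover-echo-v1 -n controlplane--phoenix--firebirds -o yaml > my-rover.yaml
# Explore the Rover schema field by field
kubectl explain rovers
kubectl explain rovers.spec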
For complete documentation, visit the GitHub repository.