Installation Guide
Comprehensive guide for setting up the Control Plane on a local Kubernetes environment
This guide provides detailed instructions for setting up the Open Telekom Integration Platform Control Plane in a local Kubernetes environment.
Quick installation
For a condensed version, see our Quickstart Guide.
Prerequisites
Before beginning the installation, ensure you have the following tools installed:
Git
Version control system for source code management
Docker
Container runtime for building and running containers
Kubectl
Kubernetes command-line tool for cluster management
Kind
Kubernetes IN Docker - for local Kubernetes clusters
GitHub CLI
Command-line tool for GitHub authentication
Ko
Container image builder for Go applications (for development)
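As a quick sanity check, you can verify that all of these tools are installed and on your PATH by printing their versions (any reasonably recent versions should work):
# Verify that the prerequisite tools are available
git --version
docker --version
kubectl version --client
kind version
gh --version
ko version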
Installation Overview
Process summary
The installation process consists of four main steps:
1️⃣ Setup
Setting up a local Kubernetes cluster
2️⃣ Installation
Installing the Control Plane components
3️⃣ Resources
Creating the required resources
4️⃣ Verification
Verifying the installation
Local Environment Setup
The Control Plane runs on Kubernetes. For local development and testing, we'll use kind (Kubernetes IN Docker) to create a lightweight local Kubernetes cluster.
Before you start
Please read through the entire section before executing commands to ensure proper setup and avoid potential issues.
Step 1: Set Up a Local Kind Cluster
# Clone the repository (if you haven't already)
git clone --branch main https://github.com/telekom/controlplane.git
cd controlplane
# Create a new Kind cluster
kind create cluster
# Verify you're connected to the correct Kubernetes context
kubectl config current-context
# Output should be: kind-kind
# Create the required namespace
kubectl create namespace controlplane-system
# Set GitHub token to avoid rate limits
export GITHUB_TOKEN=$(gh auth token)
# Install required components with the script
bash ./install.sh --with-cert-manager --with-trust-manager --with-monitoring-crds
# Now your local kind-cluster is set up with the required components:
# - cert-manager (for certificate management)
# - trust-manager (for trust bundle management)
# - monitoring CRDs (for Prometheus monitoring)
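If you want to confirm that these supporting components came up before continuing, you can list their pods. This sketch assumes the script installs cert-manager and trust-manager into the default cert-manager namespace:
# Check that cert-manager and trust-manager pods are running
kubectl get pods -n cert-manager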
Step 2: Install the Control Plane
Components overview
The Control Plane consists of multiple controllers and custom resources that manage workloads across Kubernetes clusters.
Follow these steps to install the Control Plane components:
# Navigate to the installation directory
cd install/local
# Apply the kustomization to install controlplane components
kubectl apply -k .
# Verify installation by checking that controller pods are running
kubectl get pods -A -l control-plane=controller-manager
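If the pods are still starting up, you can optionally block until all controller deployments report the Available condition (adjust the timeout to your machine):
# Optionally wait until all controller deployments are Available
kubectl wait --for=condition=Available deployment -A -l control-plane=controller-manager --timeout=300s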
Step 3: Create Required Resources
Admin Resources
Configure the core platform and infrastructure components
Organization Resources
Set up teams and organizational structure
Rover Resources
Deploy workloads and applications on the platform
3.1 Create Admin Resources
Navigate to the example admin resources and adjust them as needed.
Install the admin resources to your local cluster:
# Apply the admin resources
kubectl apply -k resources/admin
# Verify that the zone is ready
kubectl wait --for=condition=Ready -n controlplane zones/dataplane1
3.2 Create Organization Resources
Navigate to the example organization resources and adjust them as needed.
Install the organization resources to your local cluster:
# Apply the organization resources
kubectl apply -k resources/org
# Verify that the team is ready
kubectl wait --for=condition=Ready -n controlplane teams/phoenix--firebirds
3.3 Create Rover Resources
Navigate to the example rover resources and adjust them as needed.
Install the rover resources to your local cluster:
# Apply the rover resources
kubectl apply -k resources/rover
# Verify that the rover is ready
kubectl wait --for=condition=Ready -n controlplane--phoenix--firebirds rovers/rover-echo-v1
Development: Testing Local Controllers
Development workflow
For developers who want to test their own controller implementations, there are two approaches available.
To test a locally developed controller, choose one of the options described below.
Option 1: Replace Controller Images
You now have the stable release deployed on your local kind-cluster. To swap in your own build of a specific controller, follow these steps:
1. Navigate
Go to install/local/kustomization.yaml
2. Build Controller
Build the controller you want to test
3. Update Image
Update the kustomization file with your new image
4. Apply Changes
Deploy the updated configuration to the cluster
# Example for rover controller
cd rover
export KO_DOCKER_REPO=ko.local
ko build --bare cmd/main.go --tags rover-test
kind load docker-image ko.local:rover-test
Update install/local/kustomization.yaml with the new rover-controller image (see the sketch below), then install again:
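A sketch of what the images override in install/local/kustomization.yaml could look like; the name field below is a placeholder, keep whatever image name is already used in the file:
images:
  - name: <existing-rover-controller-image> # placeholder for the image name already in the file
    newName: ko.local
    newTag: rover-test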
# [optional] navigate back to the install/local directory
cd install/local
# Apply the changes
kubectl apply -k .
Success! Your controller is now running with your local version.
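To double-check that the cluster is really running your image, you can print the image used by each controller deployment (a sketch reusing the label selector from the verification step above):
# Show which image each controller deployment is running
kubectl get deploy -A -l control-plane=controller-manager -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image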
Option 2: Run Controllers Locally
Webhook limitations
You cannot test any webhook logic by running the controller like this!
- Before you run the Install the Control Plane step, navigate to install/local/kustomization.yaml
- Edit the file and comment out every controller that you want to test locally, e.g. to test the rover, api, and gateway controllers:
resources:
- ../../secret-manager/config/default
- ../../identity/config/default
# - ../../gateway/config/default
- ../../approval/config/default
# - ../../rover/config/default
- ../../application/config/default
- ../../organization/config/default
# - ../../api/config/default
- ../../admin/config/default
- Apply it using kubectl apply -k .
- Now the rover, api, and gateway controllers are not deployed; you need to run each of them manually:
# for each controller you commented out, e.g. rover
cd rover
make install # install the CRDs
export ENABLE_WEBHOOKS=false # if applicable, disable webhooks
go run cmd/main.go # start the controller process
- Success! Your controllers are now running as separate processes connected to the kind-cluster
Troubleshooting
Common issues
If you encounter issues during installation, here are some common troubleshooting steps.
Controller Logs
Check controller logs for errors:
kubectl logs -n controlplane deploy/controlplane-controller-manager -c manager
CRD Verification
Verify custom resource definitions:
kubectl get crds | grep controlplane
Resource Status
Check status of created resources:
kubectl get zones,teams,rovers -A
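If a resource stays in a not-ready state, describing it shows its status conditions and recent events; for example, for the zone created earlier:
# Inspect status conditions and events of a specific resource
kubectl describe zones dataplane1 -n controlplane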
Next Steps
After successful installation, you can:
🔍 Explore Resources
Use kubectl commands to examine the created resources and understand their relationships (see the example after this list)
🚀 Deploy Workloads
Create and deploy your own workloads using Rover resources
Learn Architecture
Study the Control Plane architecture and component interactions
🔌 Test APIs
Interact with API endpoints for resource management
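For example, to explore the resources created in this guide, you could dump the example rover and look at how its spec and status reference other objects:
# Example: inspect the rover created in step 3.3
kubectl get rovers rover-echo-v1 -n controlplane--phoenix--firebirds -o yaml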