About these diagrams: These diagrams illustrate the infrastructure setup, network topology, and service configurations for various OpenShift deployment scenarios on PowerVM.
❓ Why Bastion Services? Wondering why we need dedicated bastion LPARs instead of using existing corporate infrastructure?
Read the detailed explanation →
🏗️ SNO (Single Node OpenShift) Architecture
This diagram shows the complete SNO setup with two LPARs: a Bastion LPAR running essential services (DNS, DHCP, HTTP, TFTP) and a SNO LPAR running the OpenShift single-node cluster. The bastion provides PXE boot capabilities and serves ignition files for automated installation.
```mermaid
graph TB
    subgraph PowerVM["PowerVM Environment
Network: 9.47.80.0/20 | Gateway: 9.47.95.254"]
        subgraph Bastion["Bastion LPAR
IP: 9.47.87.83
vCPU: 2 | Memory: 8GB | Storage: 50GB"]
            DNS["🌐 dnsmasq
━━━━━━━━━━━━━━
DNS Server
• api.sno.ocp.io
• api-int.sno.ocp.io
• *.apps.sno.ocp.io"]
            DHCP["📡 dnsmasq
━━━━━━━━━━━━━━
DHCP Server
• IP Assignment
• PXE Boot Config"]
            TFTP["📦 dnsmasq
━━━━━━━━━━━━━━
TFTP Server
• /var/lib/tftpboot
• PXE Boot Files"]
            HTTP["🌍 httpd:8000
━━━━━━━━━━━━━━
HTTP Server
• /ignition/sno.ign
• /install/rootfs.img"]
            GRUB["⚙️ grub2
━━━━━━━━━━━━━━
Network Boot
• RHCOS Kernel
• RHCOS Initramfs"]
        end
        subgraph SNO["SNO LPAR
IP: 9.47.87.82 | MAC: fa:b0:45:27:43:20
vCPU: 8 | Memory: 16GB | Storage: 120GB"]
            OCP["🎯 OpenShift Single Node
━━━━━━━━━━━━━━━━━━━━
• Control Plane
• Worker Node
• All OCP Services"]
        end
    end
    DNS -.->|DNS Resolution| SNO
    DHCP -->|1. DHCP Offer
IP Assignment| SNO
    TFTP -->|2. PXE Boot
Kernel + Initramfs| SNO
    HTTP -->|3. Download
Rootfs + Ignition| SNO
    GRUB -.->|Boot Config| SNO
    style PowerVM fill:#e3f2fd,stroke:#1565c0,stroke-width:3px
    style Bastion fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style SNO fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style DNS fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style DHCP fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style TFTP fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style HTTP fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style GRUB fill:#fce4ec,stroke:#c2185b,stroke-width:2px
    style OCP fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
```
📋 Installation Flow
- Step 1 - Bastion Setup: Configure dnsmasq (DNS/DHCP/TFTP), httpd, PXE boot files, create ignition file, download RHCOS images
- Step 2 - SNO Installation: Network boot SNO LPAR via PXE → DHCP assigns IP → PXE loads kernel & initramfs → Downloads rootfs from HTTP → Applies ignition configuration → Installs OpenShift
- Step 3 - Monitoring: Use `openshift-install wait-for bootstrap-complete` and `openshift-install wait-for install-complete`, then verify with `oc get nodes` and `oc get co`
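The monitoring commands in Step 3 can be run from the bastion; a minimal sketch, assuming the installer assets live in a hypothetical `./sno` directory:

```shell
# Wait for bootstrap, then for the full install (--dir points at the
# directory holding the install assets; ./sno is an assumption)
openshift-install wait-for bootstrap-complete --dir ./sno --log-level info
openshift-install wait-for install-complete --dir ./sno

# Verify with the generated kubeconfig
export KUBECONFIG=./sno/auth/kubeconfig
oc get nodes
oc get co
```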
🔧 Key Services on Bastion LPAR
- dnsmasq: Provides DNS, DHCP, and TFTP services for PXE boot
- httpd: HTTP server (port 8000) serving ignition files and RHCOS images
- grub2: Network boot configuration for PowerVM
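Because dnsmasq provides all three services from a single daemon, the SNO setup above fits in one config file; a sketch, where the interface name and boot file path are assumptions and the addresses match the diagram:

```
# /etc/dnsmasq.conf (sketch) -- DNS + DHCP + TFTP from one daemon
interface=env2                          # assumed bastion NIC name
domain=sno.ocp.io

# DNS: API and wildcard apps records all resolve to the SNO node
address=/api.sno.ocp.io/9.47.87.82
address=/api-int.sno.ocp.io/9.47.87.82
address=/.apps.sno.ocp.io/9.47.87.82

# DHCP: pin the SNO LPAR's MAC to its IP and point it at the bootloader
dhcp-range=9.47.80.1,9.47.95.253,255.255.240.0,12h
dhcp-host=fa:b0:45:27:43:20,9.47.87.82
dhcp-boot=boot/grub2/powerpc-ieee1275/core.elf   # assumed grub2-mknetdir layout

# TFTP: serve the network-boot files
enable-tftp
tftp-root=/var/lib/tftpboot
```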
📁 Important File Locations
- Configuration: /etc/dnsmasq.conf, /etc/dnsmasq.d/addnhosts
- PXE Boot: /var/lib/tftpboot/boot/grub2/grub.cfg, /var/lib/tftpboot/rhcos/
- HTTP Content: /var/www/html/ignition/, /var/www/html/install/
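A `grub.cfg` sketch tying these locations together for PXE boot; the RHCOS artifact file names are assumptions, while the ignition and rootfs URLs follow the httpd:8000 layout above:

```
# /var/lib/tftpboot/boot/grub2/grub.cfg (sketch)
menuentry "Install RHCOS (SNO)" {
    linux rhcos/rhcos-live-kernel-ppc64le \
        rd.neednet=1 ip=dhcp \
        coreos.live.rootfs_url=http://9.47.87.83:8000/install/rootfs.img \
        ignition.config.url=http://9.47.87.83:8000/ignition/sno.ign \
        ignition.firstboot ignition.platform.id=metal
    initrd rhcos/rhcos-live-initramfs.ppc64le.img
}
```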
🔧 Day 2 Operations - Adding Worker Nodes
This diagram illustrates the Day 2 architecture where worker nodes are added to an existing OpenShift cluster. The bastion LPAR runs additional services including HAProxy for load balancing API and ingress traffic across master and worker nodes.
```mermaid
graph TB
    subgraph PowerVM["PowerVM Environment - Day 2 Architecture"]
        subgraph Bastion["Bastion LPAR
All Services Running"]
            DNS2["🌐 dnsmasq
━━━━━━━━━━━━━━
DNS Server
• api.cluster.domain
• api-int.cluster.domain
• *.apps.cluster.domain"]
            DHCP2["📡 dnsmasq
━━━━━━━━━━━━━━
DHCP Server
• IP Assignment
• PXE Boot Config"]
            TFTP2["📦 dnsmasq
━━━━━━━━━━━━━━
TFTP Server
• /var/lib/tftpboot
• PXE Boot Files"]
            HTTP2["🌍 httpd:8000
━━━━━━━━━━━━━━
HTTP Server
• Ignition files
• RHCOS images"]
            HAPROXY["⚖️ HAProxy
━━━━━━━━━━━━━━
Load Balancer
• API: 6443
• Machine Config: 22623/22624
• HTTP Ingress: 80
• HTTPS Ingress: 443
• Stats: 9000"]
            NFS["💾 NFS (optional)
━━━━━━━━━━━━━━
Storage Server
• Persistent Volumes
• Image Registry"]
        end
        subgraph Cluster["OpenShift Cluster"]
            subgraph Masters["Master Nodes"]
                M1["🎯 Master-1
━━━━━━━━━━
Control Plane
API: 6443
MCS: 22623"]
                M2["🎯 Master-2
━━━━━━━━━━
Control Plane
API: 6443
MCS: 22623"]
                M3["🎯 Master-3
━━━━━━━━━━
Control Plane
API: 6443
MCS: 22623"]
            end
            subgraph Workers["Worker Nodes (Day 2)"]
                W1["⚙️ Worker-1
━━━━━━━━━━
Workload Node
HTTP: 80
HTTPS: 443"]
                W2["⚙️ Day2-Worker-2
━━━━━━━━━━
Workload Node
HTTP: 80
HTTPS: 443"]
            end
        end
    end
    DNS2 -.->|DNS Resolution| Cluster
    DHCP2 -->|IP Assignment| Workers
    TFTP2 -->|PXE Boot| Workers
    HTTP2 -->|Ignition + RHCOS| Workers
    NFS -.->|Storage| Cluster
    HAPROXY -->|API Load Balance
Port 6443| Masters
    HAPROXY -->|Machine Config
Port 22623| Masters
    HAPROXY -->|Day2 Machine Config
Port 22624| Masters
    HAPROXY -->|HTTP Ingress
Port 80| Workers
    HAPROXY -->|HTTPS Ingress
Port 443| Workers
    style PowerVM fill:#e3f2fd,stroke:#1565c0,stroke-width:3px
    style Bastion fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style Cluster fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Masters fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style Workers fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style DNS2 fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style DHCP2 fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style TFTP2 fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style HTTP2 fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style HAPROXY fill:#ffebee,stroke:#c62828,stroke-width:2px
    style NFS fill:#e0f2f1,stroke:#00695c,stroke-width:2px
    style M1 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style M2 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style M3 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style W1 fill:#fff59d,stroke:#f57f17,stroke-width:2px
    style W2 fill:#fff59d,stroke:#f57f17,stroke-width:2px
```
⚖️ HAProxy Load Balancer Configuration
- API Server (6443): Load balances OpenShift API requests across all master nodes
- Machine Config Server (22623): Routes machine configuration requests during Day 1 installation
- Machine Config Server Day 2 (22624): Routes machine configuration requests for Day 2 worker additions
- HTTP Ingress (80): Load balances HTTP traffic to worker nodes (or masters if no workers)
- HTTPS Ingress (443): Load balances HTTPS traffic to worker nodes (or masters if no workers)
- Stats Dashboard (9000): HAProxy statistics and health monitoring interface
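A trimmed `haproxy.cfg` sketch for the frontends listed above; the backend node IPs are placeholders, and the machine-config frontends (22623/22624) and HTTP ingress (80) follow the same TCP pattern as the ones shown:

```
# /etc/haproxy/haproxy.cfg (sketch) -- backend IPs are assumptions
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend
backend api-backend
    mode tcp
    balance roundrobin
    server master-1 9.47.87.91:6443 check
    server master-2 9.47.87.92:6443 check
    server master-3 9.47.87.93:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend
backend ingress-https-backend
    mode tcp
    server worker-1 9.47.87.94:443 check
    server worker-2 9.47.87.95:443 check

listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
```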
📋 Day 2 Installation Flow
- Step 1 - Service Setup: Update bastion services (dnsmasq, HAProxy) with new worker node information
- Step 2 - Create Ignition: Generate Day 2 ignition files for new worker nodes using existing cluster credentials
- Step 3 - Network Boot: PXE boot new worker LPARs using the `lpar_netboot` command
- Step 4 - Monitor: Watch worker nodes join the cluster and approve CSRs (Certificate Signing Requests)
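Step 4's CSR approval can be done from the bastion with the cluster kubeconfig; a sketch:

```shell
# List pending certificate signing requests from the new workers
oc get csr

# Approve everything pending; each new worker typically submits two
# rounds of CSRs (client certificate, then serving certificate)
oc get csr -o name | xargs oc adm certificate approve

# Confirm the workers joined and are Ready
oc get nodes
```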
🔧 Complete Bastion Services for HA Clusters
- dnsmasq: DNS, DHCP, and TFTP services
- httpd: HTTP server for ignition files and RHCOS images
- HAProxy: Load balancer for API, machine config, and ingress traffic
- NFS (optional): Network storage for persistent volumes and image registry
- Squid (optional): HTTP proxy for restricted network environments
- Chrony (optional): Time synchronization service
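For the optional NFS service, a minimal `/etc/exports` sketch (the export path and client subnet are assumptions):

```
# /etc/exports -- share one directory tree with the cluster subnet
/export/ocp 9.47.80.0/20(rw,sync,no_root_squash)
```

After editing, `exportfs -ra` re-reads the file; an OpenShift PersistentVolume would then reference this server and path.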
🏢 Multi-Cluster Architecture - Dedicated Bastion per Cluster
This diagram illustrates a multi-cluster deployment strategy where each OpenShift cluster has its own dedicated bastion node. This architecture provides isolation, independent lifecycle management, and enhanced security for enterprise environments managing multiple clusters.
```mermaid
graph TB
    subgraph PowerVM["PowerVM Infrastructure - Multi-Cluster Environment"]
        subgraph Cluster1["🔵 Production Cluster"]
            B1["🛡️ Bastion-Prod
━━━━━━━━━━━━━━
• dnsmasq
• httpd
• HAProxy
• NFS"]
            C1M1["Master-1"]
            C1M2["Master-2"]
            C1M3["Master-3"]
            C1W1["Worker-1"]
            C1W2["Worker-2"]
            B1 -.->|Manages| C1M1
            B1 -.->|Manages| C1M2
            B1 -.->|Manages| C1M3
            B1 -.->|Manages| C1W1
            B1 -.->|Manages| C1W2
        end
        subgraph Cluster2["🟢 Development Cluster"]
            B2["🛡️ Bastion-Dev
━━━━━━━━━━━━━━
• dnsmasq
• httpd
• HAProxy
• NFS"]
            C2M1["Master-1"]
            C2M2["Master-2"]
            C2M3["Master-3"]
            C2W1["Worker-1"]
            B2 -.->|Manages| C2M1
            B2 -.->|Manages| C2M2
            B2 -.->|Manages| C2M3
            B2 -.->|Manages| C2W1
        end
        subgraph Cluster3["🟡 Testing Cluster"]
            B3["🛡️ Bastion-Test
━━━━━━━━━━━━━━
• dnsmasq
• httpd
• HAProxy
• NFS"]
            C3M1["Master-1"]
            C3M2["Master-2"]
            C3M3["Master-3"]
            C3W1["Worker-1"]
            B3 -.->|Manages| C3M1
            B3 -.->|Manages| C3M2
            B3 -.->|Manages| C3M3
            B3 -.->|Manages| C3W1
        end
        subgraph Cluster4["🔴 DR Cluster"]
            B4["🛡️ Bastion-DR
━━━━━━━━━━━━━━
• dnsmasq
• httpd
• HAProxy
• NFS"]
            C4M1["Master-1"]
            C4M2["Master-2"]
            C4M3["Master-3"]
            C4W1["Worker-1"]
            C4W2["Worker-2"]
            B4 -.->|Manages| C4M1
            B4 -.->|Manages| C4M2
            B4 -.->|Manages| C4M3
            B4 -.->|Manages| C4W1
            B4 -.->|Manages| C4W2
        end
    end
    style PowerVM fill:#e3f2fd,stroke:#1565c0,stroke-width:3px
    style Cluster1 fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Cluster2 fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Cluster3 fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style Cluster4 fill:#ffebee,stroke:#c62828,stroke-width:2px
    style B1 fill:#bbdefb,stroke:#1565c0,stroke-width:3px
    style B2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px
    style B3 fill:#fff59d,stroke:#f57f17,stroke-width:3px
    style B4 fill:#ffcdd2,stroke:#c62828,stroke-width:3px
    style C1M1 fill:#90caf9,stroke:#1565c0,stroke-width:2px
    style C1M2 fill:#90caf9,stroke:#1565c0,stroke-width:2px
    style C1M3 fill:#90caf9,stroke:#1565c0,stroke-width:2px
    style C1W1 fill:#64b5f6,stroke:#1565c0,stroke-width:2px
    style C1W2 fill:#64b5f6,stroke:#1565c0,stroke-width:2px
    style C2M1 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style C2M2 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style C2M3 fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style C2W1 fill:#81c784,stroke:#2e7d32,stroke-width:2px
    style C3M1 fill:#fff59d,stroke:#f57f17,stroke-width:2px
    style C3M2 fill:#fff59d,stroke:#f57f17,stroke-width:2px
    style C3M3 fill:#fff59d,stroke:#f57f17,stroke-width:2px
    style C3W1 fill:#ffee58,stroke:#f57f17,stroke-width:2px
    style C4M1 fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style C4M2 fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style C4M3 fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style C4W1 fill:#e57373,stroke:#c62828,stroke-width:2px
    style C4W2 fill:#e57373,stroke:#c62828,stroke-width:2px
```
✅ Benefits of Dedicated Bastion per Cluster
- 🔒 Security Isolation: Each cluster has its own isolated bastion, preventing cross-cluster security breaches. If one bastion is compromised, other clusters remain protected.
- 🎯 Independent Lifecycle Management: Upgrade, patch, or maintain each bastion independently without affecting other clusters. Different clusters can run different OpenShift versions.
- ⚡ Performance & Scalability: Dedicated resources per cluster prevent resource contention. Each bastion can be sized appropriately for its cluster's needs.
- 🛠️ Configuration Flexibility: Each bastion can have unique configurations (DNS zones, DHCP ranges, HAProxy rules) tailored to its cluster's requirements.
- 🔄 Simplified Troubleshooting: Issues are isolated to specific clusters. Logs, services, and configurations are separate, making debugging easier.
- 📊 Resource Accountability: Clear resource allocation and cost tracking per cluster. Each environment (prod, dev, test, DR) has dedicated infrastructure.
- 🚀 Parallel Operations: Perform Day 2 operations (adding workers, upgrades) on multiple clusters simultaneously without conflicts.
- 🔐 Network Segmentation: Each cluster can operate in different network zones with appropriate firewall rules and access controls.
🏗️ Typical Multi-Cluster Use Cases
- Production Cluster (Blue): Mission-critical workloads with high availability, strict SLAs, and enhanced monitoring
- Development Cluster (Green): Active development environment with frequent deployments and testing
- Testing/QA Cluster (Yellow): Pre-production validation, integration testing, and quality assurance
- Disaster Recovery Cluster (Red): Standby cluster in different location for business continuity
📋 Bastion Services per Cluster
- dnsmasq: Cluster-specific DNS zones (e.g., prod.ocp.io, dev.ocp.io, test.ocp.io, dr.ocp.io)
- httpd: Serves ignition files and RHCOS images for its cluster nodes
- HAProxy: Load balances API and ingress traffic for its cluster only
- NFS: Provides storage for its cluster's persistent volumes and image registry
- Monitoring: Each bastion can run cluster-specific monitoring and alerting
⚠️ Alternative: Shared Bastion Considerations
While a shared bastion for multiple clusters is possible, it introduces:
- ❌ Single point of failure affecting all clusters
- ❌ Complex configuration management with potential for conflicts
- ❌ Resource contention during simultaneous operations
- ❌ Reduced security isolation between environments
- ❌ Difficult troubleshooting with mixed logs and services
Recommendation: Use dedicated bastions for production and DR clusters at minimum. Dev/Test clusters may share a bastion if budget constraints exist.
🌐 DNS Architecture: Corporate DNS vs Bastion DNS
This diagram compares two DNS approaches in a hybrid environment with x86 and Power clusters: using only corporate DNS versus using dedicated bastion DNS for each cluster. Understanding the trade-offs helps make informed architectural decisions.
```mermaid
graph TB
    subgraph Corporate["Corporate Infrastructure"]
        CorpDNS["🏢 Corporate DNS
━━━━━━━━━━━━━━
• company.com zone
• Centrally managed
• Change control process
• High availability"]
        CorpDHCP["🏢 Corporate DHCP
━━━━━━━━━━━━━━
• Enterprise-wide
• Standardized"]
        CorpLB["🏢 Corporate Load Balancer
━━━━━━━━━━━━━━
• F5 / NetScaler
• Expensive
• Long provisioning"]
    end
    subgraph X86["x86 Infrastructure"]
        X86Cluster1["☁️ x86 OCP Cluster 1
━━━━━━━━━━━━━━
prod-x86.company.com
• Uses Corporate DNS
• Uses Corporate LB
• Established process"]
        X86Cluster2["☁️ x86 OCP Cluster 2
━━━━━━━━━━━━━━
dev-x86.company.com
• Uses Corporate DNS
• Uses Corporate LB
• Established process"]
    end
    subgraph PowerApproach1["❌ Approach 1: Corporate DNS Only (Not Recommended)"]
        PowerCluster1A["⚡ Power OCP Cluster 1
━━━━━━━━━━━━━━
prod-power.company.com
❌ Depends on Corp DNS
❌ Slow changes
❌ No self-service"]
        PowerCluster2A["⚡ Power OCP Cluster 2
━━━━━━━━━━━━━━
dev-power.company.com
❌ Depends on Corp DNS
❌ Slow changes
❌ No self-service"]
    end
    subgraph PowerApproach2["✅ Approach 2: Bastion DNS (Recommended)"]
        subgraph Bastion1["Bastion 1"]
            B1DNS["🛡️ Bastion DNS
━━━━━━━━━━━━━━
• prod-power.company.com
• Self-service
• Instant changes
• Forwards to Corp DNS"]
            B1HAProxy["⚖️ HAProxy
━━━━━━━━━━━━━━
• API: 6443
• Ingress: 80/443"]
        end
        PowerCluster1B["⚡ Power OCP Cluster 1
━━━━━━━━━━━━━━
prod-power.company.com
✅ Independent
✅ Fast deployment
✅ Self-service"]
        subgraph Bastion2["Bastion 2"]
            B2DNS["🛡️ Bastion DNS
━━━━━━━━━━━━━━
• dev-power.company.com
• Self-service
• Instant changes
• Forwards to Corp DNS"]
            B2HAProxy["⚖️ HAProxy
━━━━━━━━━━━━━━
• API: 6443
• Ingress: 80/443"]
        end
        PowerCluster2B["⚡ Power OCP Cluster 2
━━━━━━━━━━━━━━
dev-power.company.com
✅ Independent
✅ Fast deployment
✅ Self-service"]
    end
    CorpDNS -.->|Manages zones| X86Cluster1
    CorpDNS -.->|Manages zones| X86Cluster2
    CorpLB -.->|Load balances| X86Cluster1
    CorpLB -.->|Load balances| X86Cluster2
    CorpDNS -.->|❌ Slow process| PowerCluster1A
    CorpDNS -.->|❌ Slow process| PowerCluster2A
    B1DNS -->|Authoritative for cluster| PowerCluster1B
    B1DNS -.->|Forwards external queries| CorpDNS
    B1HAProxy -->|Load balances| PowerCluster1B
    B2DNS -->|Authoritative for cluster| PowerCluster2B
    B2DNS -.->|Forwards external queries| CorpDNS
    B2HAProxy -->|Load balances| PowerCluster2B
    CorpDNS -.->|Delegates subdomain
prod-power.company.com| B1DNS
    CorpDNS -.->|Delegates subdomain
dev-power.company.com| B2DNS
    style Corporate fill:#f5f5f5,stroke:#757575,stroke-width:2px
    style X86 fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style PowerApproach1 fill:#ffebee,stroke:#c62828,stroke-width:3px
    style PowerApproach2 fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px
    style CorpDNS fill:#e0e0e0,stroke:#616161,stroke-width:2px
    style CorpDHCP fill:#e0e0e0,stroke:#616161,stroke-width:2px
    style CorpLB fill:#e0e0e0,stroke:#616161,stroke-width:2px
    style X86Cluster1 fill:#90caf9,stroke:#1565c0,stroke-width:2px
    style X86Cluster2 fill:#90caf9,stroke:#1565c0,stroke-width:2px
    style PowerCluster1A fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style PowerCluster2A fill:#ef9a9a,stroke:#c62828,stroke-width:2px
    style Bastion1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style Bastion2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style B1DNS fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style B1HAProxy fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style B2DNS fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style B2HAProxy fill:#a5d6a7,stroke:#2e7d32,stroke-width:2px
    style PowerCluster1B fill:#81c784,stroke:#2e7d32,stroke-width:2px
    style PowerCluster2B fill:#81c784,stroke:#2e7d32,stroke-width:2px
```
❌ Approach 1: Corporate DNS Only - Challenges
- Slow Change Management:
- ❌ DNS changes require tickets and approvals (days/weeks)
- ❌ Each cluster creation/destruction needs DNS team involvement
- ❌ Dev/test clusters can't be created on-demand
- ❌ Troubleshooting requires waiting for DNS changes
- No Load Balancer:
- ❌ Corporate LB provisioning is expensive and slow
- ❌ May not support OpenShift-specific requirements
- ❌ Overkill for dev/test clusters
- PXE Boot Issues:
- ❌ Corporate DHCP can't provide PowerVM-specific boot parameters
- ❌ No integration with TFTP for netboot
- ❌ Can't serve ignition files during boot
- Operational Overhead:
- ❌ Every cluster operation requires coordination with DNS team
- ❌ Day 2 operations (adding workers) need DNS updates
- ❌ Can't quickly iterate during troubleshooting
✅ Approach 2: Bastion DNS - Benefits
- Self-Service & Speed:
- ✅ Create/destroy clusters instantly without DNS tickets
- ✅ Modify DNS entries immediately during troubleshooting
- ✅ Dev/test clusters can be spun up on-demand
- ✅ Day 2 operations (add workers) are instant
- Integrated Services:
- ✅ DNS, DHCP, TFTP, HTTP work together seamlessly
- ✅ PXE boot works out-of-the-box for PowerVM
- ✅ HAProxy provides free, integrated load balancing
- ✅ All services configured for OpenShift requirements
- Isolation & Security:
- ✅ Each cluster's DNS is isolated
- ✅ Cluster compromise doesn't affect corporate DNS
- ✅ Different security policies per cluster
- ✅ Clear audit trail per cluster
- Hybrid Integration:
- ✅ Bastion DNS forwards external queries to corporate DNS
- ✅ Corporate DNS can delegate subdomains to bastion
- ✅ Best of both worlds: self-service + corporate integration
🔄 DNS Delegation Strategy (Recommended)
Best Practice: Use DNS delegation to integrate bastion DNS with corporate DNS:
- Corporate DNS delegates subdomain:
- Corporate DNS: "For prod-power.company.com, ask bastion-1 at 10.1.1.10"
- Corporate DNS: "For dev-power.company.com, ask bastion-2 at 10.1.1.20"
- Bastion DNS is authoritative for its subdomain:
- Bastion-1 answers: api.prod-power.company.com → 10.1.1.11
- Bastion-1 answers: *.apps.prod-power.company.com → 10.1.1.11
- Bastion DNS forwards other queries:
- Query for google.com → Forward to corporate DNS
- Query for other-cluster.company.com → Forward to corporate DNS
Result: Users can resolve cluster DNS from anywhere in the corporate network, but cluster teams have full control over their DNS zone.
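The three steps above can be sketched on both sides of the delegation; the record values follow the example IPs in the list, while the corporate-resolver address is an assumption:

```
# Corporate DNS (BIND-style fragment in the company.com zone):
# delegate the subdomain to the bastion
prod-power.company.com.  IN NS  bastion-1.company.com.
bastion-1.company.com.   IN A   10.1.1.10

# Bastion-1 dnsmasq: authoritative for its subdomain,
# forwards everything else upstream
auth-zone=prod-power.company.com
address=/api.prod-power.company.com/10.1.1.11
address=/.apps.prod-power.company.com/10.1.1.11
server=10.0.0.53        # assumed corporate DNS resolver
```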
📊 Comparison Table
| Aspect | Corporate DNS Only | Bastion DNS |
|---|---|---|
| Cluster Creation Time | ❌ Days/weeks (DNS tickets) | ✅ Minutes (self-service) |
| Day 2 Operations | ❌ Requires DNS team | ✅ Instant, self-service |
| Load Balancer | ❌ Expensive, slow provisioning | ✅ Free HAProxy included |
| PXE Boot Support | ❌ Complex/impossible | ✅ Built-in, tested |
| Troubleshooting | ❌ Slow (wait for DNS changes) | ✅ Fast (immediate changes) |
| Security Isolation | ❌ Shared infrastructure | ✅ Per-cluster isolation |
| Cost | ❌ High (LB, DNS team time) | ✅ Low (2 vCPU, 8GB RAM) |
| Corporate Integration | ✅ Native | ✅ Via delegation/forwarding |
🎯 Recommendation
Use Bastion DNS for Power clusters while keeping x86 clusters on corporate DNS if that's already working well.
- ✅ x86 clusters: Continue using corporate DNS if established processes work
- ✅ Power clusters: Use bastion DNS for speed, flexibility, and PowerVM-specific requirements
- ✅ Integration: Use DNS delegation so both approaches work together seamlessly
- ✅ Best of both: Corporate governance + self-service agility
🔄 Assisted Installer - Installation Sequence
This sequence diagram shows the step-by-step workflow for installing OpenShift using the Assisted Installer automation. The process involves bastion setup, Red Hat Console interaction, PXE boot, and automated cluster installation.
```mermaid
sequenceDiagram
    participant User
    participant Bastion as Bastion LPAR
    participant Console as Red Hat Console<br/>(console.redhat.com)
    participant Nodes as OCP Nodes<br/>(Masters/Workers)
    participant HMC as HMC/PowerVM

    Note over User,HMC: Phase 1: Bastion Setup
    User->>Bastion: 1. Provision bastion LPAR<br/>(2 vCPU, 8GB RAM, 50GB)
    User->>Bastion: 2. Set SELINUX=permissive
    User->>Bastion: 3. Run Ansible playbook<br/>(setup-bastion.yaml)
    Bastion->>Bastion: Install packages:<br/>dnsmasq, httpd, haproxy,<br/>coreos-installer
    Bastion->>Bastion: Configure dnsmasq<br/>(DNS, DHCP, TFTP)
    Bastion->>Bastion: Configure HAProxy<br/>(API: 6443, Ingress: 80/443)
    Bastion->>Bastion: Setup PXE boot<br/>(grub2-mknetdir)

    Note over User,HMC: Phase 2: Create Discovery ISO
    User->>Console: 4. Login to Assisted Installer
    User->>Console: 5. Click "Create Cluster"
    User->>Console: 6. Fill cluster details<br/>(name, domain, version)
    User->>Console: 7. Skip operators selection
    User->>Console: 8. Add hosts → Generate ISO
    Console->>Console: Generate Discovery ISO<br/>with embedded ignition
    Console-->>User: 9. Provide ISO download URL

    Note over User,HMC: Phase 3: Extract & Deploy ISO
    User->>Bastion: 10. Download discovery ISO
    Bastion->>Bastion: 11. Extract ignition file<br/>(coreos-installer iso ignition show)
    Bastion->>Bastion: 12. Extract PXE files<br/>(kernel, initramfs, rootfs)
    Bastion->>Bastion: 13. Copy to directories:<br/>• /var/www/html/ignition/<br/>• /var/www/html/install/<br/>• /var/lib/tftpboot/rhcos/
    Bastion->>Bastion: 14. Update grub.cfg<br/>with node MAC addresses

    Note over User,HMC: Phase 4: Network Boot Nodes
    User->>HMC: 15. Execute lpar_netboot<br/>for each node
    HMC->>Nodes: 16. Initiate PXE boot
    Nodes->>Bastion: 17. DHCP request
    Bastion-->>Nodes: 18. DHCP offer + PXE config
    Nodes->>Bastion: 19. TFTP: Download kernel<br/>& initramfs
    Nodes->>Bastion: 20. HTTP: Download rootfs.img
    Nodes->>Bastion: 21. HTTP: Download ignition
    Nodes->>Nodes: 22. Boot RHCOS with ignition
    Nodes->>Console: 23. Register with<br/>Assisted Service

    Note over User,HMC: Phase 5: Monitor & Install
    Console->>Console: 24. Discover nodes<br/>(hardware inventory)
    User->>Console: 25. Verify all nodes "Ready"
    User->>Console: 26. Configure storage
    User->>Console: 27. Select "User-Managed<br/>Networking"
    User->>Console: 28. Review & click<br/>"Install Cluster"
    Console->>Nodes: 29. Start installation
    Nodes->>Nodes: 30. Install RHCOS to disk
    Nodes->>Nodes: 31. Bootstrap cluster
    Nodes->>Nodes: 32. Deploy control plane
    Nodes->>Nodes: 33. Deploy operators
    Console-->>User: 34. Installation progress<br/>& events

    Note over User,HMC: Phase 6: Completion
    Nodes->>Console: 35. Report installation<br/>complete
    Console-->>User: 36. Display "Cluster Ready"
    User->>Console: 37. Download kubeconfig
    User->>Console: 38. Get kubeadmin password
    User->>Bastion: 39. Set KUBECONFIG
    User->>Nodes: 40. Verify cluster:<br/>oc get nodes, oc get co
```
📋 Installation Phases Explained
- Phase 1 - Bastion Setup (Steps 1-3): Provision and configure bastion LPAR with all required services (DNS, DHCP, TFTP, HTTP, HAProxy)
- Phase 2 - Create Discovery ISO (Steps 4-9): Use Red Hat Console to generate cluster-specific discovery ISO with embedded ignition configuration
- Phase 3 - Extract & Deploy (Steps 10-14): Download ISO, extract components, and deploy to bastion's PXE/HTTP directories
- Phase 4 - Network Boot (Steps 15-23): PXE boot all nodes, download RHCOS and ignition, register with Assisted Service
- Phase 5 - Monitor & Install (Steps 24-34): Monitor node discovery, configure cluster settings, start automated installation
- Phase 6 - Completion (Steps 35-40): Download credentials, verify cluster health
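Step 14 above keys the PXE boot configuration to each node's MAC address. A grub.cfg entry on the bastion might look like the following sketch; the file path, MAC, and IPs reuse this document's SNO example, and the exact kernel arguments should be checked against the RHCOS release you deploy.

```
# /var/lib/tftpboot/boot/grub2/grub.cfg (illustrative sketch)
# Select the install entry only for the LPAR whose NIC matches this MAC,
# so each node fetches its own ignition/rootfs from the bastion httpd (:8000).
if [ ${net_default_mac} == fa:b0:45:27:43:20 ]; then
  set default=0
fi
menuentry "Install RHCOS (sno)" {
  linux rhcos/vmlinuz ip=dhcp rd.neednet=1 \
    coreos.live.rootfs_url=http://9.47.87.83:8000/install/rootfs.img \
    ignition.config.url=http://9.47.87.83:8000/ignition/sno.ign \
    ignition.firstboot ignition.platform.id=metal
  initrd rhcos/initramfs.img
}
```

For multi-node clusters, one such conditional block per MAC address lets a single grub.cfg serve every node from the same TFTP root.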
🔑 Key Components & Their Roles
- Bastion LPAR:
- Runs dnsmasq (DNS, DHCP, TFTP for PXE boot)
- Runs httpd (serves ignition files and RHCOS images)
- Runs HAProxy (load balances API and ingress traffic)
- Stores extracted ISO components
- Red Hat Console (Assisted Installer):
- Generates cluster-specific discovery ISO
- Manages cluster configuration and validation
- Orchestrates installation process
- Monitors installation progress and events
- Discovery ISO:
- Contains RHCOS kernel, initramfs, and rootfs
- Embeds cluster-specific ignition configuration
- Includes Assisted Installer agent
- Registers nodes with Assisted Service
- HMC/PowerVM:
- Manages LPAR lifecycle
- Executes lpar_netboot for PXE boot
- Provides hardware virtualization
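The HAProxy role listed under the bastion components can be sketched as a TCP passthrough config. The backend IP below is the SNO example address from this document and is illustrative; a multi-node cluster would list one `server` line per master (API) and per worker (ingress).

```
# /etc/haproxy/haproxy.cfg (abridged, illustrative) — raw TCP passthrough,
# since OpenShift terminates TLS itself on both the API and the routers.
frontend api
    bind *:6443
    mode tcp
    default_backend api_servers

backend api_servers
    mode tcp
    balance roundrobin
    server node-0 9.47.87.82:6443 check

frontend ingress_https
    bind *:443
    mode tcp
    default_backend ingress_https_servers

backend ingress_https_servers
    mode tcp
    server node-0 9.47.87.82:443 check
```

An equivalent frontend/backend pair on port 80 handles plain-HTTP ingress traffic.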
⚡ Automation with Ansible
The repository provides Ansible playbooks to automate most of these steps:
- setup-bastion.yaml: Automates Phase 1 (bastion setup)
- step-1-setup-services.yaml: Configures dnsmasq, httpd, HAProxy
- step-2-create-ignition.yaml: Downloads and extracts discovery ISO
- step-3-netboot-nodes.yaml: Executes lpar_netboot commands
- step-4-monitor.yaml: Monitors installation progress
Usage: `ansible-playbook -e @vars.yaml playbooks/main.yaml`
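The `-e @vars.yaml` flag feeds the playbooks their cluster-specific inputs. The snippet below is a hypothetical sketch of what such a file could contain — the variable names are illustrative, not the repository's actual schema, so check the playbooks for the real variable names before use.

```
# vars.yaml (hypothetical sketch; values from this document's SNO example)
cluster_name: sno
base_domain: ocp.io
bastion_ip: 9.47.87.83
gateway: 9.47.95.254
nodes:
  - name: sno
    ip: 9.47.87.82
    mac: fa:b0:45:27:43:20
```

Keeping all node IPs and MACs in one vars file is what lets step 14's grub.cfg generation and the lpar_netboot invocations stay in sync.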
🆚 Assisted Installer vs Agent-Based Installer
| Aspect | Assisted Installer | Agent-Based Installer |
|---|---|---|
| Interface | Web UI + REST API | CLI only |
| Internet Required | Yes (console.redhat.com) | No (fully disconnected) |
| Monitoring | Real-time in web UI | CLI commands only |
| Ease of Use | ⭐⭐⭐⭐⭐ (Very easy) | ⭐⭐⭐ (Moderate) |
| Best For | Connected environments, first-time users | Air-gapped, automated deployments |
More diagrams coming soon: Additional architecture diagrams for Agent-based installer and other deployment scenarios will be added here.