Automated Quick Start Guide

Single Node OpenShift (SNO) Installation on PowerVM – Using Ansible Automation

ocp4-ai-power  ·  ppc64le  ·  Ansible  ·  Automated
⚙ Manual guide available: This guide uses the Ansible automation built into this repository. If you prefer to configure each service manually, see the Manual Step-by-Step Quick Start Guide.
What the automation does for you: The Ansible playbooks in ansible-bastion/ automatically set up all required bastion services (DNS, DHCP, TFTP, HTTP, HAProxy), download RHCOS and OCP binaries, generate the SNO ignition config, trigger the network boot via HMC, and monitor the installation to completion — all from a single command.
Steps: 1 Clone Repo → 2 Prepare Bastion → 3 Configure vars.yaml → 4 Run Playbook → 5 Monitor → 6 Access Cluster

Prerequisites & Hardware Requirements

You need two VMs (LPARs): one bastion host and one SNO node. Both must have internet access and static IP addresses assigned before starting.

| VM / LPAR | vCPU | Memory | Storage | Notes |
| --- | --- | --- | --- | --- |
| Bastion | 2 | 8 GB | 50 GB | RHEL 8/9 or CentOS; must run as root; passwordless SSH to HMC |
| SNO Node | 8 | 16 GB | 120 GB | Static IP; PXE-bootable NIC; HMC-managed LPAR |

Required Access & Accounts

  • Root access on the bastion host
  • Passwordless SSH from bastion to HMC (required for lpar_netboot)
  • Red Hat pull secret – download from cloud.redhat.com
  • Internet access from both VMs (or a configured mirror registry)
Important: The bastion's SELinux must be set to permissive mode. Edit /etc/selinux/config, set SELINUX=permissive, then reboot before running the playbook.
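If you want to script this check before running the playbook, here is a minimal sketch (the helper function is illustrative, not part of the repository; it only parses the config file so it can be tested against any path):

```shell
# Print the SELINUX= value from a config file, ignoring comments. Point it
# at /etc/selinux/config on the bastion.
selinux_mode() {
  awk -F= '/^SELINUX=/ { print $2 }' "$1"
}

# On the bastion (as root), switch to permissive:
#   sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
#   setenforce 0   # applies immediately; the config edit persists after reboot
```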

Ansible Playbook Flow

The main playbook (playbooks/main.yaml) runs these roles in sequence:

  • services – DNS/DHCP/PXE/HTTP
  • ignition – install-config + SNO ign
  • netboot – lpar_netboot via HMC
  • update-inventory – dynamic inventory
  • sno-set-boot-order – disk boot after install
  • monitor – wait for completion

You can also run each step individually using the step-N-*.yaml playbooks.

Step 1: Clone the Repository & Prepare the Bastion

Clone the ocp4-ai-power repository to your bastion host:

git clone https://github.com/ocp-power-automation/ocp4-ai-power.git
cd ocp4-ai-power/ansible-bastion

Run the Bastion Preparation Script

The prepare-bastion.sh script (included in the repository root) installs Ansible, the required Ansible collections, and all other required system packages, and sets up the PXE TFTP directory:

cd ..
sudo bash prepare-bastion.sh
cd ansible-bastion
What prepare-bastion.sh does:
  • Detects OS (RHEL 8/9 or CentOS 8/9) and installs Ansible
  • Installs required Ansible collections (community.crypto, community.general, ansible.posix, kubernetes.core)
  • Installs system packages: wget, jq, coreos-installer, grub2-tools-extra, etc.
  • Runs grub2-mknetdir to create the PowerVM PXE TFTP directory
  • Installs and configures httpd on port 8000
  • Installs dnsmasq
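After the script finishes, it is worth confirming that httpd is actually listening on port 8000. A small sketch (the helper is illustrative; it just parses `ss -ltn` output from stdin):

```shell
# Succeeds if the given TCP port appears as a listener in `ss -ltn` output.
port_listening() {
  grep -qE "[:.]$1([[:space:]]|$)"
}

# Usage on the bastion:
#   ss -ltn | port_listening 8000 && echo "httpd is listening on 8000"
```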

Set Up Passwordless SSH to the HMC

The Ansible playbook uses lpar_netboot via SSH to the HMC. Configure passwordless access:

# Generate SSH key if not already present
ssh-keygen -t rsa -b 2048 -N '' -C 'BASTION-SSHKEY' -f ~/.ssh/id_rsa

# Add the public key to the HMC (replace <hmc-user> and <hmc-address> with your HMC login and address)
ssh <hmc-user>@<hmc-address> "mkauthkeys -a \"$(cat ~/.ssh/id_rsa.pub)\""
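Before running the playbook, confirm the key was accepted and SSH is truly non-interactive. A hedged sketch (the helper just builds the command string; hscroot@9.1.2.3 is the example HMC address used elsewhere in this guide):

```shell
# Build a non-interactive SSH check command for the HMC. BatchMode makes ssh
# fail fast instead of prompting if the key was not installed correctly;
# `lshmc -V` prints the HMC version, proving remote command execution works.
hmc_check_cmd() {
  printf 'ssh -o BatchMode=yes -o ConnectTimeout=10 %s lshmc -V' "$1"
}

# Usage:
#   eval "$(hmc_check_cmd hscroot@9.1.2.3)"
```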

Save Pull Secret

mkdir -p ~/.openshift
# Paste your pull secret from cloud.redhat.com
vi ~/.openshift/pull-secret
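A malformed paste of the pull secret is a common cause of ignition-generation failures, so it is worth a quick sanity check. A lightweight sketch (the helper is illustrative, not part of the repository):

```shell
# Lightweight sanity check: a non-empty file that mentions an "auths" key.
# For a stricter structural check, jq (installed by prepare-bastion.sh) works:
#   jq -e .auths ~/.openshift/pull-secret > /dev/null
validate_pull_secret() {
  [ -s "$1" ] && grep -q '"auths"' "$1"
}

# Usage:
#   validate_pull_secret ~/.openshift/pull-secret && echo "pull secret looks OK"
```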

Step 2: Configure vars.yaml

Copy the example vars file and edit it for your environment:

cp example-vars.yaml vars.yaml
vi vars.yaml
Important: Do NOT edit example-vars.yaml directly. Always work with your own copy (vars.yaml).

Key Variables for SNO Installation

Set install_type: sno and configure the following variables:

| Variable | Description | Required |
| --- | --- | --- |
| install_type | Set to sno for Single Node OpenShift installation | REQUIRED |
| helper.ipaddr | IP address of the bastion host | REQUIRED |
| helper.name | Hostname for the bastion (e.g. helper) | REQUIRED |
| dns.domain | Base domain for the cluster (e.g. ocp.io) | REQUIRED |
| dns.clusterid | Cluster name / ID (e.g. sno) | REQUIRED |
| dns.forwarder1 | Upstream DNS forwarder (e.g. 9.9.9.9) | REQUIRED |
| dhcp.router | Default gateway for the network | REQUIRED |
| dhcp.netmask | Network mask (e.g. 255.255.240.0) | REQUIRED |
| dhcp.subnet | Network subnet in CIDR (e.g. 9.47.80.0/20) | REQUIRED |
| masters[0].name | Hostname for the SNO node (e.g. sno-82) | REQUIRED |
| masters[0].ipaddr | Static IP address of the SNO node | REQUIRED |
| masters[0].macaddr | MAC address of the SNO node's NIC | REQUIRED |
| masters[0].pvmcec | CEC/system name in HMC where the SNO LPAR resides | REQUIRED |
| masters[0].pvmlpar | LPAR name in HMC for the SNO node | REQUIRED |
| masters[0].disk | Installation disk device (e.g. /dev/sda) | REQUIRED |
| pvm_hmc | HMC connection string (e.g. hscroot@9.1.2.3) | REQUIRED |
| rhcos_rhcos_base | RHCOS version base (e.g. 4.13) | REQUIRED |
| ocp_client_tag | OCP client version tag (e.g. latest-4.13) | REQUIRED |
| workdir | Working directory for generated files (e.g. /home/cloud-user/ocp4-sno) | REQUIRED |
| helper.networkifacename | Override bastion network interface name (auto-detected if not set) | OPTIONAL |

Minimal SNO vars.yaml Example

helper:
  name: "helper"
  ipaddr: "9.47.87.83"          # Bastion IP

dns:
  domain: "ocp.io"
  clusterid: "sno"
  forwarder1: "9.9.9.9"
  forwarder2: "8.8.4.4"

dhcp:
  router: "9.47.95.254"
  netmask: "255.255.240.0"
  subnet: "9.47.80.0/20"

masters:
  - name: "sno-82"
    ipaddr: "9.47.87.82"
    macaddr: "fa:b0:45:27:43:20"
    pvmcec: "Server-9080-HEX-SN786E288"
    pvmlpar: "sno-lpar-name"
    disk: /dev/sda

pvm_hmc: hscroot@9.1.2.3

install_type: sno

rhcos_arch: "ppc64le"
rhcos_rhcos_base: "4.13"
rhcos_rhcos_tag: "latest"

ocp_client_base: "ocp"
ocp_client_tag: "latest-4.13"

workdir: "/home/cloud-user/ocp4-sno"
log_level: info
Note: For SNO, define only one entry under masters (the SNO node). Do not define workers or bootstrap sections. See vars-doc.md for a full description of all available variables.
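With the example values above, the cluster endpoints follow OpenShift's standard naming scheme, which is what the bastion DNS ultimately has to resolve. A sketch of how the names are derived (they match the access URLs used later in this guide):

```shell
# Example values from the vars.yaml above.
clusterid="sno"
domain="ocp.io"

# Standard OpenShift endpoint names derived from dns.clusterid and dns.domain.
api_host="api.${clusterid}.${domain}"
console_host="console-openshift-console.apps.${clusterid}.${domain}"

echo "API:     https://${api_host}:6443"
echo "Console: https://${console_host}"
```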

Step 3: Run the Ansible Playbook

Option A – Full Automated Run (Recommended)

Run the complete end-to-end playbook from the ansible-bastion directory:

cd ansible-bastion
ansible-playbook -e @vars.yaml playbooks/main.yaml

This single command will:

  1. Set up all bastion services (dnsmasq, httpd, haproxy, tftp)
  2. Download RHCOS images and OCP binaries
  3. Generate the SNO install-config.yaml and ignition file
  4. Trigger network boot of the SNO LPAR via lpar_netboot on HMC
  5. Set the boot order to disk after installation
  6. Monitor the installation until completion
Expected duration: The full automated run typically takes 60–90 minutes depending on hardware and network speed.

Option B – Step-by-Step Playbooks

If you prefer to run each phase individually (e.g. for troubleshooting or partial re-runs):

cd ansible-bastion

# Step 1: Setup bastion services (DNS, DHCP, PXE, HTTP)
ansible-playbook -e @vars.yaml playbooks/step-1-setup-services.yaml

# Step 2: Generate ignition config
ansible-playbook -e @vars.yaml playbooks/step-2-create-ignition.yaml

# Step 3: Network boot the SNO LPAR via HMC
ansible-playbook -e @vars.yaml playbooks/step-3-netboot-nodes.yaml

# Step 4: Monitor installation progress
ansible-playbook -e @vars.yaml playbooks/step-4-monitor.yaml
Tip: Add -v or -vvv to any playbook command for verbose output. You can also set log_level: debug in vars.yaml for more detailed OpenShift installer logs.

Option C – Day 1 Setup Only (No Boot/Monitor)

To only set up bastion services and generate the ignition file (without triggering the network boot):

ansible-playbook -e @vars.yaml playbooks/setup-day1.yaml
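After the day-1 playbook finishes, you can confirm the generated artifacts landed in your workdir before triggering the network boot. A sketch (the helper and the example paths are assumptions based on the workdir used elsewhere in this guide):

```shell
# Succeeds only if every given file exists and is non-empty.
artifacts_present() {
  for f in "$@"; do
    [ -s "$f" ] || { echo "missing or empty: $f" >&2; return 1; }
  done
}

# Usage (example workdir from this guide):
#   artifacts_present /home/cloud-user/ocp4-sno/auth/kubeconfig \
#                     /home/cloud-user/ocp4-sno/auth/kubeadmin-password
```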

Step 4: Monitor Installation Progress

If you ran the full main.yaml playbook, monitoring is included automatically. If you ran step-by-step, use step-4-monitor.yaml or monitor manually:

Using openshift-install (from workdir)

# Wait for bootstrap to complete
cd /home/cloud-user/ocp4-sno   # your workdir
./openshift-install wait-for bootstrap-complete --log-level=info

# Wait for full installation to complete
./openshift-install wait-for install-complete --log-level=info

Using oc CLI

export KUBECONFIG=/home/cloud-user/ocp4-sno/auth/kubeconfig

# Check node status
oc get nodes

# Check cluster version / installation progress
oc get clusterversion

# Check cluster operators (all Available=True when done)
oc get co

# Check pod status
oc get pod -A
The installation is complete when oc get clusterversion shows Progressing=False and Available=True, and all cluster operators report Available=True.
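If you want a scriptable readiness signal, the `oc get co` output can be reduced to a single count. A sketch (the helper is illustrative; it reads `oc get co --no-headers` output, whose columns are NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE):

```shell
# Count operators that are not Available=True or are Degraded=True.
# Reads `oc get co --no-headers` output from stdin; prints 0 when ready.
not_ready_operators() {
  awk '$3 != "True" || $5 == "True" { n++ } END { print n + 0 }'
}

# Usage:
#   oc get co --no-headers | not_ready_operators
```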

Step 5: Cluster Access

After a successful installation, cluster credentials are stored in the auth/ directory under your workdir (e.g. /home/cloud-user/ocp4-sno/auth/):

  • kubeconfig – kubeconfig file for CLI access
  • kubeadmin-password – password for the kubeadmin user

CLI Access

# Set kubeconfig (adjust workdir path)
export KUBECONFIG=/home/cloud-user/ocp4-sno/auth/kubeconfig

# Or login with kubeadmin
oc login https://api.sno.ocp.io:6443 \
  -u kubeadmin \
  -p $(cat /home/cloud-user/ocp4-sno/auth/kubeadmin-password)

oc cluster-info
oc get nodes

Web Console

https://console-openshift-console.apps.sno.ocp.io

Log in with user kubeadmin and the password from auth/kubeadmin-password in your workdir.


Security: Store the cluster credentials securely. Consider removing kubeadmin-password from the bastion after saving it to a secure location.

Troubleshooting

Re-run a Specific Step

If a step fails, fix the issue and re-run only that step playbook:

# Re-run services setup only
ansible-playbook -e @vars.yaml playbooks/step-1-setup-services.yaml

# Re-run ignition generation only
ansible-playbook -e @vars.yaml playbooks/step-2-create-ignition.yaml

Check Ansible Logs

# Run with verbose output
ansible-playbook -e @vars.yaml playbooks/main.yaml -vvv 2>&1 | tee ansible-run.log

Check Service Status on Bastion

systemctl status dnsmasq
systemctl status httpd
# Verify DNS resolution
dig api.sno.ocp.io @localhost
# Verify HTTP server
curl http://localhost:8000/ignition/master-sno.ign | head -c 100

Check OpenShift Installer Logs

tail -n 50 /home/cloud-user/ocp4-sno/.openshift_install.log
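The installer log is verbose, so filtering for error-level lines usually surfaces the failure quickly. A sketch (the helper is illustrative; it just filters stdin for the `level=error` / `level=fatal` markers openshift-install writes):

```shell
# Keep only error/fatal lines from openshift-install log output (stdin).
install_errors() {
  grep -E 'level=(error|fatal)'
}

# Usage:
#   install_errors < /home/cloud-user/ocp4-sno/.openshift_install.log
```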
Further Reading:  Manual Step-by-Step Quick Start  ·  Full SNO Installation Guide  ·  vars.yaml Documentation  ·  Agent-based Installer Guide  ·  Red Hat SNO Documentation