Manual Quick Start Guide

Single Node OpenShift (SNO) Installation on PowerVM – Step-by-Step

ocp4-ai-power  ·  ppc64le  ·  PXE / netboot  ·  Manual
⚙ Looking for the automated guide? This is the manual step-by-step guide. If you prefer to use the Ansible automation included in this repository, see the Automated Quick Start Guide.
About this guide: This quickstart walks you through manually installing a Single Node OpenShift (SNO) cluster on a PowerVM LPAR using the PXE/netboot method. Each configuration step is explained in detail. A bastion host serves DNS, DHCP, HTTP, and TFTP. The SNO node boots from the network, downloads the ignition config, and installs RHCOS automatically.
Steps in this guide:

  1. Clone & Prepare
  2. Install Packages
  3. Configure dnsmasq
  4. PXE GRUB
  5. Download RHCOS
  6. Create Ignition
  7. Network Boot
  8. Monitor

Prerequisites & Hardware Requirements

You need two VMs (LPARs): one bastion host and one SNO node. Both must have internet access and static IP addresses assigned before starting.

VM / LPAR   vCPU   Memory   Storage   Notes
Bastion     2      8 GB     50 GB     RHEL 8/9 or CentOS; must run as root
SNO Node    8      16 GB    120 GB    Static IP; PXE-bootable NIC

Required Access & Accounts

  • Root access on the bastion host
  • HMC access (for lpar_netboot) or SMS console access on the SNO LPAR
  • Red Hat pull secret – download from cloud.redhat.com
  • Internet access from both VMs (or a configured mirror registry)

Services Configured on Bastion

  • DNS – via dnsmasq
  • DHCP – via dnsmasq
  • TFTP / PXE – via dnsmasq
  • HTTP – via httpd (port 8000)
Important: The bastion's SELinux must be set to permissive mode. Edit /etc/selinux/config, set SELINUX=permissive, then reboot the bastion before proceeding.
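The switch can be staged in one go: setenforce 0 changes the running mode immediately, and the config edit makes it stick across the reboot. A sketch of the edit, demonstrated on a temporary copy of the file so the snippet can be run anywhere:

```shell
# On the real bastion (as root):
#   setenforce 0                                              # runtime switch
#   sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# Demonstrated here on a temp copy of the config file:
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"        # now reads SELINUX=permissive
rm -f "$cfg"
```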

Sample Network Values Used in This Guide

  • SNO node IP: 9.47.87.82
  • Bastion IP: 9.47.87.83
  • Gateway: 9.47.95.254
  • Netmask: 255.255.240.0
  • Domain: ocp.io
  • Cluster ID: sno
  • SNO MAC address: fa:b0:45:27:43:20
  • Machine network CIDR: 9.47.80.0/20

Replace these values with your actual network configuration throughout this guide.
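When you substitute your own values, it is worth sanity-checking that the node and bastion IPs actually fall inside the machine network CIDR before editing any configs. A small pure-bash check (ip_to_int and in_cidr are illustrative helpers, not part of the repository; the values are the samples above):

```shell
# Illustrative helpers, not part of this repository.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_cidr() {   # usage: in_cidr <ip> <network/prefix>
  local ip net prefix mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  prefix=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
in_cidr 9.47.87.82 9.47.80.0/20 && echo "SNO IP is inside 9.47.80.0/20"
in_cidr 9.47.87.83 9.47.80.0/20 && echo "bastion IP is inside 9.47.80.0/20"
```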

Step 1 – Clone the Repository & Prepare the Bastion

Clone the ocp4-ai-power repository to your bastion host:

git clone https://github.com/ocp-power-automation/ocp4-ai-power.git
cd ocp4-ai-power

Generate SSH Key

Create the SSH key that will be used for OCP installation:

ssh-keygen -t rsa -b 2048 -N '' -C 'BASTION-SSHKEY' -f ~/.ssh/id_rsa

Save Pull Secret

Download your pull secret from cloud.redhat.com and save it on the bastion:

mkdir -p ~/.openshift
# Paste your pull secret content into this file
vi ~/.openshift/pull-secret

Setup Passwordless SSH to HMC

Configure passwordless access from the bastion to your HMC (required for lpar_netboot):

# Replace <hmc-user> with your HMC user and <hmc-ip> with your HMC IP.
# The $(cat ...) expands on the bastion before the command is sent to the HMC.
KEY=$(cat ~/.ssh/id_rsa.pub)
ssh <hmc-user>@<hmc-ip> "mkauthkeys -a \"$KEY\""

Step 2 – Install Required Packages on Bastion

All commands in this step are bundled in the prepare-bastion.sh script included in this repository. Run it directly on the bastion as root:

sudo bash prepare-bastion.sh
Or follow the manual steps below if you prefer to run each command individually.

Manual steps – run the following on the bastion (supports RHEL 8/9 and CentOS 8/9):

# Detect OS and install Ansible
DISTRO=$(lsb_release -ds 2>/dev/null || cat /etc/*release 2>/dev/null | head -n1 || echo "")
OS_VERSION=$(lsb_release -rs 2>/dev/null || grep "VERSION_ID" /etc/*release 2>/dev/null \
  | awk -F= '{print $2}' | tr -d '"' || echo "")

if [[ "$DISTRO" != *CentOS* ]]; then
  # Red Hat
  if [[ $(cat /etc/redhat-release | sed 's/[^0-9.]*//g') > 8.5 ]]; then
    sudo subscription-manager repos --enable codeready-builder-for-rhel-9-ppc64le-rpms
    sudo yum install -y ansible-core
  else
    sudo subscription-manager repos --enable ansible-2.9-for-rhel-8-ppc64le-rpms
    sudo yum install -y ansible
  fi
else
  # CentOS
  if [[ $OS_VERSION != "8"* ]]; then
    sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
    sudo yum install -y ansible-core
  else
    sudo yum install -y epel-release epel-next-release
    sudo yum config-manager --set-enabled powertools
    sudo yum install -y ansible
  fi
fi

# Install required Ansible collections
sudo ansible-galaxy collection install community.crypto --upgrade
sudo ansible-galaxy collection install community.general --upgrade
sudo ansible-galaxy collection install ansible.posix --upgrade
sudo ansible-galaxy collection install kubernetes.core --upgrade

# Install required system packages
sudo yum install -y wget jq git net-tools vim tar unzip python3 python3-pip \
  python3-jmespath coreos-installer grub2-tools-extra bind-utils

# Create PXE TFTP directory structure
sudo grub2-mknetdir --net-directory=/var/lib/tftpboot
Note: The grub2-mknetdir command creates the PowerPC GRUB2 network boot files under /var/lib/tftpboot/boot/grub2/powerpc-ieee1275/.

Step 3 – Configure dnsmasq (DNS / DHCP / PXE)

Install dnsmasq to provide DNS, DHCP, and PXE services for the SNO node:

sudo yum install -y dnsmasq

Create /etc/dnsmasq.conf

##################################
# DNS
##################################
bogus-priv
enable-ra
bind-dynamic
no-hosts
expand-hosts

interface=<bastion-interface>        # e.g. env32
domain=sno.ocp.io
local=/sno.ocp.io/
address=/apps.sno.ocp.io/9.47.87.82  # SNO node IP
server=9.9.9.9

addn-hosts=/etc/dnsmasq.d/addnhosts

##################################
# DHCP
##################################
dhcp-ignore=tag:!known
dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases

dhcp-range=9.47.87.82,static

dhcp-option=option:router,9.47.95.254
dhcp-option=option:netmask,255.255.240.0
dhcp-option=option:dns-server,9.47.87.83  # Bastion IP

dhcp-host=fa:b0:45:27:43:20,sno-82,9.47.87.82,infinite

###############################
# PXE
###############################
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-boot=boot/grub2/powerpc-ieee1275/core.elf

Create /etc/dnsmasq.d/addnhosts

9.47.87.82 sno-82 api api-int

Enable and Start dnsmasq

sudo systemctl enable --now dnsmasq
sudo systemctl status dnsmasq
Tip: Replace all IP addresses, MAC address, and interface name with your actual values. The interface= line must match the bastion's network interface name (e.g. env32, eth0).

Step 4 – Configure PXE GRUB Boot Menu

Create the GRUB configuration file that tells the SNO node how to boot RHCOS and load the ignition config.

Create /var/lib/tftpboot/boot/grub2/grub.cfg

default=0
fallback=1
timeout=1

if [ "${net_default_mac}" == "fa:b0:45:27:43:20" ]; then
default=0
fallback=1
timeout=1
menuentry "CoreOS (BIOS)" {
   echo "Loading kernel"
   linux "/rhcos/kernel" ip=dhcp rd.neednet=1 \
     ignition.platform.id=metal ignition.firstboot \
     coreos.live.rootfs_url=http://9.47.87.83:8000/install/rootfs.img \
     ignition.config.url=http://9.47.87.83:8000/ignition/sno.ign

   echo "Loading initrd"
   initrd  "/rhcos/initramfs.img"
}
fi
Note: Replace fa:b0:45:27:43:20 with the actual MAC address of your SNO LPAR's NIC, and 9.47.87.83 with your bastion's IP address.
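A mismatch between the MAC in dnsmasq's dhcp-host line and the one tested in grub.cfg is an easy way to end up with a node that gets a DHCP lease but never loads the menu entry. A quick comparison sketch (normalize_mac is an illustrative helper; the addresses are this guide's samples):

```shell
# Illustrative consistency check, not part of this repository.
normalize_mac() {
  local m=${1//-/:}                    # accept dash- or colon-separated input
  tr '[:upper:]' '[:lower:]' <<< "$m"
}
dnsmasq_mac=$(normalize_mac "fa:b0:45:27:43:20")   # from dhcp-host= in dnsmasq.conf
grub_mac=$(normalize_mac "FA:B0:45:27:43:20")      # from the if-test in grub.cfg
if [ "$dnsmasq_mac" = "$grub_mac" ]; then
  echo "MAC addresses match: $grub_mac"
else
  echo "MAC mismatch: $dnsmasq_mac vs $grub_mac" >&2
fi
```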

Step 5 – Download RHCOS Images & Setup httpd

Download the RHCOS kernel, initramfs, and rootfs images for ppc64le. These are served to the SNO node during PXE boot.

# Set the RHCOS base URL (adjust version to match your target OCP version)
export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest

# Create required directories
sudo mkdir -p /var/lib/tftpboot/rhcos
sudo mkdir -p /var/www/html/install
sudo mkdir -p /var/www/html/ignition

# Download PXE kernel and initramfs (served via TFTP)
cd /var/lib/tftpboot/rhcos
sudo wget ${RHCOS_URL}/rhcos-live-kernel-ppc64le -O kernel
sudo wget ${RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -O initramfs.img

# Download rootfs image (served via HTTP)
cd /var/www/html/install
sudo wget ${RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -O rootfs.img
Version: Check the latest available RHCOS version for ppc64le at mirror.openshift.com. Match the RHCOS version to your target OpenShift version.

Install and Configure httpd

Install Apache httpd to serve the rootfs and ignition files on port 8000:

sudo yum install -y httpd

# Configure httpd to listen on port 8000
sudo sed -i 's/^Listen 80$/Listen 8000/' /etc/httpd/conf/httpd.conf

# Fix SELinux context on web root
sudo restorecon -vR /var/www/html || true

sudo systemctl enable --now httpd

Step 6 – Create the SNO Ignition Configuration

Download openshift-install

# Download the openshift-install binary for ppc64le (adjust version as needed)
wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz
tar xzvf openshift-install-linux-4.12.0.tar.gz

Create the Work Directory

mkdir -p ~/sno-work
cd ~/sno-work

Create ~/sno-work/install-config.yaml

apiVersion: v1
baseDomain:                   # e.g. ocp.io
compute:
- name: worker
  replicas: 0                        # No workers for SNO
controlPlane:
  name: master
  replicas: 1                        # Single control plane node
metadata:
  name:                # e.g. sno
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr:      # e.g. 9.47.80.0/20
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk:           # e.g. /dev/sda
pullSecret: ''
sshKey: |
  
Configuration Notes:
  • <base-domain> – Base domain, e.g. ocp.io
  • <cluster-name> – Cluster name, e.g. sno. Full domain will be sno.ocp.io
  • <machine-network-cidr> – Subnet containing your SNO node's IP, e.g. 9.47.80.0/20
  • <installation-disk> – Installation disk, e.g. /dev/sda or /dev/disk/by-id/scsi-36005076d0281005ef000000000026803
  • <pull-secret> – Contents of ~/.openshift/pull-secret
  • <ssh-public-key> – Contents of ~/.ssh/id_rsa.pub
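The pull secret and SSH key are the two values most easily mangled by copy-paste. One way to append them from the files created in step 1, sketched here with temporary stand-in files so the snippet runs anywhere:

```shell
# Sketch with temporary stand-in files; on the bastion, read the real
# ~/.openshift/pull-secret and ~/.ssh/id_rsa.pub instead.
workdir=$(mktemp -d)
echo '{"auths":{}}' > "$workdir/pull-secret"                    # stand-in secret
echo 'ssh-rsa AAAAB3... BASTION-SSHKEY' > "$workdir/id_rsa.pub" # stand-in key
{
  printf "pullSecret: '%s'\n" "$(cat "$workdir/pull-secret")"
  printf 'sshKey: |\n  %s\n' "$(cat "$workdir/id_rsa.pub")"
} >> "$workdir/install-config.yaml"
cat "$workdir/install-config.yaml"
```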

Generate the Ignition File

# Generate the single-node ignition config (run this from the directory
# where the openshift-install binary was extracted)
./openshift-install create single-node-ignition-config --dir ~/sno-work

# Copy the generated ignition file to the HTTP server directory
sudo cp ~/sno-work/bootstrap-in-place-for-live-iso.ign /var/www/html/ignition/sno.ign
sudo restorecon -vR /var/www/html || true
Ready! Verify the HTTP server is accessible before proceeding:
curl http://9.47.87.83:8000/ignition/sno.ign | head -c 100

Step 7 – Network Boot the SNO Node

There are two ways to PXE-boot the SNO LPAR on PowerVM:

Option A – Using lpar_netboot on HMC (Recommended)

Run the following command from the HMC command line:

lpar_netboot -i -D -f -t ent \
  -m <sno-mac> \
  -s auto -d auto \
  -S <bastion-ip> \
  -C <sno-ip> \
  -G <gateway> \
  <lpar-name> default_profile <managed-system>

  • -m <sno-mac> – MAC address of the SNO LPAR's NIC, e.g. fa:b0:45:27:43:20
  • -S <bastion-ip> – Bastion IP address (PXE server), e.g. 9.47.87.83
  • -C <sno-ip> – SNO node IP address, e.g. 9.47.87.82
  • -G <gateway> – Network gateway, e.g. 9.47.95.254
  • <lpar-name> – Name of the SNO LPAR in HMC
  • <managed-system> – Name of the CEC/system hosting the LPAR

Option B – Using SMS Console (Interactive)

Access the LPAR's SMS (System Management Services) console and manually select the network boot device. Refer to the HMC documentation for detailed SMS navigation steps.

What happens next: The SNO LPAR boots from the network, downloads the RHCOS kernel and initramfs via TFTP, then fetches the rootfs and ignition config via HTTP from the bastion. RHCOS is written to the installation disk and the node reboots automatically to complete the OpenShift installation.
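While the install runs, you can watch from the bastion for the node's API port to open. A hypothetical polling helper using bash's built-in /dev/tcp (wait_for_port is illustrative, not part of the repository; 6443 is the Kubernetes API port, 9.47.87.82 the sample SNO IP):

```shell
# Illustrative helper, not part of this repository.
wait_for_port() {                 # usage: wait_for_port <host> <port> [tries] [delay]
  local host=$1 port=$2 tries=${3:-90} delay=${4:-10} i
  for (( i = 0; i < tries; i++ )); do
    if (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; then
      echo "$host:$port is reachable"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for $host:$port" >&2
  return 1
}
# Example (on the bastion):
#   wait_for_port 9.47.87.82 6443
```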

Step 8 – Monitor Installation Progress

After the SNO LPAR boots from PXE, monitor the installation from the bastion using openshift-install:

# Step 1: Wait for bootstrap to complete (typically 15-20 minutes)
# (run from the directory where the openshift-install binary was extracted)
./openshift-install wait-for bootstrap-complete --dir ~/sno-work --log-level=info

# Step 2: After bootstrap completes, wait for full installation
./openshift-install wait-for install-complete --dir ~/sno-work --log-level=info
Expected duration: The full SNO installation typically takes 45–90 minutes depending on hardware and network speed.

Check Status with oc

You can also monitor progress using the OpenShift CLI:

# Set kubeconfig
export KUBECONFIG=~/sno-work/auth/kubeconfig

# Check node status
oc get nodes

# Check overall cluster version / installation progress
oc get clusterversion

# Check cluster operators (all should be Available=True when done)
oc get co

# Check pod status across all namespaces
oc get pod -A
The installation is complete when oc get clusterversion shows Progressing=False and Available=True, and all cluster operators report Available=True.

Cluster Access

After a successful installation, cluster credentials are stored in ~/sno-work/auth/:

  • kubeconfig – kubeconfig file for CLI access
  • kubeadmin-password – password for the kubeadmin user

CLI Access

# Set kubeconfig
export KUBECONFIG=~/sno-work/auth/kubeconfig

# Or login with kubeadmin credentials
oc login https://api.sno.ocp.io:6443 \
  -u kubeadmin \
  -p $(cat ~/sno-work/auth/kubeadmin-password)

# Verify cluster info
oc cluster-info
oc get nodes

Web Console

Open the following URL in your browser and login with user kubeadmin:

https://console-openshift-console.apps.<cluster-name>.<base-domain>
# Example: https://console-openshift-console.apps.sno.ocp.io

Download oc CLI

The oc client for ppc64le can be downloaded from mirror.openshift.com (pub/openshift-v4/ppc64le/clients/ocp/); match the client version to your cluster version.

Security: Store the cluster credentials securely. Consider removing kubeadmin-password from the bastion after saving it to a secure location.
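One way to act on that advice with coreutils' shred, demonstrated on a temp file so the snippet runs anywhere (on the bastion, the target would be ~/sno-work/auth/kubeadmin-password, after copying it somewhere safe):

```shell
# Sketch: after saving the password to a secure location, remove it
# from the bastion. Demonstrated on a stand-in temp file.
f=$(mktemp)
echo "example-password" > "$f"
shred -u "$f"                      # overwrite the contents, then unlink
[ ! -e "$f" ] && echo "credential file removed"
```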
Further Reading:  Full SNO Installation Guide  ·  Agent-based Installer Guide  ·  Assisted Installer Guide  ·  Red Hat SNO Documentation