Tutorial: Deploy a Scalable Workload

This tutorial deploys a horizontally scalable web tier using the ScalableWorkload spell: an OCI Load Balancer in the public subnet, an instance pool in the private subnet, and CPU-based autoscaling.

What you will build:

Internet (HTTP port 80)
   │
   ▼
┌────────────────────────────────────────────────────┐
│ Public subnet  ← Internet Gateway                  │
│  OCI Load Balancer                                 │
└────────────────────────────────────────────────────┘
          │  health checks + traffic
          ▼
┌────────────────────────────────────────────────────┐
│ Private subnet  ← NAT GW + Service GW              │
│  Instance pool (1–3 nginx VMs)                     │
│  Autoscales on CPU: out >80%, in <20%              │
└────────────────────────────────────────────────────┘

What gets created: 1 VCN, 1 load balancer, 1 instance configuration, 1 instance pool, 1 autoscaling policy, security list rules for HTTP and SSH.


Prerequisites

  • Pulumi CLI installed and a Pulumi backend configured
  • OCI credentials configured with permission to create resources in the target compartment
  • Python 3 with the example's dependencies installed
  • The OCID of a compartment to deploy into

Step 1 — Initialise the stack

cd examples/autoscale

pulumi stack init dev
pulumi config set compartment_ocid ocid1.compartment.oc1..aaaa...

Optionally provide an SSH key (skip to auto-generate):

pulumi config set ssh_key "$(cat ~/.ssh/id_rsa.pub)"

Step 2 — Walk through the code

Open examples/autoscale/__main__.py.
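The script reads its settings from the stack configuration set in Step 1. A typical preamble looks like the following sketch (variable names are assumptions, not necessarily the example's exact code):

```python
import pulumi

# Read the stack configuration set with `pulumi config set` in Step 1
config = pulumi.Config()
compartment_id = config.require("compartment_ocid")
ssh_key = config.get("ssh_key")  # optional; a key is auto-generated when absent
```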

2a. Prepare a cloud-init script

The example passes a plain cloud-init script to install nginx and create a health check endpoint:

user_data_script = """#!/bin/bash
set -e
yum install -y --disablerepo='*' --enablerepo='ol8_appstream,ol8_baseos_latest' nginx
systemctl enable nginx && systemctl start nginx
firewall-cmd --permanent --add-service=http && firewall-cmd --reload
echo "OK" > /usr/share/nginx/html/health
"""

CloudSpells base64-encodes user_data internally before passing it to OCI, so pass the plain string or bytes directly; do not encode it yourself.
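OCI's launch API expects user data as a base64 string, so the internal encoding step is equivalent to this (shown with a short stand-in script):

```python
import base64

# Stand-in for the tutorial's cloud-init script
user_data_script = "#!/bin/bash\necho OK > /usr/share/nginx/html/health\n"

# What CloudSpells does on your behalf: encode the plain script to base64
encoded = base64.b64encode(user_data_script.encode()).decode()

# The encoding round-trips back to the original script
assert base64.b64decode(encoded).decode() == user_data_script
```

Because the component handles this step, double-encoding (passing an already-base64 string) would produce a broken boot script.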

The load balancer health check polls /health on port 80. Instances that fail health checks are removed from the rotation.

2b. Create the VCN and workload

from cloudspells.providers.oci.network import Vcn
from cloudspells.providers.oci.autoscale import ScalableWorkload

vcn = Vcn(
    name="scalable",
    compartment_id=compartment_id,
)

scalable_pool = ScalableWorkload(
    name="web-pool",
    compartment_id=compartment_id,
    vcn=vcn,
    ssh_public_key=config.get("ssh_key"),
    user_data=user_data_script,
    max_instances=3,
)

ScalableWorkload encapsulates the entire tier:

  • Load balancer in the public subnet, listening on port 80
  • Instance pool in the private subnet, bootstrapped with your user_data
  • CPU autoscaling: scale out when CPU > 80%, scale in when CPU < 20%
  • 300-second cooldown between scaling events
  • Defaults to VM.Standard.E4.Flex with 1 OCPU / 16 GB RAM
  • Minimum 1 instance; set max_instances to control the ceiling

Step 3 — Deploy

pulumi preview   # verify the plan
pulumi up

Deployment takes 5–10 minutes (instance pool provisioning is the slow step).


Step 4 — Test the load balancer

LB_IP=$(pulumi stack output web_pool_lb_ip)
curl http://$LB_IP/

Each response shows the hostname of the instance that handled the request. Reload a few times to see requests distributed across pool members.

Check the health endpoint:

curl http://$LB_IP/health
# OK

Step 5 — Inspect outputs

pulumi stack output

Key outputs:

Output                              Description
web_pool_lb_ip                      Public IP of the load balancer
web_pool_lb_id                      Load balancer OCID
web_pool_pool_id                    Instance pool OCID
web_pool_ssh_private_key (secret)   Only present when auto-generated
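These stack outputs are registered in __main__.py with pulumi.export; a sketch of what that looks like (the attribute names on ScalableWorkload are assumptions for illustration):

```python
import pulumi

# Register stack outputs so `pulumi stack output` can read them
# (attribute names assumed for illustration)
pulumi.export("web_pool_lb_ip", scalable_pool.lb_ip)
pulumi.export("web_pool_lb_id", scalable_pool.lb_id)
pulumi.export("web_pool_pool_id", scalable_pool.pool_id)
```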

Scaling policy reference

Metric            Threshold   Action
CPU utilisation   > 80%       Add 1 instance
CPU utilisation   < 20%       Remove 1 instance

Cooldown: 300 seconds between scaling events
Minimum instances: 1
Maximum instances: set via max_instances
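Ignoring the cooldown, the policy's decision logic can be sketched in plain Python (an illustration of the thresholds above, not CloudSpells code):

```python
def scaling_decision(cpu_percent, current, minimum=1, maximum=3):
    """Return the target pool size for a CPU reading, clamped to [minimum, maximum]."""
    if cpu_percent > 80:      # scale out above 80% CPU
        return min(current + 1, maximum)
    if cpu_percent < 20:      # scale in below 20% CPU
        return max(current - 1, minimum)
    return current            # 20–80%: hold steady

print(scaling_decision(90, 1))  # 2: scale out
print(scaling_decision(10, 1))  # 1: already at the minimum
print(scaling_decision(50, 2))  # 2: no change
```

The clamping is why the pool never shrinks below 1 instance or grows past max_instances, regardless of sustained load.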

SSH access to pool members

Pool instances are in the private subnet and have no public IP. Use the OCI Bastion service or a jump host in the public subnet.

If you need direct SSH access during development, add a bastion to the same stack:

from cloudspells.providers.oci.bastion import Bastion

bastion = Bastion(
    name="mgmt",
    compartment_id=compartment_id,
    vcn=vcn,
)

See the Bastion how-to guide for session creation steps.


Teardown

pulumi destroy

What's next