# ProxmoxKVM WHMCS module

# Description

### Proxmox KVM module **[WHMCS](https://puqcloud.com/link.php?id=77)**

##### [Order now](https://puqcloud.com/whmcs-module-proxmox-kvm.php) | [Download](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/) | [FAQ](https://faq.puqcloud.com/)

## Proxmox KVM WHMCS Module

The PUQ Proxmox KVM module for WHMCS provides automated provisioning, management, and billing of KVM virtual machines on Proxmox VE clusters. The module consists of two parts: the **Server Module** that handles VM provisioning and the client/admin interface, and the **Addon Module** that manages IP address pools, DNS zones, cron tasks, and provides a centralized management dashboard.

The module allows your customers to provision and manage KVM virtual machines on your Proxmox server or Proxmox cluster. It exposes virtually all Proxmox functions directly from the WHMCS panel without forcing the user (or admin) to log in to Proxmox itself. This greatly simplifies customer account management, improves customer satisfaction, and reduces the number of support requests.

The module is intended for advanced users — installation and correct configuration require knowledge and experience in server and network administration. Although the documentation is detailed and allows the module to be installed by an intermediate user, we strongly suggest following the order defined in the installation chapter.

> **Changed in v3.0.** Starting from v3.0 the module ships its own dedicated addon module (`puq_proxmox_kvm`) — the separate **PUQ Customization** addon required for v1.3–v2.x is **no longer needed**. On first activation the new addon automatically migrates IP pools, DNS zones and VM records from the old `puq_customization` tables.

> **New in v3.3.** Full WHMCS Configurable Options coverage — 18 distinct options with clean plain-English names cover every per-service customisation (CPU, RAM, every disk size / bandwidth / IOPS, network bandwidth, IP counts, OS, backups, snapshots). Every option has a sensible default in Module Settings so products work out of the box. Disk downgrades are blocked by a three-layer safety net; selecting `Additional Disk = 0` cleanly removes the extra disk. See the [Changelog](02-changelog.md) and the [Configurable Options chapter](05-admin-area/03-configurable-options.md) for details.

### Installation service

If you don't feel comfortable performing the installation yourself, PUQcloud offers an installation service in two variants — **module installation and configuration** and **full implementation**. See [puqcloud.com](https://puqcloud.com/whmcs-module-proxmox-kvm.php) for details.

- - - - - -

## Main Features

- **Automated VM provisioning** — automatic deployment of KVM virtual machines via linked clone or full clone with state machine-based deploy pipeline
- **Post-clone migration** — automatic VM migration to the target node with correct storage after cloning, supporting cross-node deployment with local storage
- **VM lifecycle management** — create, suspend, unsuspend, terminate, reinstall, change package (upgrade/downgrade) with step-by-step state machine and retry logic
- **Firewall management** — configurable firewall policies, anti-spoofing IPSet rules, and client-side firewall rule management (add, delete, reorder)
- **Snapshot management** — create, rollback, and remove snapshots with configurable lifetime and automatic cleanup
- **Backup management** — manual and scheduled backups with restore capability, per-day schedule configuration
- **IPv4/IPv6 IP address pool management** — centralized IP allocation with per-server pools, automatic bridge/VLAN selection
- **DNS zone management** — Cloudflare and HestiaCP integration for forward and reverse DNS automation
- **noVNC web console** — secure browser-based console access via VNC proxy with one-time authentication links
- **Cloud-init support** — automatic hostname, IP, DNS, user, and password configuration via cloud-init
- **ISO mounting** — mount and unmount ISO images from Proxmox storage for OS installation
- **Resource usage charts** — real-time CPU, RAM, disk I/O, and network usage graphs with historical data
- **Usage-based billing** — network traffic metering (inbound/outbound) with WHMCS Metric Billing
- **Configurable client permissions** — per-product control over which features are available to customers (start, stop, noVNC, charts, reinstall, reset password, revDNS, ISO mount, firewall)
- **Multi-language support** — 25 languages including Arabic, Azerbaijani, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, French, German, Hebrew, Hungarian, Italian, Macedonian, Norwegian, Polish, Romanian, Russian, Spanish, Swedish, Turkish, and Ukrainian
- **Cron system** — flexible cron with two modes (WHMCS hook or standalone), configurable intervals per task, CLI tools, lock management
- **Admin product settings UI** — custom Bootstrap-based settings panel for VM configuration, storage, network, firewall, integrations, email templates, and client permissions

- - - - - -

## System Requirements

<table id="bkmrk-requirement-minimum-"><thead><tr><th>Requirement</th><th>Minimum</th></tr></thead><tbody><tr><td>WHMCS</td><td>8.x or higher</td></tr><tr><td>PHP</td><td>7.4, 8.1, or 8.2</td></tr><tr><td>Proxmox VE</td><td>8.x or higher</td></tr><tr><td>ionCube Loader</td><td>v13 or newer</td></tr></tbody></table>

- - - - - -

## Module Components

<table id="bkmrk-component-type-direc"><thead><tr><th>Component</th><th>Type</th><th>Directory</th></tr></thead><tbody><tr><td>Server Module</td><td>`puqProxmoxKVM`</td><td>`modules/servers/puqProxmoxKVM/`</td></tr><tr><td>Addon Module</td><td>`puq_proxmox_kvm`</td><td>`modules/addons/puq_proxmox_kvm/`</td></tr></tbody></table>

The **Addon Module** is required for the server module to function. It manages:

- IP address pools (IPv4 and IPv6)
- DNS zones (Cloudflare, HestiaCP)
- VM management dashboard
- Cron task orchestration
- Global settings (API timeouts, migration, cron intervals)

- - - - - -

## Links

- **Product page:** [https://puqcloud.com/whmcs-module-proxmox-kvm.php](https://puqcloud.com/whmcs-module-proxmox-kvm.php)
- **Documentation:** [https://doc.puq.info/books/proxmoxkvm-whmcs-module](https://doc.puq.info/books/proxmoxkvm-whmcs-module)
- **GitHub:** [https://github.com/puqcloud/WHMCS-Module-Proxmox-KVM](https://github.com/puqcloud/WHMCS-Module-Proxmox-KVM)
- **Support:** [https://puqcloud.com/submitticket.php](https://puqcloud.com/submitticket.php?step=2&deptid=1)
- **Community:** [https://community.puqcloud.com/](https://community.puqcloud.com/)

- - - - - -

## Screenshots

### Client Area — VM Overview

![Client area overview](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-jdso8rbz.png)

### Client Area — Firewall Rules

![Client area firewall](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-4cntxs7d.png)

### Admin Area — Product Configuration

![Admin product config](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-c3uo9gjl.png)

### Addon Module — Dashboard

![Addon dashboard](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-die3mufv.png)

# Changelog


- - - - - -

## v3.3.1 — 15-05-2026

Compatibility patch for **Proxmox VE 9.1.1** and templates that don't use VirtIO.

### Proxmox 9.x strict schema — fixed

Proxmox VE 9.x hardened its API schema validation. The "Set system disk I/O" step started failing with `HTTP 400: {"_root":"property is not defined in schema"}` whenever the module couldn't pin down the system disk from the VM config. The detection logic has been reworked to handle this reliably; see below.

### Robust system-disk detection

Any combination of the following templates is now handled correctly:

- **SCSI / SATA / IDE system disks**.
- **Legacy `bootdisk: scsi0`** parameter from older Proxmox versions.
- **Boot order with CD-ROM first** (`boot: order=ide2;scsi0`) — cloudinit / CD-ROM / network entries are skipped, the first real data disk wins.
- **Templates without a `boot:` line at all** — the first detected data disk is used as a fallback.

### Additional disk creation — works on any storage

Format detection for the new additional disk used to fail when the system disk volid contained extra dots (storage names with `.`, path-style volids on certain plugins). The parser now extracts only the trailing extension and validates against the Proxmox enum (`raw`, `qcow2`, `vmdk`, …), falling back to `raw` for block storage and volids without an extension. No more `format error: value 'qcow2/101/vm-101-disk-0' does not have a value in the enumeration` on creation.
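The extraction rule described above can be sketched as follows (Python for illustration only; the module itself is a PHP/WHMCS component, and `PVE_FORMATS` / `disk_format` are hypothetical names, not module APIs):

```python
# Illustrative sketch: take only the trailing extension of the volid's
# basename and validate it; anything else falls back to "raw".
PVE_FORMATS = {"raw", "qcow2", "vmdk"}

def disk_format(volid: str) -> str:
    # e.g. "nfs.backup:101/vm-101-disk-0.qcow2" -> basename "vm-101-disk-0.qcow2"
    name = volid.split(":", 1)[-1].rsplit("/", 1)[-1]
    if "." in name:
        ext = name.rsplit(".", 1)[-1].lower()
        if ext in PVE_FORMATS:
            return ext      # only the trailing extension counts
    return "raw"            # block storage or no extension: fall back
```

Note how a storage name containing a dot (`nfs.backup`) no longer leaks into the detected format, which is exactly the failure mode described above.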

### Bandwidth step — no more useless API calls

`Set system / additional disk I/O` now skips the API call entirely when all four bandwidth/IOPS values are `0`. Faster deploys, clean cron logs, and one less surface for schema-strict Proxmox versions to fail on.

- - - - - -

## v3.3 — 14-05-2026

Configurable Options release: full coverage with plain-English names, Module Settings defaults for every resource, disk shrink protection.

### Configurable Options

- **18 supported options** with plain-English names: `CPU Cores`, `RAM`, `System Disk`, `Additional Disk`, `System Disk Read/Write Bandwidth`, `System Disk Read/Write IOPS`, `Additional Disk Read/Write Bandwidth`, `Additional Disk Read/Write IOPS`, `Network Bandwidth`, `IPv4 Addresses`, `IPv6 Addresses`, `Backups`, `Snapshots`, `Operating System`.
- **11 are new in v3.3** — all disk size / bandwidth / IOPS parameters and Network Bandwidth.
- Legacy prefix names (`B|...`, `S|...`, `CPU|...`, `RAM|...`, `ipv4|...`, `ipv6|...`, `OS|...`) still work. When both forms exist on the same product, the plain name wins.

### Module Settings defaults

A product now works without any Configurable Options at all — every resource has a default in Module Settings, overridden only when a matching option is assigned to the service. New default fields: `Backups` and `Snapshots` count in VM Configuration; `IPv4 count` and `IPv6 count` in Network.

### Disk shrink protection

Proxmox cannot shrink disks. v3.3 blocks downgrades at three layers:

1. **Upgrade page** — smaller sub-options are visually disabled with `(downgrade not allowed)`, warning banner above the form, client-side submit guard.
2. **Change-package state machine** — backend skips the resize step with `skip — shrink not allowed by Proxmox`, VM is not stopped, snapshots are not removed.
3. **Post-backup-restore** — re-applying a smaller package size after restore is treated the same way.

### Additional Disk = 0 deletes the disk

Selecting `0` for Additional Disk now detaches the disk and purges the file from storage. Upgrade form labels the sub-option `(removes the existing disk — data will be lost)` and requires a JavaScript `confirm()` before submit. To disallow this for clients, omit the `0|...` sub-option from the Additional Disk dropdown.

### Faster change-package

The Start VM step at the end of a change-package now polls for up to 60 seconds in the same cron pass. Slower-starting VMs (cloud-init, large memory) no longer force a one-minute wait for the next cron tick.

- - - - - -

## v3.2 — 18-04-2026

A DNS, lifecycle and admin-UX release. Key goal: long-running operations (provisioning many DNS records, tearing down a service with large backups) must never time out the WHMCS request. Both **Set DNS records** and **Terminate** now run asynchronously in cron with live progress streamed to the cron output. Under the hood — full null-safety hardening across both modules for PHP 8.1/8.2 stability.

### PowerDNS provider

Native support for the **PowerDNS Authoritative Server REST API** as a third DNS provider (alongside Cloudflare and HestiaCP). Works out of the box with standard PowerDNS installations — configure `server` URL and `api_key`, the module takes care of the rest. Fully integrated with forward and reverse zones, automatic `ensureTrailingDot` / FQDN normalization, and PowerDNS-strict content formatting for PTR / CNAME / NS records.

### Asynchronous Set DNS records

The **Set DNS records** admin button used to call the Proxmox and DNS APIs synchronously — on a service with many reverse-DNS records it would exceed the PHP execution limit and fail with a blank error page. The button now queues the job by setting the VM status to `set_dns_records` and returns `success` instantly. The cron task picks it up on the next tick, runs `DeleteDNSRecords` + `SetDNSRecords`, and writes a full step-by-step log to the VM record.

### Asynchronous Terminate

Same treatment for service termination. When an admin clicks Terminate, the module sends a fire-and-forget "stop" request to Proxmox, sets `vm_status = 'terminate'`, returns `'success'` — and WHMCS marks the service Terminated immediately. The actual heavy work (graceful stop with polling, backups removal, DNS deletion, VM DELETE API call, DB cleanup) is done by cron.

Benefits:

- Client loses access to the service instantly; no waiting 30+ seconds for the admin action to complete.
- Large backup cleanup and bulk DNS deletion can't time out the HTTP request anymore.
- The VM starts shutting down in the background while the cron queue is still processing other VMs — by the time cron picks it up, it's often already stopped.

### Robust VM stop polling

The terminate flow previously used a fixed 15-second stop window which was insufficient for VMs with large memory footprints or QEMU guest-agent filesystem freeze. It now issues a single stop request and polls the remote status every 5 seconds for up to 120 seconds (graceful), then a 60-second force-stop window. Live progress is emitted every 15 seconds so admins see what's happening.
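The polling flow can be sketched like this (a Python sketch under stated assumptions; `send_stop`, `get_status`, and `log` are hypothetical callbacks standing in for the module's Proxmox API calls):

```python
import time

def stop_vm(send_stop, get_status, log,
            graceful=120, force=60, step=5, sleep=time.sleep):
    """Poll every `step` seconds; after the graceful window, force-stop."""
    send_stop()                              # single graceful stop request
    waited = 0
    while waited < graceful + force:
        if get_status() == "stopped":
            return True
        if waited == graceful:
            send_stop()                      # force-stop window begins
        if waited and waited % 15 == 0:
            log(f"still running, waited {waited}s / {graceful}s")
        sleep(step)
        waited += step
    return False
```

The key design point is a single stop request followed by status polling, rather than repeated stop calls inside a fixed window.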

### New `error_terminate` status + Reset / Delete Record actions

When termination fails (for example, the Proxmox API DELETE call returns an error), the VM no longer silently falls back to `remove`. It's now marked `error_terminate`:

- Cron **never automatically retries** — the admin has to act.
- The VM record is **not deleted** from the database and the IPs stay allocated in the pool, so they cannot be accidentally reassigned to another client while the failing VM is still present on Proxmox.
- A clear red error banner in the VM Log modal shows the failure reason.

The **Reset VM Status** modal has been expanded with `terminate` (retry) and `remove` (force-mark) options, plus an embedded reference table explaining when to use each status. A new **Delete Record** button (trash icon) appears for rows in `error_terminate` / `remove` status — it removes the row from `puqProxmoxKVM_vm_info` only, with an explicit confirmation dialog warning that Proxmox state is not touched.

### Live cron output

The standalone cron (`php cron.php`) now streams every individual step in real time with timestamps. During a deploy you can watch DNS records being created zone by zone, IP by IP, instead of waiting 60 seconds and seeing only the summary. During a terminate you see `stop request sent`, periodic `still running, waited Xs / 120s` heartbeats, each DNS deletion, the final `VM deleted`. Output is flushed after every line — nothing is buffered.

### DNS zones UX + credentials never leave the server

The DNS Zones page now shows three provider types (Cloudflare, HestiaCP, PowerDNS) with a single unified CRUD interface. Secret fields (API tokens, admin passwords, API keys) are **no longer returned to the browser** — the edit form shows `(unchanged — enter new to replace)` placeholders, and the save flow preserves the stored value if the field is left empty.

### IP Pools — automatic reverse-DNS zone hint

When configuring an IP Pool, the required reverse-DNS zone name for the prefix is now computed automatically and shown as a hint both in the add/edit modal and as a second line in the Addresses column of the pool list. For example, a `2001:db8::/120` pool shows:

```
2001:db8::2 - 2001:db8::50
rDNS zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```

Admins no longer have to compute nibble reversals by hand — copy the value straight into the DNS Zones form. Both IPv4 (`/8`, `/16`, `/24`) and IPv6 (any nibble-aligned prefix) are supported; non-aligned prefixes show a "classless delegation required" hint.
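The nibble/octet reversal the hint performs can be reproduced with the Python standard library (a sketch for illustration; `rdns_zone` is a hypothetical helper, not a module API):

```python
import ipaddress

def rdns_zone(prefix: str) -> str:
    """Reverse-DNS zone name for a nibble-/octet-aligned prefix."""
    net = ipaddress.ip_network(prefix)
    if net.version == 6:
        if net.prefixlen % 4:
            raise ValueError("classless delegation required")
        # take the prefix nibbles of the expanded address, reverse them
        nibbles = net.network_address.exploded.replace(":", "")[: net.prefixlen // 4]
        return ".".join(reversed(nibbles)) + ".ip6.arpa"
    if net.prefixlen % 8:
        raise ValueError("classless delegation required")
    octets = str(net.network_address).split(".")[: net.prefixlen // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```

For the `2001:db8::/120` pool above this yields exactly the `…8.b.d.0.1.0.0.2.ip6.arpa` zone shown in the hint.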

### DNS record creation — reliability

A collection of DNS bugs fixed in one pass:

- **IPv6 PTR names** — previously computed via `str_replace(':', '')` which produced a garbage PTR for any compressed IPv6 address (e.g. `2001:db8::1` became `1.8.b.d.1.0.0.2.ip6.arpa` instead of the correct 32-nibble form). Now uses `inet_pton` + `bin2hex`, correct for every IPv6 form.
- **PowerDNS PTR/CNAME/NS content** — automatically wrapped with a trailing dot so PowerDNS strict mode does not reject records with "Record content malformed".
- **HestiaCP server URL** — normalized on save so it always ends with `/`, regardless of how the admin types it.
- **Zone suffix matching** — zones saved with a trailing dot (`example.com.`) now match correctly against record names.
- **IPv6 DNS1-only / DNS2-only** — an IP pool with only `dns1` or only `dns2` now correctly sets the VM's DNS server; previously only the `dns1+dns2` combination worked reliably.
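The PTR fix is worth illustrating. The module's fix uses PHP's `inet_pton` + `bin2hex`; the equivalent normalization in Python (shown here purely for illustration) expands the address first and reverses it nibble by nibble, which is what naive string reversal of the compressed form gets wrong:

```python
import ipaddress

# Expanding "2001:db8::1" first gives the full 32-nibble PTR name;
# reversing the compressed string (the old bug) would drop the zero groups.
addr = ipaddress.ip_address("2001:db8::1")
ptr = addr.reverse_pointer
```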

### Non-blocking DNS errors

DNS API failures (zone missing, provider down, auth error) **never block** deploy, change-package, or terminate. Each zone and each record is wrapped in individual try-catch and logged as a non-blocking event. The operation proceeds with the rest. A summary (`forward_ok/err`, `rev_ok/err`, per-zone messages) is written to the VM log and, when errors occurred, to the WHMCS module log as well.
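The per-record isolation pattern looks roughly like this (an illustrative Python sketch; `create_record` and the summary keys are stand-ins, not the module's actual schema):

```python
def set_records_nonblocking(records, create_record, log):
    """Try every record; one provider failure never aborts the rest."""
    summary = {"ok": 0, "err": 0}
    for rec in records:
        try:
            create_record(rec)
            summary["ok"] += 1
        except Exception as exc:
            summary["err"] += 1
            log(f"non-blocking DNS error for {rec}: {exc}")
    return summary
```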

### VM Management page polish

- **IPs column** — each IP is now shown together with its rDNS on the line below (smaller, muted) as one visual block. Easy to scan.
- **Actions column** — all buttons stay on a single row.
- **Filters** — service-status and vm-status filters remember the admin's last choice in `localStorage` and restore it on the next visit. Default is now "All" on first visit (used to be "Active").

### Admin UX — clearer error surfacing

- VM Log modal shows a prominent red alert banner at the top when `last_error_action` / `last_error_message` are present in the last action log.
- Client Activity Log gets exactly one entry per terminate attempt — "terminated successfully" on success, "termination FAILED — admin attention required: <reason>" on error. No spam.

### Under the hood

- **Full null-safety audit** across both modules: all typed-property assignments, `$params['...']`, `$_GET['...']`, and `explode()[n]` reads now use `?? default` or bounds checks. Prevents `TypeError: cannot access offset on null` warnings on PHP 8.1/8.2 when Proxmox API responses or DB rows omit optional fields.
- Removed dead code: legacy pre-ProxmoxApi ticket parsing, unused `isOk()` helper, commented-out stubs (~40 lines total).
- Cloudflare / HestiaCP DNS clients harden `json_decode` results against non-JSON / empty responses.
- `cleanupFirewall` now distinguishes benign "IPSet does not exist" from real API errors (auth, 500) and logs the latter.
- Live-cron logger is shared across deploy, change-package, and terminate flows — consistent output format everywhere.

- - - - - -

## v3.1 — 16-04-2026

A stability and admin UX release on top of v3.0. Focused on making product configuration self-explanatory and hardening the cron against bad data.

### Actionable errors on the Product Configuration page

The custom Module Settings UI no longer fails silently when something is wrong with the product's Server Group. Instead of a generic "No server found" message, the page now shows a contextual banner with an exact fix-it hint and highlights the affected fields (Node, OS Template, Storages):

- **"Server Group is not selected"** — when the product hasn't been saved or no group is assigned.
- **"Server Group no longer exists"** — when the referenced group was deleted.
- **"Server Group has no servers assigned"** — with a direct path to `Setup → Products/Services → Servers → Edit group`.
- **"Server Group references a missing server"** — when the group still exists but points to a deleted server.

### Cron stability — safe handling of incomplete network data

Fixed a regression where one service with a missing IP-pool entry or server address field could crash the entire `processVirtualMachines` cron run on PHP 8.0+. All assignments from `server_address_list` and IP pool data (netmask, gateway, DNS, bridge, VLAN) are now null-safe, so the cron continues processing the rest of the queue even if a single service has stale or incomplete network configuration.

### Statistics collection fix

`GetStatistics()` now resolves the VM's current Proxmox node before collecting RRD data and safely skips services whose remote node is not yet known (for example, services still in the deployment queue). Prevents spurious errors in the statistics cron.

- - - - - -

## v3.0 (April 2026) — Major Release

Version 3.0 is a complete rewrite with a new architecture, dedicated addon module, and dozens of new features. This is the biggest update since the initial release.

### New Dedicated Addon Module

The PUQ Customization addon module is **no longer required**. Version 3.0 includes its own dedicated addon module (`puq_proxmox_kvm`) with:

- **Dashboard** — centralized overview of all resources: IP pools, DNS zones, KVM services
- **IP Pool Management** — redesigned with per-server pools, usage visualization, improved validation
- **DNS Zone Management** — Cloudflare and HestiaCP integration for automatic forward/reverse DNS
- **VM Management** — centralized view of all VMs across all servers with deploy logs, status monitoring, retry/reset actions, and database record inspection
- **Settings** — multi-page settings (General + Cron) with API timeouts, migration configuration, and per-task cron intervals
- **Auto-migration** — seamless migration from old `puq_customization` tables on first activation
- **Access control** — configurable admin role groups for addon access

### Deploy State Machine

The VM deployment process has been completely rewritten as a **step-by-step state machine**:

- Each deployment step is executed individually with status tracking
- If any step fails, deployment pauses and **resumes automatically** on the next cron run
- Full deploy log with per-step timing, status transitions, and error messages
- Visible in both CLI output and admin UI

**Deploy steps:** Allocate IP → DNS + Clone → Migrate to target node → Set CPU & RAM → Resize system disk → Set disk I/O → Create additional disk → Resize additional disk → Set additional disk I/O → Configure network → Configure firewall → Configure cloud-init → Start VM → Verify running + Email
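The resume-on-next-cron behaviour can be sketched in a few lines (Python for illustration; the step names and `state` dict are hypothetical, not the module's actual schema):

```python
def run_pipeline(steps, state, log):
    """Run (name, fn) steps in order, skipping completed ones;
    stop on the first failure so the next cron run retries from there."""
    for name, fn in steps:
        if name in state["done"]:
            continue                      # finished on a previous run
        try:
            fn()
        except Exception as exc:
            log(f"{name} failed: {exc}")
            return False                  # paused; cron resumes here
        state["done"].append(name)
    return True
```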

### Post-Clone VM Migration

New intelligent migration system for cross-node deployment:

- After cloning to the template node, VMs are **automatically migrated** to the target node with the correct storage
- Supports offline migration with storage mapping (`targetstorage` parameter)
- Finds suitable target nodes based on storage availability and free RAM
- Configurable migration timeout with cron-based retry
- If migration fails or no suitable node is found, VM stays on the template node and deployment continues

### Change Package State Machine

Package upgrades/downgrades have been rewritten with the same state machine approach:

- **12 individual steps**: Update IP/DNS → Stop VM → Set CPU/RAM → Resize disks → Configure I/O → Configure network → Configure firewall → Start VM → Verify running
- Each step checks if a change is actually needed and **skips if no change** detected
- Full logging to `vm_last_action_log` with step-by-step detail
- Resilient to failures — continues from last successful step on next cron run

### Firewall Management

Complete firewall feature — both for deployment and client self-service:

- **Deploy configuration** — firewall options (enable, DHCP, NDP, MAC filter, IP filter, log levels), policies (input/output), and anti-spoofing IPSet are configured during deployment
- **Client area** — full firewall rules management page: add rules, delete rules, drag-and-drop reorder, change input/output policies
- **Admin product settings** — new "Firewall" panel with all options configurable per product
- **Rule validation** — server-side validation of action, direction, protocol, IP/CIDR, port ranges

### Cron System

Flexible cron with two modes:

- **WHMCS Hook mode** (default) — runs automatically with WHMCS cron
- **Standalone mode** — independent cron file with CLI tools (`--task`, `--force`, `--list`, `--help`)
- **Per-task intervals** — each cron task has its own configurable interval (set to 0 to disable)
- **Lock management** — flock-based locking with stale PID detection and configurable timeout
- **Structured results** — each cron function returns `{processed, errors, details}` for monitoring
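The flock-based locking idea can be sketched as follows (Unix-only Python sketch for illustration; the lock-file path and the exact stale-PID handling in the module may differ):

```python
import fcntl
import os

def acquire_lock(path):
    """Take a non-blocking flock and record our PID in the lock file.
    Returns (file_handle, None) on success, (None, holder_pid) when busy."""
    fh = open(path, "a+")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        fh.seek(0)
        holder = fh.read().strip() or "unknown"
        fh.close()
        return None, holder               # lock held elsewhere
    fh.seek(0)
    fh.truncate()
    fh.write(str(os.getpid()))
    fh.flush()
    return fh, None                       # keep fh open; closing releases it
```

Recording the PID is what makes stale-lock detection possible: a later run can check whether the recorded process still exists before deciding the lock is dead.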

### Client Area Redesign

All client area pages have been fully redesigned:

- **AJAX-based** — no Proxmox API calls on page load, all data loaded asynchronously
- **Modern UI** — consistent card-based design with PUQ styling
- **Session cache** — 30-second cache for VM status to reduce API load
- **Fast poll** — 1-second status polling after start/stop for instant feedback
- **New Firewall page** — rules management with drag reorder, policies, add/delete
- **Network info message** — notification when additional IPs need manual configuration
- **Translation support** — all UI text through `L()` helper, 25 languages

### Admin Area Improvements

- **Custom product settings UI** — Bootstrap-based panels replacing default WHMCS configoptions for VM Configuration, Storage, Network, Firewall, Integrations, Email Templates, Client Permissions
- **Real-time VM information** — JSON AJAX panel with status, CPU, RAM, disk, network details
- **Deploy log viewer** — expandable step-by-step deploy history with timing
- **Change package log** — step-by-step change history with skip indicators
- **noVNC + Redeploy buttons** — quick actions in admin service view
- **Metric billing** — bandwidth usage in/out with WHMCS Metric Billing integration

### Security & Stability

- **Path traversal fix** — whitelist validation for addon page routing
- **Admin session check** — explicit `$_SESSION['adminid']` verification in addon AJAX
- **Input validation** — firewall rules, IP/CIDR, port ranges validated server-side
- **Error handling** — safe `explode()` operations, operator precedence fixes, try-catch on database operations
- **DNS log filtering** — sensitive tokens/passwords removed from log output
- **PHP 7.4 compatibility** — `str_contains()` replaced with `strpos()`, no PHP 8.0+ features required

### Compatibility

<table id="bkmrk-component-supported-"><thead><tr><th>Component</th><th>Supported Versions</th></tr></thead><tbody><tr><td>**WHMCS**</td><td>8.x, 9.x</td></tr><tr><td>**PHP**</td><td>7.4, 8.1, 8.2</td></tr><tr><td>**Proxmox VE**</td><td>8.x, 9.x</td></tr><tr><td>**ionCube Loader**</td><td>v13, v14, v15</td></tr></tbody></table>

- - - - - -

## v2.4 — 31-08-2025

- Added support for a custom WHMCS admin panel path.
- Direct links now take the `WHMCS System URL` parameter into account.

## v2.3 — 09-08-2025

- **Breaking change:** authentication switched from login/password to a **Proxmox API token**. Users who update **must** create a token and enter the new credentials in the Proxmox server settings: the username is the token ID in the format `root@pam!whmcs-dev`, and the password is the token secret value.
- Renamed the anti-spoofing rule filter from `wm-VMID` to `ipfilter-net0`.
- Various performance improvements that increased the module's response speed.

> **Warning:** before updating to v2.3+, create a Proxmox API token and enter its details in the server settings.
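For reference, Proxmox API token authentication is a single `Authorization` header of the form `PVEAPIToken=<token-id>=<secret>`, with no login/ticket round-trip. A minimal sketch (the values are placeholders, and the module's internal HTTP client may differ):

```python
def pve_auth_header(token_id: str, secret: str) -> dict:
    """Build the Proxmox VE API-token Authorization header.
    token_id has the form user@realm!tokenname, e.g. root@pam!whmcs-dev."""
    return {"Authorization": f"PVEAPIToken={token_id}={secret}"}

headers = pve_auth_header("root@pam!whmcs-dev",
                          "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")
# e.g. GET https://pve.example.com:8006/api2/json/version with these headers
```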

## v2.2 — 14-07-2025

- Backup restoration mechanism improved.
- Security fixes implemented.
- Client web interface updated: button-related bugs fixed, loaders added.
- Adapted for compatibility with Proxmox v8.4.

## v2.0 — 23-09-2024

- Module is now coded with **ionCube v13**.
- Supported PHP versions: 
    - PHP 7.4 — WHMCS ≤ 8.11.0
    - PHP 8.1 — WHMCS ≥ 8.11.0
    - PHP 8.2 — WHMCS ≥ 8.11.0
- Added an active-status check for the PUQ Customization addon and the `Module PuqProxmoxKVM` extension.

## v1.5 — 04-03-2024

- Fixed a `No IPv6 addresses available` error that occurred during IPv6 assignment in some cases.
- Fixes in client-zone templates.
- Changed the display of the product in the admin area.
- Added metrics for incoming and outgoing traffic (usage-based billing now possible).

## v1.4.5 — 11-10-2023

- Added support for WHMCS 8.8.0.
- Translations added/updated for 25 languages: Arabic, Azerbaijani, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, French, German, Hebrew, Hungarian, Italian, Macedonian, Norwegian, Polish, Romanian, Russian, Spanish, Swedish, Turkish, Ukrainian.

## v1.4 — 24-07-2023

- Added synchronization of forward and reverse DNS zones (requires PUQ Customization): **Cloudflare**, **HestiaCP**.
- The "Change Package" function moved to cron.
- Fixed a bug related to the default operating system template.
- Added Virtual Machine Templates (CentOS 9).

## v1.3 — 11-07-2023

- Integration with **PUQ Customization** (FREE addon).
- **IPv6** support (requires PUQ Customization).
- Ability to create VMs with IPv6 only.
- Added **pools of IP addresses** (requires PUQ Customization).
- Added ability to define multiple IPv4 and IPv6 addresses.
- Added configurable options for RAM, CPU, IPv4, IPv6.
- Added a check that re-runs cloning if the previous cloning attempt failed.
- Redesigned the main screen of the client area (dropdown list with VM network settings).
- Changed the display of VM graphs in the admin area (3 graphs per row).
- Removed the "name servers" fields from the order form.
- Added VM templates (Debian 12, Ubuntu 22.04).

## v1.2.1 — 04-03-2023

- Support for PHP 8.1 and PHP 7.4.
- Changes made to templates.

## v1.2 — 06-01-2023

- Support for WHMCS 8.6.
- Support for ionCube PHP Loader v12.
- Support for PHP 8.1.
- Changes made to templates.

## v1.1 — 12-10-2022

- Security improvements.
- Remote debug logging in the admin panel.
- Corrections in translations.
- Fixed a bug that incorrectly checked whether a service belongs to the logged-in client in the client area.

## v1.0 — 19-09-2022

First public release.

# Installation and Configuration

Step-by-step guide to installing the PUQ Proxmox KVM module in WHMCS, connecting it to a Proxmox server or cluster, preparing VM templates, and configuring the noVNC console, email templates and cron. Follow the pages in order for a clean setup.

# Basic concepts and requirements


## System Requirements

| Requirement | Supported Versions |
|-------------|---------------|
| WHMCS | 8.x, 9.x |
| PHP | 7.4, 8.1, 8.2 |
| Proxmox VE | 8.x, 9.x |
| ionCube Loader | v13 or newer |

## Required PHP Extensions

The following PHP extensions must be enabled on the WHMCS server:

- **cURL** (`curl`) — required for API communication with Proxmox
- **JSON** (`json`) — required for parsing API responses

## Network Requirements

The WHMCS server must be able to reach the Proxmox API over the network on **port 8006** (HTTPS). Ensure that any firewalls between the WHMCS server and the Proxmox host allow outbound TCP connections on this port.

## Module Components

The PUQ Proxmox KVM module consists of **two components**. Both are **required** and must be installed for the module to function.

| Component | Type | Directory |
|-----------|------|-----------|
| Server Module | `puqProxmoxKVM` | `modules/servers/puqProxmoxKVM/` |
| Addon Module | `puq_proxmox_kvm` | `modules/addons/puq_proxmox_kvm/` |

The **Server Module** handles VM provisioning, client area interface, admin service management, and all direct Proxmox API operations.

The **Addon Module** manages IP address pools, DNS zones, VM management dashboard, cron task orchestration, and global settings. The server module depends on the addon module for IP allocation, cron processing, and centralized configuration.

> **Note:** The **PUQ Customization** addon module is **no longer required**. All functionality previously provided by PUQ Customization has been replaced by the built-in addon module (`puq_proxmox_kvm`). If you are upgrading from a version prior to v3.0, you may safely remove PUQ Customization after installing the new addon module.

## Proxmox Requirements

- API access enabled on the Proxmox host (enabled by default)
- A user account with appropriate permissions for VM management (e.g., `root@pam` or a dedicated API token user)
- At least one storage configured for VM disks
- At least one network bridge configured (e.g., `vmbr0`)
- Cloud-init support on VM templates (recommended)

## WHMCS Requirements

- Administrator access to the WHMCS admin area
- File upload permissions to the WHMCS installation directory
- A valid PUQ Proxmox KVM license key
- WHMCS cron job properly configured (for automated provisioning)

## Supported Languages

The module includes translations for 25 languages:

| | | | | |
|---|---|---|---|---|
| Arabic | Azerbaijani | Catalan | Chinese | Croatian |
| Czech | Danish | Dutch | English | Estonian |
| Farsi | French | German | Hebrew | Hungarian |
| Italian | Macedonian | Norwegian | Polish | Romanian |
| Russian | Spanish | Swedish | Turkish | Ukrainian |

## Additional operational requirements

- **Continuous and stable network connectivity** between the WHMCS host, the Proxmox cluster and the VNCproxy host. Brief network drops cause deployments to pause and resume on the next cron tick — in v3.0 that's handled by the state machine, but a persistently flaky network will stall provisioning.
- **Static IPs** — if you use static IPv4/IPv6, you need the required number of free IP addresses reserved for virtual machines.
- **DHCP** — if the VM network uses a DHCP server, it must be configured correctly. When the module is set to DHCP, it does **not** manage IP allocation or firewall rules, only bandwidth, bridge and VLAN on the network card.
- **VLANs** — if the network uses VLANs, your internal networking must carry the VLAN to every node of the cluster.
- **noVNC WEB console** — requires a separate VNCproxy installation with access both to the internet and to the Proxmox cluster's VNC port range (5900–5999). See the [VNCproxy / noVNC](06-vncproxy-novnc.md) chapter.
- **DNS synchronization** — for forward/reverse DNS sync you need a supported DNS provider (in v3.0: **Cloudflare**, **HestiaCP** or **PowerDNS**, configured in the addon). For legacy setups a DNS API proxy / external automation against the `dns.php` endpoint is still supported.
- **Single-node installs** — a **Directory** or **NFS** datastore is required for VM disks.
- **ISO storage** — ISO images can live on a separate network storage configured as ISO storage in the product.
- **Backup storage** — backups also need network storage that is reachable from every node. Proxmox Backup Server is supported. Make sure the datastore intended for backups does **not** aggressively rotate copies, or that its rotation is aligned with the backup count defined in the client's package.
- **Anti-spoofing firewall** — if you want firewall rules that protect against IP spoofing, the firewall on the Proxmox server/cluster must be preconfigured with an incoming/outgoing **DENY** policy. The module then adds permissive rules matching the VM's own IP.

## The logic of the module

This section is a high-level overview of what happens for each lifecycle operation. In v3.0 every stage is driven by a **state machine** with resume-on-failure semantics; in v2.x the same steps were executed as one monolithic cron call.

### Creating a new virtual machine

1. After the client orders and pays for a virtual machine service, WHMCS calls the `CreateAccount` function.
2. An available IP address is selected from the server's IP pool. *Note: IPs of **Terminated** services are recycled back into the free pool and may be reused.*
3. A free virtual machine **VMID** is chosen — unique both in WHMCS and in Proxmox.
4. The hostname and VM name are generated from the package template (`<prefix>-<client_id>-<service_id>`).
5. The module starts cloning the virtual machine from the configured template.
6. The client is notified by email that the virtual machine is being created (**Welcome email**).
7. From this point the internal **cron** takes over and walks the VM through the deploy state machine. Each run of cron advances one or more steps depending on what's ready.
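The naming scheme from step 4 can be sketched in shell (the prefix and IDs below are illustrative; the module substitutes the real package prefix and WHMCS IDs):

```shell
# Illustrative values — not taken from a real service.
prefix="kvm"
client_id=101
service_id=5546

hostname="${prefix}-${client_id}-${service_id}"
echo "$hostname"   # kvm-101-5546
```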

> **Changed in v3.0.** The deploy pipeline is a proper state machine — on any failure the VM stays in the last successful state and the next cron tick resumes from there. Earlier versions ran all steps in one go and would stall the whole service on a single transient error.

Deploy steps (v3.0):

1. `VMSetDedicatedIp` — allocate a free IP from the pool
2. `VMSetDNSRecords` — create forward/reverse DNS records (non-blocking — a failed DNS provider does not stop deployment)
3. `VMClone` — clone from the template (always on the template node)
4. **`migrateToTargetNode`** *(new in v3.0)* — offline migrate the freshly cloned VM to the target node / target storage
5. `VMSetCpuRam`
6. `VMSetSystemDiskSize`
7. `VMSetSystemDiskBandwidth`
8. `VMSetCreatedAdditionalDisk` (skipped if additional disk is not configured)
9. `VMSetAdditionalDiskSize`
10. `VMSetAdditionalDiskBandwidth`
11. `VMSetNetwork` — bridge, VLAN, MAC, bandwidth
12. `VMSetFirewall` — options, anti-spoofing IPSet, policies *(extended in v3.0 via the new `VMFirewall` class)*
13. `VMSetCloudinit` — user, password, network config, hostname
14. `VMStart`
15. **Verify running + `ServiceSendEmailVMReady`** — success email with access parameters (IP, user, password)

Example of a successful cron run (v3.0 output format):

![Deploy success — cron output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-wumbhlsr.png)

```
2026-04-10 03:49:18 PUQ Proxmox KVM Cron Start
2026-04-10 03:49:18 [processVirtualMachines] Running (interval: 1m, last: 03:34:00)
--- Deploy: service=5546 vm=2002 status=clone ---
[+]  1. Migrate to target node      success (clone -> migrated)                                [30.23s]
[+]  2. Set CPU & RAM                success (migrated -> set_cpu_ram)                         [0.11s]
[+]  3. Resize system disk           success (set_cpu_ram -> set_system_disk_size)             [0.12s]
[+]  4. Set system disk I/O          success (set_system_disk_size -> set_system_disk_bandwidth) [0.13s]
[+]  5. Create additional disk       success (set_system_disk_bandwidth -> set_created_additional_disk) [3.14s]
[+]  6. Resize additional disk       success (set_created_additional_disk -> set_additional_disk_size)  [1.08s]
[+]  7. Set additional disk I/O      success (set_additional_disk_size -> set_additional_disk_bandwidth)[0.13s]
[+]  8. Configure network            success (set_additional_disk_bandwidth -> set_network)    [0.12s]
[+]  9. Configure firewall           success (set_network -> set_firewall)                     [0.39s]
[+] 10. Configure cloud-init         success (set_firewall -> set_cloudinit)                   [1.31s]
[+] 11. Start VM                     success (set_cloudinit -> starting)                       [8.18s]
[+] 12. Verify running + Email       success (starting -> ready)                               [0.46s]
--- Deploy complete: service=5546 status=ready ---
2026-04-10 03:50:16 [processVirtualMachines] Done (57.5s) — processed: 2, errors: 0
2026-04-10 03:50:16   sid:5546 action:deploy result:success
2026-04-10 03:50:16 PUQ Proxmox KVM Cron End (57.6s total)
```

Key things to notice in the v3.0 format:

- Every cron tick is wrapped between `PUQ Proxmox KVM Cron Start` / `... Cron End` lines, so you can see exactly which tick did what.
- Each deploy is introduced by `--- Deploy: service=<sid> vm=<vmid> status=<current_status> ---` and terminated by `--- Deploy complete: ... status=ready ---`.
- Every step prints a human label (`1. Migrate to target node`, `2. Set CPU & RAM`, ...), the `success` keyword, a **state transition** (`clone -> migrated`) and the **duration in seconds** in square brackets. This makes it trivial to spot the one step that is slow.
- After the deploy block the cron task summary is printed: `[processVirtualMachines] Done (X.Xs) — processed: N, errors: M`. Non-zero `errors` means at least one VM ran into a problem — look at the per-sid lines directly below.
- At the end every task's structured result (`processed`, `errors`) is rolled up into a short single line — useful for external monitors that tail `stdout`.
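For instance, an external monitor tailing the captured cron output could alert on any tick whose summary reports a non-zero error count. A minimal sketch, using a throwaway sample log so it runs standalone (the log path and alert action are placeholders):

```shell
# Two sample summary lines in the format the cron prints.
printf '%s\n' \
  '2026-04-10 03:50:16 [processVirtualMachines] Done (57.5s) — processed: 2, errors: 0' \
  '2026-04-10 03:53:51 [processVirtualMachines] Done (28.1s) — processed: 2, errors: 1' > cron.log

# Match only summary lines whose error counter is non-zero.
grep -E 'Done .* errors: [1-9][0-9]*$' cron.log \
  && echo "ALERT: cron tick finished with errors"
```

In a real setup you would point the `grep` at wherever you redirect the cron's stdout and replace the `echo` with your alerting hook.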

> **Changed in v3.0.** The v1.x–v2.x output was a flat list like `VMSetCpuRam: success`. It is gone entirely — if you see that format anywhere, you are running an older version.

### Changing the package of an existing virtual machine

Package upgrades and downgrades (WHMCS → client or admin → *Upgrade/Downgrade*) use the **same state machine pattern as deploy** — they walk the VM through a separate set of `cp_*` states, one per change to apply. See also the [`change_package` admin section](../05-admin-area/02-service-management.md) for the full step list and state values.

Every `change_package` step also:

- checks whether the new package value actually differs from what the VM has right now and **skips the step if there is no change** (you will see `skip — no change` in the output),
- prints its own timing and transition in the same bracketed `[x.xxs]` format as deploy,
- stops the VM once at the beginning (`cp_stop`) and starts it again at the end (`cp_start`) — these two steps always run, even when every other step is skipped.

Example of a successful change-package cron run:

![Change package success — cron output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-wnojvkh6.png)

```
2026-04-10 03:54:38 PUQ Proxmox KVM Cron Start
2026-04-10 03:54:38 [processVirtualMachines] Running (interval: 1m, last: 03:53:45)
--- ChangePackage: service=5546 vm=2002 status=cp_firewall ---
[+] 1. Configure firewall   success (cp_firewall -> cp_start) [0.05s]
[+] 2. Verify VM            success (cp_start -> ready)       [0.05s]
--- ChangePackage complete: service=5546 status=ready ---
2026-04-10 03:54:40 [processVirtualMachines] Done (2.1s) — processed: 2, errors: 0
2026-04-10 03:54:40   sid:5546 action:change_package result:success
2026-04-10 03:54:40 PUQ Proxmox KVM Cron End (2.2s total)
```

This second run only does two steps because the **previous** cron tick had already performed the heavy part of the change (`cp_cpu_ram`, `cp_system_disk_*`, `cp_additional_disk_*`, `cp_network`) and the state machine stopped at `cp_firewall`. On resume it picks up from exactly where it was — which is the whole point of the state machine.

Example of a change-package that partially failed and is scheduled for retry:

![Change package retry — cron output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-dziwswuv.png)

```
2026-04-10 03:53:23 PUQ Proxmox KVM Cron Start
2026-04-10 03:53:23 [processVirtualMachines] Running (interval: 1m, last: 03:50:16)
--- ChangePackage: service=5546 vm=2002 status=change_package ---
[+]  1. Update IP + DNS + Firewall            success (change_package -> cp_stop)                             [0.54s]
[+]  2. Stop VM                               success (cp_stop -> cp_cpu_ram)                                 [10.19s]
[+]  3. Set CPU & RAM                         success (cp_cpu_ram -> cp_system_disk_size)                     [0.22s]
[+]  4. Resize system disk                    success (cp_system_disk_size -> cp_system_disk_bandwidth)       [0.19s]
[+]  5. Set system disk I/O                   success (cp_system_disk_bandwidth -> cp_additional_disk)        [0.22s]
[+]  6. Create additional disk                success (cp_additional_disk -> cp_additional_disk_size)         [1.16s]
[+]  7. Resize additional disk                success (cp_additional_disk_size -> cp_additional_disk_bandwidth)[0.17s]
[+]  8. Set additional disk I/O               success (cp_additional_disk_bandwidth -> cp_network)            [0.04s]
[+]  9. Configure network (skip — no change)  success (cp_network -> cp_firewall)                             [0.07s]
[+] 10. Configure firewall                    success (cp_firewall -> cp_start)                               [ ... ]
[+] 11. Start VM                              VM failed to start (status: stopped). Will retry.               [5.11s]
--- ChangePackage paused: waiting at (cp_firewall) ---
2026-04-10 03:53:51 [processVirtualMachines] Done (28.1s) — processed: 2, errors: 1
2026-04-10 03:53:51   sid:5546 action:change_package VM failed to start (status: stopped). Will retry. (cp_firewall -> cp_start) [5.11s]
2026-04-10 03:53:51 PUQ Proxmox KVM Cron End (28.1s total)
```

Things to notice in this retry run:

- Step 9 printed `(skip — no change)` because the product's network bridge/VLAN did not actually change — the state machine still advances the status (`cp_network -> cp_firewall`) but does not touch Proxmox at all.
- Step 11 failed on `VM failed to start`. Instead of rolling back the whole change, the state machine **pauses** and the overall cron summary ends with `ChangePackage paused: waiting at (cp_firewall)` + `errors: 1`.
- On the next cron tick the module picks up at `cp_firewall` and runs steps 10–11 again. That is exactly the previous successful run shown above.

> **Changed in v3.0.** `change_package` was an atomic single-shot operation in v1.x–v2.x — a failure during the resize or the start step would leave the VM in an inconsistent half-changed state and the admin had to fix it by hand. The v3.0 state machine makes the whole thing idempotent and recoverable on the next cron tick.

### Reinstalling the virtual machine

The reinstallation procedure removes the VM and recreates it using the current package parameters while keeping the original IP, MAC address, VLAN and VMID.

1. Snapshots are deleted. **Backups are kept intact** — you can still restore a backup from the pre-reinstall state.
2. The VM is removed.
3. A fresh clone is started from the template chosen during the reinstall action (can be a different OS than before).
4. The state machine then runs through: `VMSetCpuRam → VMDeleteDNSRecords → VMSetDNSRecords → VMSetSystemDiskSize → VMSetSystemDiskBandwidth → additional disk steps → VMSetNetwork → VMSetFirewall → VMSetCloudinit → VMStart`.
5. A success email is sent to the client with the access parameters.

### Snapshots

- The client can create, delete and restore snapshots of their VM from the client area.
- The number of snapshots is limited in the package configuration.
- Snapshot lifetime is configured per-product (1–10 days maximum).
- A cron task automatically deletes snapshots older than the configured lifetime.

### Backups

- The client can create, delete and restore backups directly from the client area.
- The number of backups is limited in the package configuration.
- **Automatic backups**: the client selects the days on which the backup should run. The exact time-of-day is assigned automatically and randomly by the cron system each time the schedule is saved — this is done to spread the backup load across your Proxmox storage.
- On each cron tick, the module:
  1. Checks whether today's schedule entry for this VM is due (i.e. the target time is in the past compared to "now").
  2. Checks whether a backup for today already exists.
  3. Checks whether the backup slot limit has been reached — if yes, deletes the oldest backup to make room.
  4. Creates the new backup.
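The decision chain above can be sketched as plain shell logic (all values are illustrative; the module keeps this state in its database, not in variables):

```shell
# Illustrative inputs, times as minutes since midnight.
now=870              # current time, 14:30
due=180              # today's scheduled slot, 03:00
backup_today=0       # 1 if today's backup already exists
count=5 limit=5      # current backups vs package slot limit

if [ "$now" -ge "$due" ] && [ "$backup_today" -eq 0 ]; then
  if [ "$count" -ge "$limit" ]; then
    echo "delete oldest backup"   # free a slot first
  fi
  echo "create backup"
fi
```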

### Backup recovery

- The VM must be **powered off** before the backup restore runs.
- Once the restore completes, the module:
  - Re-applies CPU/RAM if the package values differ from the restored VM.
  - Re-applies disk size and bandwidth.
  - Re-creates additional disks if needed.
  - Re-applies network configuration (bridge, VLAN, bandwidth, MAC).
  - Starts the VM.
  - Sends the **Backup restored** email to the client.
- If the restore fails, the client is given the option to retry the restore or to reinstall the VM.
- While a backup is being created or restored, **all other management operations on the VM are suspended**.

### Reset password

The password reset procedure relies on cloud-init — it works only if the `cloud-init` packages have not been removed from the VM (see the [Virtual Machine Templates](05-virtual-machine-templates.md) chapter).

1. The VM must be **powered off**.
2. A new random password is generated and saved in the WHMCS service settings.
3. Cloud-init is rewritten with the new credentials.
4. The VM is started.
5. The **Reset password** email is sent to the client with the new access parameters.

### Mounting an ISO image

ISO images are stored on Proxmox in the usual way (shared storage in clusters; directory storage is fine on single nodes). Upload ISOs to Proxmox in advance.

To make the selection easier in the client UI, the module groups ISO images by the part of the filename **before the first `-` character**:

- `Debian-12.5.0-amd64-netinst.iso` → group **Debian**
- `Ubuntu-22.04.4-live-server-amd64.iso` → group **Ubuntu**
- `proxmox-ve_8.1-2.iso` → group **proxmox** (the `_` is **not** treated as a separator; only the first `-` counts)
- `myfile.iso` (no dash) → group **OTHER**

Pay attention to your file naming conventions when uploading ISOs.
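The grouping rule is equivalent to taking everything before the first `-`. A shell sketch of the rule (not the module's actual code):

```shell
group_of() {
  local iso="$1"
  local group="${iso%%-*}"      # strip everything from the first '-' onward
  # No '-' at all: the name is unchanged, so the file goes to OTHER.
  [ "$group" = "$iso" ] && group="OTHER"
  printf '%s\n' "$group"
}

group_of "Debian-12.5.0-amd64-netinst.iso"   # Debian
group_of "proxmox-ve_8.1-2.iso"              # proxmox
group_of "myfile.iso"                        # OTHER
```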


<!-- sync:6e6a850fdfd0dd6d -->

# WHMCS Module Installation and Update


## Download

The module is distributed as a single ZIP archive. A separate build is published for each supported PHP major version — pick the one that matches the PHP runtime used by your WHMCS installation.

All versions and historical builds are available in the index:

- [https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/)

### Direct "latest" downloads

#### PHP 8.2

```bash
wget https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/php82/PUQ_WHMCS-Proxmox-KVM-latest.zip
```

#### PHP 8.1

```bash
wget https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/php81/PUQ_WHMCS-Proxmox-KVM-latest.zip
```

#### PHP 7.4

```bash
wget https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/php74/PUQ_WHMCS-Proxmox-KVM-latest.zip
```

> Not sure which PHP version your WHMCS runs on? Check **Utilities > System > PHP Info** in the WHMCS admin area.

## Installation

### Step 1: Unzip the Archive

On your WHMCS server (or locally, before uploading):

```bash
unzip PUQ_WHMCS-Proxmox-KVM-latest.zip
```

The archive extracts into a `PUQ_WHMCS-Proxmox-KVM/` directory containing two module folders: `puqProxmoxKVM` (server module) and `puq_proxmox_kvm` (addon module).

### Step 2: Copy the Server Module

Copy and replace `puqProxmoxKVM` from the extracted `PUQ_WHMCS-Proxmox-KVM/` directory to your WHMCS installation:

```
PUQ_WHMCS-Proxmox-KVM/puqProxmoxKVM  →  WHMCS_WEB_DIR/modules/servers/puqProxmoxKVM/
```

Example:

```bash
cp -r PUQ_WHMCS-Proxmox-KVM/puqProxmoxKVM /var/www/html/whmcs/modules/servers/
```

### Step 3: Copy the Addon Module

Copy and replace `puq_proxmox_kvm` from the extracted directory to your WHMCS installation:

```
PUQ_WHMCS-Proxmox-KVM/puq_proxmox_kvm  →  WHMCS_WEB_DIR/modules/addons/puq_proxmox_kvm/
```

Example:

```bash
cp -r PUQ_WHMCS-Proxmox-KVM/puq_proxmox_kvm /var/www/html/whmcs/modules/addons/
```

### Step 4: Activate the Addon Module

1. Log in to the WHMCS admin area
2. Navigate to **Setup > Addon Modules**
3. Find **PUQ Proxmox KVM** in the list
4. Click **Activate**
5. Enter your license key
6. Configure access control to grant the appropriate admin roles access to the addon

![Addon activation and access control](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-lsnsjhkj.png)

### Step 5: Verify Installation

After activation, navigate to **Addons > PUQ Proxmox KVM** in the admin menu. You should see the addon dashboard confirming a successful installation.

## File Structure

After installation, the module files should be located at:

```
whmcs/
├── modules/
│   ├── servers/
│   │   └── puqProxmoxKVM/          # Server module
│   │       ├── puqProxmoxKVM.php
│   │       └── ...
│   └── addons/
│       └── puq_proxmox_kvm/        # Addon module
│           ├── puq_proxmox_kvm.php
│           └── ...
```

## Update Procedure

To update the module to a newer version:

1. **Deactivate** the addon module in **Setup > Addon Modules**
2. Download the latest module archive from puqcloud.com
3. Upload and overwrite the server module files in `modules/servers/puqProxmoxKVM/`
4. Upload and overwrite the addon module files in `modules/addons/puq_proxmox_kvm/`
5. **Reactivate** the addon module in **Setup > Addon Modules**

> **Important:** Database tables and all configuration data are preserved during the deactivate/reactivate cycle. Your IP pools, DNS zones, VM records, and settings will remain intact.

> **Tip:** Always back up your WHMCS installation before performing an update.


<!-- sync:8496fad1dd76c450 -->

# Addon Module Setup


## After Activation

When the addon module is activated for the first time, it automatically:

- Creates all required database tables
- Sets default values for all settings
- Initializes the cron system

No manual database setup is required.

## Accessing the Addon

Navigate to **Addons > PUQ Proxmox KVM** in the WHMCS admin menu. The addon dashboard provides a centralized management interface.

![Addon dashboard](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-hmwpzqdl.png)

## Addon Features

The addon module provides:

- **IP Pools** — Manage IPv4 and IPv6 address pools per server, with automatic allocation during VM deployment
- **DNS Zones** — Configure Cloudflare and HestiaCP integration for forward and reverse DNS automation
- **VM Management** — Overview of all provisioned VMs with deploy logs and status tracking
- **Cron Tasks** — Configure cron intervals, view task status, and manage lock files
- **Settings** — Global module settings including API timeouts, migration behavior, and cron configuration

## PUQ Customization Addon No Longer Required

Starting from v3.0, the PUQ Proxmox KVM module includes its own dedicated addon module (`puq_proxmox_kvm`). The old PUQ Customization addon with the ModulePuqProxmoxKVM extension is **no longer needed**.

If you are upgrading from an earlier version that used PUQ Customization:

1. Install and activate the new standalone addon module
2. Existing data (IP pools, DNS zones) will be migrated automatically on activation
3. Verify that all IP pools and DNS zones are present in the new addon
4. You can then safely deactivate the ModulePuqProxmoxKVM extension in PUQ Customization

> **Note:** The server module supports both the new standalone addon and the old PUQ Customization addon simultaneously during the migration period, so there is no downtime during the transition.


<!-- sync:e589e2b21db32626 -->

# Create new server for Proxmox in WHMCS


## Preface

For the module to work properly, you must configure the server settings in your main WHMCS panel. This is the place where you register a Proxmox server (or Proxmox cluster) which will then be used by the module to build KVM virtual machines. Here you define access credentials, IP ranges and additional settings.

> **Attention.** If you have only one server, or you do not use server groups, you need to make this server the **active default** for new signups by opening the server entry in WHMCS and ticking *"Make this server the active default for new signups"*.

## Server creation

Log in to your WHMCS panel and create a new Proxmox server:

**System Settings → Products/Services → Servers → Add New Server**

![Navigate to Servers and click Add New Server](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-cdvlalpj.png)

### Step 1: Name, Hostname and Assigned IP Addresses

- Enter the correct **Name** and **Hostname** of the Proxmox node.
- In the **Assigned IP Addresses** field enter the list of IP addresses that will be reserved for virtual machines built on this server.

![Name, Hostname and Assigned IP Addresses fields](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-sywouchc.png)

> **Note.** Starting with module version **1.3**, the module supports IPv4/IPv6 pools managed in the addon. For new installations this is the recommended way to manage IP addresses — see the **IP Pools** chapter of this documentation. The "Assigned IP Addresses" field described below is the legacy format and is kept for backward compatibility.

#### Format to follow in the Assigned IP Addresses field

To define the available pool of IP addresses, enter one line per IP, with fields separated by the `|` character. Each line has the following structure:

```
<bridge>|<vlan_tag>|<IP_address>|<net_mask>|<Gateway>|<DNS1>,<DNS2>
```

| Field | Description |
|-------|-------------|
| `<bridge>` | The virtual bridge to which the VM network interface is connected (e.g. `vmbr0`). |
| `<vlan_tag>` | VLAN tag that will be set on the VM's network card. If VLANs are not used, enter `0`. |
| `<IP_address>` | IPv4 address that will be assigned to the VM. |
| `<net_mask>` | Network mask in CIDR form (e.g. `24`). |
| `<Gateway>` | Default gateway for the subnet. |
| `<DNS1>,<DNS2>` | Comma-separated list of DNS servers. |

##### Example

```
vmbr0|10|192.168.10.2|24|192.168.10.1|8.8.8.8,1.1.1.1
vmbr0|10|192.168.10.3|24|192.168.10.1|8.8.8.8,1.1.1.1
vmbr0|10|192.168.10.4|24|192.168.10.1|8.8.8.8,1.1.1.1
vmbr0|30|192.168.20.2|24|192.168.20.1|8.8.8.8,1.1.1.1
vmbr0|30|192.168.20.3|24|192.168.20.1|8.8.8.8,1.1.1.1
vmbr0|30|192.168.20.4|24|192.168.20.1|8.8.8.8,1.1.1.1
vmbr1|333|172.16.5.2|24|172.16.5.1|8.8.8.8,1.1.1.1
vmbr1|333|172.16.5.3|24|172.16.5.1|8.8.8.8,1.1.1.1
vmbr1|333|172.16.5.4|24|172.16.5.1|8.8.8.8,1.1.1.1
vmbr3|0|10.0.25.2|24|10.0.25.1|10.0.10.10,10.0.10.20
vmbr3|0|10.0.25.3|24|10.0.25.1|10.0.10.10,10.0.10.20
vmbr3|0|10.0.25.4|24|10.0.25.1|10.0.10.10,10.0.10.20
vmbr3|0|10.0.25.5|24|10.0.25.1|10.0.10.10,10.0.10.20
vmbr3|0|10.0.25.6|24|10.0.25.1|10.0.10.10,10.0.10.20
```
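Before pasting a long list into WHMCS, you can sanity-check it with a one-liner that flags any line not having exactly 6 `|`-separated fields. A hypothetical helper, shown here against a throwaway sample file (one good line, one with a missing DNS field):

```shell
# Throwaway sample — in practice, point the awk at your drafted list.
cat > assigned_ips.txt <<'EOF'
vmbr0|10|192.168.10.2|24|192.168.10.1|8.8.8.8,1.1.1.1
vmbr0|10|192.168.10.3|24|192.168.10.1
EOF

# Flags every line that does not have exactly 6 fields.
awk -F'|' 'NF != 6 { printf "line %d: expected 6 fields, got %d\n", NR, NF }' assigned_ips.txt
```

No output means every line is well-formed.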

### Step 2: Server Details — module and credentials

In the **Server Details** section select the **PUQ Proxmox KVM** module and enter the correct credentials for the Proxmox API. Then click **Test connection** to verify.

> **Attention.** Starting from module version **2.3**, authentication has been changed to **token-based**.
> - **Username** — Proxmox token ID in the format `root@pam!whmcs-dev`
> - **Password** — the token secret value
>
> If you are using a version earlier than 2.3, enter the Proxmox username in the format `root@pam` in the **Username** field and the corresponding password in the **Password** field.
>
> During operation, the module will automatically fill in the **Access Hash** field. You do not need to fill it manually.

#### Version 2.3+ — Token authentication

![Server Details with Proxmox token authentication (v2.3+)](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-bprrnwun.png)

#### Version 2.2 and earlier — Password authentication

![Server Details with Proxmox password authentication (v2.2-)](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-gvj04vch.png)

##### Creating a Proxmox API Token

1. Log in to the Proxmox web UI.
2. Go to **Datacenter → Permissions → API Tokens**.
3. Click **Add**.
4. Select the **User** (e.g. `root@pam`).
5. Enter a **Token ID** (e.g. `whmcs`).
6. **Uncheck** *Privilege Separation* if the token should inherit the user's full permissions. If privilege separation is enabled, you must assign permissions to the token itself.
7. Click **Add**.
8. **Copy the generated token secret immediately** — it is displayed only once and cannot be retrieved later.

The resulting username for WHMCS will look like `root@pam!whmcs` and the password will be the token secret (a UUID-like string).
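You can verify the token outside WHMCS with a direct API call. Proxmox API tokens authenticate via the `PVEAPIToken` authorization header; the host and secret below are placeholders:

```shell
# Placeholder values — use your own host, token ID and secret.
PVE_HOST="pve.example.com"
TOKEN_ID="root@pam!whmcs"
TOKEN_SECRET="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

AUTH_HEADER="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"

# A valid token returns the PVE version as JSON; a 401 means a bad
# token ID/secret, a timeout means a network/firewall problem.
curl -k -s -H "$AUTH_HEADER" \
  "https://${PVE_HOST}:8006/api2/json/version" || echo "request failed"
```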

### Step 3: Make the server default (single-server installs)

If you have only one Proxmox server, or you do not use server groups, open the server entry and tick **"Make this server the active default for new signups"**. Otherwise newly ordered products will not be assigned to this server automatically.

![Make this server the active default for new signups](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-j652ywv9.png)

## Test Connection

After saving the server configuration, always use the **Test Connection** button to verify:

- Network connectivity to the Proxmox host on port **8006**
- Authentication credentials are valid (token ID + secret, or username + password)
- The API user / token has sufficient permissions on the target nodes and storages

If the test fails, check:

- The WHMCS server can reach the Proxmox host on port `8006`
- The username and password/token are correct
- The Proxmox API service (`pveproxy`) is running
- No firewall is blocking the connection between WHMCS and Proxmox

## Server Groups

You can organize multiple Proxmox servers into **Server Groups** for automatic server selection during provisioning. This is useful when you have a Proxmox cluster with multiple nodes.

1. Go to **System Settings → Products/Services → Servers**
2. Click the **Server Groups** tab
3. Create a new group and assign your Proxmox servers to it
4. Set the **Fill Type**:
   - **Fill** — fills one server before moving to the next
   - **Round Robin** — distributes VMs evenly across servers
5. When configuring a product, select the server group instead of a specific server

> **Tip.** When using server groups with a Proxmox cluster, ensure the required storages exist on every node, or enable the VM migration step in the addon settings so the module moves freshly-cloned VMs to the correct target node automatically.


<!-- sync:6925cb0764dbc33d -->

# Virtual Machine Templates


## Overview

The module provisions virtual machines by cloning existing Proxmox VM templates. Before using the module, you need to prepare at least one VM template in your Proxmox environment. Templates must be properly prepared inside the Proxmox panel — the module does not install an operating system from ISO, it only clones a ready template and applies configuration via cloud-init.

## Physical Requirements

VM templates must meet the following specifications:

- **Resource sizing**: all template parameters (CPU cores, RAM, system disk size) must be **smaller** than the smallest package you plan to offer clients. The module can grow disks and increase CPU/RAM during deployment, but it **cannot shrink** them.
- **Multi-disk layout**: if you plan to offer VMs with multiple disks on different storage locations, create those disks on the appropriate storage already in the template. Otherwise Proxmox may consolidate all disks on the same storage during clone.
- **Cloud-init drive**: a cloud-init disk is mandatory for automatic VM configuration after cloning.
- **Partition layout**: partitions of the system disk must be arranged so that the **root partition is the LAST** partition in the table. This is required for automatic root filesystem expansion by `cloud-initramfs-growroot` during first boot.

## Cloud-Init Requirement

Cloud-init is **required** for automatic VM configuration. The module uses cloud-init to set:

- Hostname
- IP address and network configuration
- DNS servers
- User account and password
- SSH keys

Without cloud-init, the module cannot automatically configure the VM's network and credentials after cloning.
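These settings correspond to Proxmox's standard cloud-init options on the VM. What the module applies through the API can be roughly reproduced with `qm` on a Proxmox node — a sketch with a hypothetical VMID and values, not the module's literal calls:

```shell
# Run on a Proxmox node. Hypothetical VMID and values - substitute your own.
VMID=9012
if command -v qm >/dev/null 2>&1; then
  qm set "$VMID" \
    --ciuser root --cipassword 'S3cretPass' \
    --ipconfig0 ip=192.0.2.10/24,gw=192.0.2.1 \
    --nameserver 1.1.1.1 \
    --searchdomain example.com
  qm cloudinit dump "$VMID" user   # inspect the generated user-data
else
  echo "qm not found - run this on a Proxmox node"
fi
```

`qm cloudinit dump` is a convenient way to verify what the VM will actually receive on first boot.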

## Creating a Template

### Step 1: Create a Base VM

Create a new VM in Proxmox with your desired operating system. When partitioning the system disk, ensure the **root partition is the LAST** one in the partition table so that it can be automatically expanded on first boot.

### Step 2: Enable Root SSH Access

The module uses the `root` user to push cloud-init configuration and manage the VM. Enable root login via SSH inside the template:

```bash
# /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
```

Then restart the SSH daemon:

```bash
systemctl restart sshd
```

### Step 3: Install Cloud-Init

Install cloud-init together with the growroot and utility packages inside the VM:

```bash
# Debian/Ubuntu
apt update && apt install -y cloud-initramfs-growroot cloud-init cloud-utils

# CentOS/RHEL/AlmaLinux
yum install -y cloud-init cloud-utils-growpart

# openSUSE
zypper install cloud-init growpart
```

> `cloud-initramfs-growroot` / `cloud-utils-growpart` is what actually expands the root partition to fill the resized disk during deployment. Without it the client VM will boot with the original template disk size.

### Step 4: Enable Cloud-Init Services

Cloud-init is split into four systemd services — all of them must be enabled so the VM picks up configuration on every boot:

```bash
systemctl enable cloud-init-local.service
systemctl enable cloud-init.service
systemctl enable cloud-config.service
systemctl enable cloud-final.service
```
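A quick check that all four units are actually enabled — run inside the template VM before converting it (here `missing` simply means the unit is not present on the machine you ran it on):

```shell
# Run inside the template VM before converting it to a template.
for unit in cloud-init-local cloud-init cloud-config cloud-final; do
  state=$(systemctl is-enabled "${unit}.service" 2>/dev/null) || state="missing"
  echo "${unit}: ${state}"
done
```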

### Step 5: Clean Up the VM

Before converting to a template, clean up the VM so that every clone starts fresh:

```bash
# Remove default users created during OS install (ubuntu, debian, centos, etc.)
# The module creates the user defined in the product configuration.
userdel -r ubuntu 2>/dev/null || true
userdel -r debian 2>/dev/null || true

# Remove SSH host keys (they will be regenerated on first boot)
rm -f /etc/ssh/ssh_host_*

# Clean cloud-init state
cloud-init clean --logs

# Remove machine ID (will be regenerated)
truncate -s 0 /etc/machine-id
rm -f /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

# Clear logs
find /var/log -type f -exec truncate -s 0 {} \;

# Clear bash history
cat /dev/null > ~/.bash_history && history -c
```

### Step 6: Add Cloud-Init Drive

In the Proxmox web UI:

1. Select the VM
2. Go to **Hardware**
3. Click **Add > CloudInit Drive**
4. Select the storage for the cloud-init drive

### Step 7: Convert to Template

In the Proxmox web UI:

1. Right-click the VM
2. Select **Convert to Template**

## Template Configuration Tips

- **Use a unique VMID** for each template to avoid conflicts
- **Keep templates on shared storage** if you have a multi-node cluster, or use local storage with migration enabled in the module settings
- **Install the QEMU Guest Agent** (`qemu-guest-agent` package) for improved VM management and status reporting
- **Configure serial console** if you want noVNC console access to work properly
- **Minimize the template disk size** — the module can resize disks during deployment, but it cannot shrink them
- **Test the template** by manually cloning it and verifying that cloud-init applies the configuration correctly
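The last tip — testing the template by hand — can look like this on the Proxmox node. The VMIDs are hypothetical (`9012` for the template, `999` for a free test ID); the module performs the equivalent steps via the API during deployment:

```shell
# Run on a Proxmox node. 9012 = template VMID, 999 = free test VMID (hypothetical).
TEMPLATE_VMID=9012
TEST_VMID=999
if command -v qm >/dev/null 2>&1; then
  qm clone "$TEMPLATE_VMID" "$TEST_VMID" --name cloudinit-test --full
  qm set "$TEST_VMID" --ipconfig0 ip=dhcp --ciuser root --cipassword 'test'
  qm start "$TEST_VMID"
  # ...log in, verify hostname/IP/credentials were applied, then clean up:
  # qm stop 999 && qm destroy 999
else
  echo "qm not found - run this on a Proxmox node"
fi
```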

## Pre-built Templates

If you don't want to build templates from scratch, PUQcloud publishes two separate sets of ready-made VM templates as Proxmox backup archives (VMA format, `.vma.zst`):

1. **Pre-built Proxmox OS templates** — custom minimal installations built by PUQcloud from scratch.
2. **Official cloud images with root access** — upstream cloud images (Debian Cloud, Ubuntu Cloud, CentOS GenericCloud) modified so that root SSH login works out of the box.

Both sets are hosted under the same root folder:

- [https://files.puqcloud.com/Proxmox_OS_Templates/](https://files.puqcloud.com/Proxmox_OS_Templates/)

### Download Pre-built Proxmox OS Templates

Custom PUQcloud builds — 5 GB `virtio` disk, no swap, minimal install, root SSH enabled, default password `puqcloud`, timezone Europe/Warsaw.

#### Debian

- [Debian 10](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-10/)
- [Debian 11](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-11/)
- [Debian 12](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-12/)

#### Ubuntu

- [Ubuntu 18](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-18/)
- [Ubuntu 20](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-20/)
- [Ubuntu 22](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-22/)

#### CentOS

- [CentOS 7](https://files.puqcloud.com/Proxmox_OS_Templates/CentOS/CentOS-7/)
- [CentOS 8](https://files.puqcloud.com/Proxmox_OS_Templates/CentOS/CentOS-8/)
- [CentOS 9](https://files.puqcloud.com/Proxmox_OS_Templates/CentOS/CentOS-9/)

#### Proxmox

- [PBS 2.2](https://files.puqcloud.com/Proxmox_OS_Templates/Proxmox/PBS-2-2/)

### Official Cloud Images with Root Access

These are the **upstream cloud images** from Debian, Ubuntu and CentOS, modified by PUQcloud so that root SSH login is enabled out of the box. Use them if you prefer the official distribution builds over custom ones.

> Stock upstream cloud images disable root SSH login by default and create a non-root user (`debian`, `ubuntu`, `centos`). The module requires the `root` account to push cloud-init configuration and run management commands — that's why the "with root access" variants exist.

#### Debian

- [Debian 10](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-10/vzdump-qemu-1010-2024_03_23-17_10_47.vma.zst) — ~245 MB
- [Debian 11](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-11/vzdump-qemu-1011-2024_03_23-17_11_03.vma.zst) — ~275 MB
- [Debian 12](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-12/vzdump-qemu-1012-2024_03_23-17_11_11.vma.zst) — ~298 MB
- [Debian 13](https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-13/vzdump-qemu-1013-2024_03_23-17_11_20.vma.zst) — ~307 MB

#### Ubuntu

- [Ubuntu 20.04](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-20/vzdump-qemu-1021-2024_03_23-18_33_20.vma.zst) — ~893 MB
- [Ubuntu 22.04](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-22/vzdump-qemu-1022-2024_03_23-18_33_39.vma.zst) — ~626 MB
- [Ubuntu 23.10](https://files.puqcloud.com/Proxmox_OS_Templates/Ubuntu/Ubuntu-23/vzdump-qemu-1023-2024_03_23-18_33_52.vma.zst) — ~731 MB

#### CentOS

- [CentOS 7](https://files.puqcloud.com/Proxmox_OS_Templates/CentOS/CentOS-7/vzdump-qemu-1030-2024_03_23-21_12_58.vma.zst) — ~1.01 GB
- [CentOS 9](https://files.puqcloud.com/Proxmox_OS_Templates/CentOS/CentOS-9/vzdump-qemu-1032-2024_03_23-21_13_14.vma.zst) — ~1.10 GB

> File names may change over time. If a direct link returns 404, open the parent folder on [files.puqcloud.com](https://files.puqcloud.com/Proxmox_OS_Templates/) and grab the latest archive for the OS version you need.

### Importing a Pre-built Template

1. Copy the `.vma.zst` file to your Proxmox node, for example into `/var/lib/vz/dump/`:
   ```bash
   cd /var/lib/vz/dump/
   wget https://files.puqcloud.com/Proxmox_OS_Templates/Debian/Debian-12/vzdump-qemu-1012-2024_03_23-17_11_11.vma.zst
   ```
2. Restore the backup to a new VMID (pick a free ID, e.g. `9012`) and target storage (e.g. `local-lvm`):
   ```bash
   qmrestore /var/lib/vz/dump/vzdump-qemu-1012-2024_03_23-17_11_11.vma.zst 9012 --storage local-lvm
   ```
   Alternatively, use the Proxmox web UI: **Datacenter > Storage > Backups > Restore**.
3. Open the restored VM, change the default root password (`puqcloud` for the pre-built set), verify the cloud-init drive is present, then right-click the VM and choose **Convert to Template**.

> **Disclaimer:** Your use of these operating systems is at your own risk. PUQcloud does not guarantee the correct operation or security of the pre-built templates or the root-access cloud images. Always review, update and harden them before offering to clients.

## Supported Operating Systems

The module supports any operating system that Proxmox can run as a KVM virtual machine, provided it has working cloud-init (or cloudbase-init on Windows). Tested and known to work:

- **Linux**: Debian, Ubuntu, CentOS, AlmaLinux, Rocky Linux, openSUSE, Proxmox Backup Server
- **Windows**: Server / Desktop editions with `cloudbase-init` installed

## Configuring Templates in WHMCS

Templates are selected in the product configuration under the **Module Settings** tab. You can also offer multiple templates to clients via Configurable Options, allowing them to choose their preferred operating system during order.


<!-- sync:980b754a38a02082 -->

# Install VNCproxy and noVNC

### Proxmox KVM module **[WHMCS](https://puqcloud.com/link.php?id=77)**
#####  [Order now](https://puqcloud.com/whmcs-module-proxmox-kvm.php) | [Download](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/) | [FAQ](https://faq.puqcloud.com/)

## Preface

The module lets you open a browser-based console to manage a specific KVM virtual machine. The console connection relies on third-party software.

**noVNC** — the open-source VNC client. noVNC is both a VNC client JavaScript library and an application built on top of that library. It runs well in any modern browser, including mobile browsers (iOS and Android).

- Project site: [https://novnc.com](https://novnc.com)
- Project GitHub: [https://github.com/novnc/noVNC](https://github.com/novnc/noVNC)

> Since the console relies on an external project, we take no responsibility for data leaks, compromises, or similar incidents arising from its use.

The PUQ `vncwebproxy` binary itself is written in Go and uses the following libraries:

- [go-vncproxy](https://github.com/evangwt/go-vncproxy) (MIT License)
- [gin](https://github.com/gin-gonic/gin) (MIT License)
- [golang.org/x/net/websocket](https://pkg.go.dev/golang.org/x/net/websocket) (BSD License)

### How it works

The `vncwebproxy` sits between the client browser and your Proxmox server. It terminates the WebSocket from noVNC and forwards traffic to the Proxmox VNC port.

- The proxy must have stable network connectivity to the Proxmox server; TCP ports **5900–5999** to Proxmox are sufficient.
- If you use a **domain name** (not an IP) for the Proxmox server in the WHMCS server settings, that domain must resolve correctly **from the vncproxy host as well**.
- Each console session uses a one-time authentication ticket generated on demand and validated by the Proxmox API before the connection is established.
- All traffic between the client browser and the proxy is encrypted with SSL/TLS.
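Putting these pieces together: the WebSocket path the proxy serves follows the pattern visible in its request log, `/vncproxy/<proxmox-host>/<vnc-port>/<one-time-ticket>`. A sketch of how such a console URL is assembled — all values are hypothetical and the module builds this for you:

```shell
PROXY="https://vncproxy.example.com"       # your proxy behind NGINX (hypothetical)
PVE_HOST="pve1.example.com"                # must also resolve from the proxy host
VNC_PORT=5900                              # one of 5900-5999, chosen per session
TICKET="d91bac199c2ce79392d8e175076e3780"  # one-time ticket from the Proxmox API

WS_PATH="vncproxy/${PVE_HOST}/${VNC_PORT}/${TICKET}"
echo "${PROXY}/${WS_PATH}"
```

Because the ticket is single-use and short-lived, a leaked URL cannot be replayed later.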

## Public PUQcloud proxy (default)

If you have any difficulties setting up your own proxy, you can use the public PUQcloud vncproxy server. **However, we strongly recommend setting up and using your own vncproxy server** — this way you retain full control over performance and security.

| Setting | Value |
|---------|-------|
| noVNC WEB proxy server | `vncproxy.puqcloud.com` |
| noVNC WEB proxy key | `puqcloud` |
| WEB ports | `80` / `443` |
| VNC ports | `5900–5999` |

These values go into the WHMCS product settings under **Module Settings → Integrations Configuration**:

| Setting | Description |
|---------|-------------|
| **noVNC Proxy Domain** | The URL of your noVNC proxy (e.g. `https://vncproxy.puqcloud.com`) |
| **noVNC Proxy Key** | Authentication key configured on the proxy (e.g. `puqcloud`) |

## Installation process — your own VNCproxy server

The sections below describe the full installation of a dedicated vncproxy server. The example uses **Debian 11** and the domain `vncproxy.puqcloud.com` — in your own deployment, substitute your domain everywhere.

### Step 1: Domain definition

First, choose a domain name for the vncproxy server (in our example: `vncproxy.puqcloud.com`). Create an `A`/`AAAA` record in your DNS pointing to the server's public IP address. Wait until the record propagates before requesting the SSL certificate.

### Step 2: Prepare the server

Provision a VM or dedicated host with your favorite Linux distribution — the example uses **Debian 11**. Make sure the server can reach your Proxmox nodes on TCP ports `5900–5999`, and that inbound ports `80/443` are open for clients.
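Reachability of the VNC port range can be spot-checked with `nc` before going further (the node name is hypothetical — substitute your own):

```shell
PVE_NODE="pve1.example.com"   # hypothetical Proxmox node - substitute your own
# -z: connect-scan only, -w 3: three-second timeout
nc -z -w 3 "$PVE_NODE" 5900 \
  && echo "port 5900 reachable" \
  || echo "port 5900 NOT reachable - check routing and firewalls"
```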

Update the package database:

```bash
sudo apt update
```

Install the NGINX web server, Certbot, `zip` and `unzip` (needed to extract the noVNC archive in the next step):

```bash
sudo apt install certbot nginx python3-certbot-nginx zip unzip -y
```

### Step 3: Download the noVNC client

```bash
cd /root/
wget https://github.com/novnc/noVNC/archive/refs/tags/v1.3.0.zip
unzip v1.3.0.zip
cp -R noVNC-1.3.0/* /var/www/html/
rm v1.3.0.zip
rm -r noVNC-1.3.0/
```

After this step, opening `http://vncproxy.puqcloud.com/vnc.html` will load the noVNC client page.

### Step 4: Generate an SSL certificate with Certbot

```bash
certbot --nginx -d vncproxy.puqcloud.com
```

To renew the certificate automatically, add a cron job:

```bash
crontab -e
```

```
0 12 * * * /usr/bin/certbot renew --quiet
```

### Step 5: NGINX virtual host configuration

Edit the default site configuration:

```bash
nano /etc/nginx/sites-available/default
```

Use the following config — remember to replace `vncproxy.puqcloud.com` with your own domain:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;
    server_name vncproxy.puqcloud.com; # managed by Certbot

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/vncproxy.puqcloud.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/vncproxy.puqcloud.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location /vncproxy {
        proxy_pass http://127.0.0.1:8080/vncproxy;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header    X-Real-IP        $remote_addr;
        proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
    }
}

server {
    if ($host = vncproxy.puqcloud.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 ;
    listen [::]:80 ;
    server_name vncproxy.puqcloud.com;
    return 404; # managed by Certbot
}
```

Restart NGINX:

```bash
service nginx restart
```

### Step 6: Install the `vncwebproxy` binary

Download the PUQ `vncwebproxy` binary from the official download server and make it executable:

```bash
apt-get install screen -y
cd /root/
wget https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/vncproxy/vncwebproxy
chmod +x vncwebproxy
```

### Step 7: Run the proxy

Run the binary inside a `screen` session so it keeps running in the background. The **first argument** is a unique key — exactly the value you will later put into the **noVNC Proxy Key** field in the WHMCS module.

```bash
screen
./vncwebproxy puqcloud
```

After a successful launch you can watch the request log directly in the console:

```
root@vncproxy:~# ./vncwebproxy puqcloud
[./vncwebproxy puqcloud]
proxmox-test.uuq.pl59002022/09/11 19:11:08 [vncproxy][debug] ServeWS
2022/09/11 19:11:08 [vncproxy][debug] request url: /vncproxy/proxmox-test.uuq.pl/5900/d91bac199c2ce79392d8e175076e3780
2022/09/11 19:11:13 [vncproxy][info] close peer
[GIN] 2022/09/11 - 19:11:13 | 200 |  4.740249024s |   79.184.10.217 | GET      "/vncproxy/proxmox-test.uuq.pl/5900/d91bac199c2ce79392d8e175076e3780"
```

Detach from `screen` with `Ctrl+A` then `D`. Reattach later with `screen -r`.
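Instead of `screen`, the proxy can also survive reboots via a small systemd unit — a minimal sketch, assuming the binary lives in `/root/vncwebproxy` and the key is `puqcloud`. Save it as `/etc/systemd/system/vncwebproxy.service`, then run `systemctl daemon-reload && systemctl enable --now vncwebproxy`:

```
[Unit]
Description=PUQ vncwebproxy for noVNC console
After=network-online.target

[Service]
ExecStart=/root/vncwebproxy puqcloud
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```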

### Step 8: Configure WHMCS

In the WHMCS product settings, under **Module Settings → Integrations Configuration**, fill in:

- **noVNC Proxy Domain** → `https://vncproxy.your-domain.tld`
- **noVNC Proxy Key** → the key you passed to `./vncwebproxy` (in our example: `puqcloud`)

Save the product and try opening the console from the client area.

## Client Access

When noVNC is configured, clients see a **Console** button in their VM management area. Clicking it opens a new browser window with the noVNC console, providing full keyboard and mouse access to the virtual machine.

![noVNC connecting](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-gtpk0jvv.png)

## Security

The security configuration of the vncproxy server should meet your own standards. A few mandatory points:

- Allow inbound TCP **80/443** from the internet (clients need HTTPS access to noVNC).
- Allow outbound TCP **5900–5999** from the vncproxy host to your Proxmox nodes.
- Keep the OS, NGINX and the `vncwebproxy` binary up to date.
- Each console session uses a **one-time ticket** — tickets are generated on demand, expire after a short period, and are validated against the Proxmox API before the connection is established.
- All traffic between the client browser and the proxy is encrypted via SSL/TLS (Let's Encrypt certificate).

Do not forget that for correct operation you must allow HTTPS to the proxy and outgoing connections from the proxy to the Proxmox server.


<!-- sync:57aa6730acbb5a49 -->

# Email Templates

### Proxmox KVM module **[WHMCS](https://puqcloud.com/link.php?id=77)**
#####  [Order now](https://puqcloud.com/whmcs-module-proxmox-kvm.php) | [Download](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/) | [FAQ](https://faq.puqcloud.com/)

## Overview

The module uses five email templates to notify clients about VM lifecycle events. These templates must be created manually in WHMCS before the module can send the corresponding notifications.

## Creating Email Templates in WHMCS

1. Navigate to **Setup > Email Templates** in the WHMCS admin area
2. Click **Create New Email Template**
3. Set the **Type** to **Product/Service**
4. Set the **Unique Name** to the exact template name listed below
5. Fill in the subject line and email body using the available variables
6. Click **Save Changes**

Repeat for all five templates.

> **Important:** The template unique names must match exactly as listed below. The module looks up templates by their unique name, so any mismatch will prevent the email from being sent.

## Email Templates

### 1. puqProxmoxVKM Welcome Email

> **Note the spelling.** The original template name is literally `puqProxmoxVKM Welcome Email` (with `VKM`, not `KVM`). It has stayed like this since v1.0 for backwards compatibility — if you rename it the module will stop sending the welcome email.

Sent to the client as an order confirmation when a new Proxmox KVM service is created. At this point the VM is not yet ready — this email just tells the client their order has been accepted.

**When sent:** Upon service creation / order acceptance, before the deploy pipeline runs.

**Unique Name:** `puqProxmoxVKM Welcome Email`
**Email Type:** Product/Service
**Subject:** `Virtual Machine Order Information`

**Body:**

```
Dear {$client_name},

Your order has been accepted for implementation.
Installing and pre-configuring the virtual machine will take some time.
Please wait for an e-mail with information that the virtual machine is ready for use, also with access parameters.

Product/Service: {$service_product_name}
Payment Method: {$service_payment_method}
Amount: {$service_recurring_amount}
Billing Cycle: {$service_billing_cycle}
Next Due Date: {$service_next_due_date}

Important note - if you have also purchased the backup options, do not forget to configure the schedule in the service's subpage.

Thank you for choosing us.

{$signature}
```

---

### 2. puqProxmoxKVM VM is ready

Sent to the client when a new virtual machine has been successfully deployed and is ready for use. **This is the most important template** — it contains the VM access credentials (IP, username, password).

**When sent:** After the deploy pipeline reaches the `ready` state and the VM is confirmed running.

**Unique Name:** `puqProxmoxKVM VM is ready`
**Email Type:** Product/Service
**Subject:** `Virtual Machine is ready`

**Body:**

```
Dear {$client_name},

Your virtual machine is already ready.
You can connect to it using data.

Product/Service: {$service_product_name}
Payment Method: {$service_payment_method}
Amount: {$service_recurring_amount}
Billing Cycle: {$service_billing_cycle}
Next Due Date: {$service_next_due_date}

IP address: {$service_dedicated_ip} or {$service_domain}
Username: {$service_username}
Password: {$service_password}

Thank you for choosing us.

{$signature}
```

---

### 3. puqProxmoxKVM Reset password

Sent to the client after a password reset operation on their virtual machine.

**When sent:** After a password reset is performed via the admin area or client area — once cloud-init has re-applied the new credentials and the VM has been restarted.

**Unique Name:** `puqProxmoxKVM Reset password`
**Email Type:** Product/Service
**Subject:** `Reset password is ready`

**Body:**

```
Dear {$client_name},

Password reset successful.

IP address: {$service_dedicated_ip} or {$service_domain}
Username: {$service_username}
Password: {$service_password}

Thank you for choosing us.

{$signature}
```

---

### 4. puqProxmoxKVM Backup restored

Sent to the client after a backup has been successfully restored.

**When sent:** After a backup restore operation completes and the module has re-applied CPU/RAM/disk/network settings and restarted the VM.

**Unique Name:** `puqProxmoxKVM Backup restored`
**Email Type:** Product/Service
**Subject:** `Backup restored successful`

**Body:**

```
Dear {$client_name},

Backup restored successful.

Product/Service: {$service_product_name}
Payment Method: {$service_payment_method}
Amount: {$service_recurring_amount}
Billing Cycle: {$service_billing_cycle}
Next Due Date: {$service_next_due_date}

IP address: {$service_dedicated_ip} or {$service_domain}

Thank you for choosing us.

{$signature}
```

---

### 5. puqProxmoxKVM Upgrade Email

Sent to the client after a package upgrade or downgrade completes.

**When sent:** After the `change_package` state machine finishes applying the new product parameters to the VM and starts it back up.

**Unique Name:** `puqProxmoxKVM Upgrade Email`
**Email Type:** Product/Service
**Subject:** `Virtual Machine upgrade is ready`

**Body:**

```
Dear {$client_name},

Virtual Machine upgrade is successful.

Product/Service: {$service_product_name}
Payment Method: {$service_payment_method}
Amount: {$service_recurring_amount}
Billing Cycle: {$service_billing_cycle}
Next Due Date: {$service_next_due_date}

IP address: {$service_dedicated_ip} or {$service_domain}

Thank you for choosing us.

{$signature}
```

> The bodies above are the original PUQcloud templates shipped with v1.0 through v3.0. You are free to customize subjects and bodies to match your brand — only the **Unique Name** and **Email Type** must stay as documented, because the module looks up templates by their unique name.

## Available Template Variables

The following merge fields are available in all email templates:

### Standard WHMCS Variables

| Variable | Description |
|----------|-------------|
| `{$client_name}` | Client's full name |
| `{$service_product_name}` | Product/service name |
| `{$service_dedicated_ip}` | Primary IPv4 address assigned to the VM |
| `{$service_domain}` | Service domain / hostname |
| `{$service_username}` | Operating system username |
| `{$service_password}` | Operating system password |
| `{$service_recurring_amount}` | Recurring billing amount |
| `{$service_billing_cycle}` | Billing cycle (Monthly, Quarterly, etc.) |
| `{$service_next_due_date}` | Next payment due date |
| `{$signature}` | Email signature configured in WHMCS |

### Module-Specific Variables

| Variable | Description |
|----------|-------------|
| `{$service_assigned_ips}` | All assigned IP addresses |
| `{$vm_id}` | Proxmox virtual machine ID (VMID) |
| `{$server_hostname}` | Proxmox server hostname |

In addition, all standard WHMCS client and service merge fields are available.


<!-- sync:dcd6b3bbff73588b -->

# Cron Configuration

### Proxmox KVM module **[WHMCS](https://puqcloud.com/link.php?id=77)**
#####  [Order now](https://puqcloud.com/whmcs-module-proxmox-kvm.php) | [Download](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/) | [FAQ](https://faq.puqcloud.com/)

## Overview

The module requires a cron system to process VM deployments, package changes, backups, and other automated tasks. Two cron modes are available, and you can choose the one that best fits your environment.

## Cron Modes

### Mode 1: WHMCS Hook (Default)

In this mode, the module hooks into the standard WHMCS cron and executes its tasks automatically each time the WHMCS cron runs.

**Advantages:**
- No additional configuration required
- Works out of the box after module activation
- Uses the existing WHMCS cron schedule

**When to use:** This is the recommended mode for most installations. If your WHMCS cron runs every 5 minutes (the standard recommendation), this provides timely task execution.

No additional crontab entries are needed. Just ensure the standard WHMCS cron is running:

```bash
*/5 * * * * php -q /path/to/whmcs/cron/cron.php
```

### Mode 2: Standalone

In this mode, the module's cron runs independently from the WHMCS cron via a separate crontab entry. This gives you independent control over the module's cron frequency.

**Advantages:**
- Independent schedule from WHMCS cron
- Can run more frequently for faster VM provisioning
- Useful if your WHMCS cron runs less frequently

**When to use:** Use standalone mode if you need the module to process tasks more frequently than your WHMCS cron runs, or if you want to separate the module's workload from the main WHMCS cron.

To set up standalone cron, add the following to your server's crontab:

```bash
*/5 * * * * php -q /path/to/whmcs/modules/addons/puq_proxmox_kvm/cron.php
```

![Standalone cron settings](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-jprbeb1w.png)

## Configuring Cron Mode

The cron mode is configured in the addon settings:

1. Navigate to **Addons > PUQ Proxmox KVM**
2. Go to the **Settings** page
3. Select the **Cron** tab
4. Choose your preferred cron mode: **WHMCS Hook** or **Standalone**
5. Save settings

![Cron settings page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-h061ooh4.png)

## Task Intervals

Each cron task has a configurable interval that controls how often it runs. These intervals can be adjusted in the Cron settings page. For details on individual tasks and their intervals, see the [Cron and Automation](../07-cron-and-automation/_chapter.md) section.

## Verifying Cron Operation

To confirm the cron is running correctly:

1. Navigate to the addon **Settings > Cron** page
2. Check the **Last Run** timestamp for each task
3. Verify there are no stale lock files

If tasks are not executing, check:

- The WHMCS cron is running (for Hook mode)
- The standalone crontab entry is correct (for Standalone mode)
- PHP CLI is available at the path specified in the crontab
- File permissions allow the cron script to execute
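In standalone mode, the quickest check is running the crontab entry by hand and watching for PHP errors (use the same path as in your crontab):

```shell
# Use the same path as in your crontab entry.
CRON="/path/to/whmcs/modules/addons/puq_proxmox_kvm/cron.php"
if [ -f "$CRON" ]; then
  php -q "$CRON"
else
  echo "cron.php not found at $CRON - fix the path first"
fi
```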


<!-- sync:bfa1089dee48f532 -->

# Migration from PUQ Customization

## Overview

Version 3.0 introduces a standalone addon module (`puq_proxmox_kvm`) that replaces the PUQ Customization extension (`ModulePuqProxmoxKVM`).

## Migration Steps

### 1. Install New Modules

Upload both modules to your WHMCS installation:
- `modules/servers/puqProxmoxKVM/` (updated server module v3.0)
- `modules/addons/puq_proxmox_kvm/` (new addon module)

### 2. Activate Addon

1. Go to **Setup > Addon Modules**
2. Activate **PUQ Proxmox KVM**
3. Enter your license key

### 3. Automatic Data Migration

On activation, the addon automatically:
- Creates new database tables (`puq_proxmox_kvm_ip_pools`, `puq_proxmox_kvm_dns_zones`)
- Detects old tables (`puq_customization_module_puq_proxmox_kvm_ip_pools`, `puq_customization_module_puq_proxmox_kvm_dns_zones`)
- Copies all data from old tables to new tables (if new tables are empty)

### 4. Verify Migration

1. Open **Addons > PUQ Proxmox KVM**
2. Check that all IP Pools are present
3. Check that all DNS Zones are present
4. Verify Services Summary shows correct data
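Row counts can also be compared directly in the database using the table names from step 3 — a sketch against the WHMCS database (credentials are hypothetical); the two counts should match after migration:

```shell
OLD_TABLE="puq_customization_module_puq_proxmox_kvm_ip_pools"
NEW_TABLE="puq_proxmox_kvm_ip_pools"
SQL="SELECT '$OLD_TABLE' AS src, COUNT(*) AS total FROM $OLD_TABLE
     UNION ALL
     SELECT '$NEW_TABLE', COUNT(*) FROM $NEW_TABLE;"
# Hypothetical DB credentials - substitute your WHMCS database settings.
command -v mysql >/dev/null 2>&1 \
  && mysql -u whmcs -p whmcs -e "$SQL" \
  || echo "mysql client not found - run the SQL via phpMyAdmin instead"
```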

### 5. Deactivate Old Extension

Once verified:
1. Go to PUQ Customization addon
2. Deactivate the ModulePuqProxmoxKVM extension

> **Important:** The server module v3.0 supports both the new and old addon simultaneously, so services will continue working during migration.

## Backward Compatibility

The updated server module (v3.0) uses a dual-path approach:
1. First checks for the new standalone addon (`puq_proxmox_kvm`)
2. Falls back to the old PUQ Customization extension if the new addon is not found

This ensures zero downtime during the migration process.


<!-- sync:fcfa581e65552325 -->

# Addon Module

The `puq_proxmox_kvm` addon module is a required companion to the server module. It provides a central dashboard, IP address pool management, DNS zone management (Cloudflare, HestiaCP, PowerDNS), a VM management view with deploy logs, and cron task orchestration. The addon must be installed and activated for the server module to work.

# Dashboard

### Proxmox KVM module **[WHMCS](https://puqcloud.com/link.php?id=77)**
#####  [Order now](https://puqcloud.com/whmcs-module-proxmox-kvm.php) | [Download](https://download.puqcloud.com/WHMCS/servers/PUQ_WHMCS-Proxmox-KVM/) | [FAQ](https://faq.puqcloud.com/)

The addon module dashboard provides a quick overview of all managed resources.

![Addon Dashboard](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-muzwkkji.png)

## Dashboard Cards

| Card | Description |
|------|-------------|
| **IP Pools** | Total number of configured IP address pools |
| **DNS Zones** | Total number of configured DNS zones |
| **KVM Services** | Total number of active KVM services |
| **VM Management** | Link to the centralized VM management page |
| **Settings** | Link to the module settings |

## Navigation

The top navigation bar provides access to all addon sections:

- **Home** — Dashboard (this page)
- **IP Pools** — IPv4/IPv6 address pool management
- **DNS Zones** — DNS zone configuration
- **VM Management** — Centralized VM monitoring and management
- **Settings** — General settings, cron configuration
- **Help** — Links to documentation and support


<!-- sync:77a8a4e2a18e9784 -->

# IP Pools


IP Pools allow you to manage blocks of IPv4 and IPv6 addresses that are automatically assigned to virtual machines during provisioning.

> **Changed in v3.0.** IP Pools are now a first-class feature of the dedicated `puq_proxmox_kvm` addon. In v1.3–v2.x the same pool management lived in the separate **PUQ Customization** addon, which is no longer required — on first activation the new addon **automatically imports** all pools from the legacy `puq_customization_ip_pools` tables, including their allocations and per-server assignments. If you are migrating from an older version you do not need to recreate the pools by hand.

> **Legacy alternative (still supported).** If you prefer not to use pools at all, you can still define the IP addresses directly on the WHMCS server entry using the pipe-delimited **Assigned IP Addresses** format — see [Create new server for Proxmox in WHMCS](../03-installation-and-configuration/04-create-new-server-for-proxmox-in-whmcs.md). The server module will pick an IP from whichever source has free entries (pools first, then the legacy list).

## IP Pools List

Navigate to **Addons > PUQ Proxmox KVM > IP Pools** to view all configured pools.

![IP Pools list with rDNS zone hints](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-ozsnnvz9.png)

> **New in v3.2.** The Addresses column now shows the ready-made **rDNS zone name** that corresponds to each pool's prefix — copy it directly into the [DNS Zones](03-dns-zones.md) form when you want reverse DNS for that pool's IPs. No need to compute nibble reversals by hand. Both IPv4 (`/8`, `/16`, `/24`) and IPv6 (any nibble-aligned prefix) are supported.

The table displays:

| Column | Description |
|--------|-------------|
| **ID** | Pool identifier |
| **Server** | Associated Proxmox server |
| **Type** | IPv4 or IPv6 |
| **Bridge** | Network bridge (e.g., `vmbr0`) |
| **Vlan** | VLAN tag (0 = no VLAN) |
| **Gateway** | Default gateway address |
| **Mask** | Subnet mask |
| **Addresses** | Total IPs in pool |
| **Usage** | Visual bar showing allocated vs available |
| **Actions** | Edit / Delete buttons |

## Adding an IP Pool

Click **+ Add IP Pool** to open the creation dialog.

![Add IP Pool modal](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-fzi9l1mu.png)

Fill in the following fields:

| Field | Description | Example |
|-------|-------------|---------|
| **Server** | Select the Proxmox server | `pve-waw1` |
| **Type** | IPv4 or IPv6 | `IPv4` |
| **Bridge** | Network bridge name | `vmbr0` |
| **Vlan** | VLAN tag (0 for untagged) | `0` |
| **Gateway** | Default gateway address | `192.168.130.1` |
| **Mask** | Subnet mask (1-32 for IPv4, 1-128 for IPv6) | `24` |
| **DNS 1** | Primary DNS server | `8.8.8.8` |
| **DNS 2** | Secondary DNS server | `1.1.1.1` |
| **Address Start** | First IP in the range | `192.168.130.2` |
| **Address Stop** | Last IP in the range | `192.168.130.254` |
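The **Address Start** / **Address Stop** pair defines an inclusive range. A quick way to sanity-check a planned pool before entering it (a standalone sketch using Python's stdlib `ipaddress`; nothing here touches the module):

```python
import ipaddress

def pool_size(start, stop):
    """Inclusive number of addresses between Address Start and Address Stop."""
    a, b = ipaddress.ip_address(start), ipaddress.ip_address(stop)
    if a.version != b.version or int(b) < int(a):
        raise ValueError("start/stop must be the same family, start <= stop")
    return int(b) - int(a) + 1

# The example IPv4 pool from the table above
assert pool_size("192.168.130.2", "192.168.130.254") == 253

# The gateway should sit inside the pool's subnet (/24 here)
net = ipaddress.ip_network("192.168.130.0/24")
assert ipaddress.ip_address("192.168.130.1") in net
```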

## Editing an IP Pool

Click the **Edit** button next to any pool to modify its settings.

![Edit IP Pool modal](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-0iimgxoc.png)

> **Note:** Modifying a pool does not affect already-assigned IP addresses. Changes only apply to new allocations.

## IP Allocation Process

IPs are automatically allocated from pools during VM provisioning when:

1. The server has no assigned IPs configured in WHMCS server settings
2. The addon module is installed and activated
3. The product's network configuration has **Auto bridge/VLAN** enabled

The system selects IPs from pools matching the server associated with the product. IPv4 and IPv6 addresses are allocated from separate pools.

## Validation Rules

- Bridge must be a valid Proxmox bridge name
- Gateway must match the pool type (IPv4 for IPv4 pools, IPv6 for IPv6 pools)
- Server must be a valid Proxmox server configured in WHMCS
- Address range must be valid for the given type
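The rules above can be restated as a small validator (an illustrative re-implementation using field names from the Add IP Pool form; this is not the module's own code, and the bridge/server checks, which need live Proxmox and WHMCS data, are omitted):

```python
import ipaddress

def validate_pool(pool):
    """Check the type-dependent rules for an IP pool definition."""
    errors = []
    want = 4 if pool["type"] == "IPv4" else 6
    try:
        gw = ipaddress.ip_address(pool["gateway"])
        if gw.version != want:
            errors.append("gateway family does not match pool type")
    except ValueError:
        errors.append("gateway is not a valid IP address")
    max_mask = 32 if want == 4 else 128           # 1-32 for IPv4, 1-128 for IPv6
    if not 1 <= int(pool["mask"]) <= max_mask:
        errors.append("mask out of range")
    start = ipaddress.ip_address(pool["address_start"])
    stop = ipaddress.ip_address(pool["address_stop"])
    if start.version != want or stop.version != want or int(stop) < int(start):
        errors.append("address range invalid for the pool type")
    return errors

ok = {"type": "IPv4", "gateway": "192.168.130.1", "mask": 24,
      "address_start": "192.168.130.2", "address_stop": "192.168.130.254"}
assert validate_pool(ok) == []
# An IPv6 gateway on an IPv4 pool is rejected
assert validate_pool(dict(ok, gateway="2001:db8::1")) == [
    "gateway family does not match pool type"]
```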


<!-- sync:e752a690abea8bce -->

# DNS Zones


DNS Zones enable automatic management of forward (A/AAAA) and reverse (PTR) DNS records for every virtual machine the module provisions. Configure your zones once and the module takes care of creation on deploy, refresh on package change, and cleanup on termination — across all three supported providers.

> **Changed in v3.2.** Added native PowerDNS support, asynchronous DNS record creation, live cron output, and automatic reverse-zone hints on the IP Pools page. DNS errors are fully non-blocking — a misconfigured or unreachable provider never stops deployment, package change, or termination. Credentials are no longer echoed back to the browser.

## DNS Zones list

Navigate to **Addons → PUQ Proxmox KVM → DNS Zones**.

![DNS Zones list with multiple providers and reverse zones](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-e6e92rsg.png)

Each zone line shows: internal ID, the zone name, the provider type badge, and per-row actions (**Test** connection, **Edit**, **Delete**).

You can add any number of zones. When a VM is deployed, the module matches the VM's FQDN and every assigned IP against all configured zones and writes to **every** matching zone.

## Supported providers

The module supports three DNS providers. You can mix them freely — forward zones on Cloudflare, reverse zones on PowerDNS, legacy zones on HestiaCP, all at the same time.

| Provider | Forward (A/AAAA) | Reverse (PTR) | API style |
|---|---|---|---|
| **Cloudflare** | yes | yes | REST v4, bearer token |
| **HestiaCP** | yes | yes | custom CLI-over-HTTP, admin user + password |
| **PowerDNS** | yes | yes | Authoritative Server REST API, `X-API-Key` |

## Adding a DNS zone

Click **+ Add DNS Zone**. Choose the provider in the Type dropdown; the form fields change to match the provider.

![Add DNS Zone — Cloudflare](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-dmhtddy1.png)

### Cloudflare

| Field | Description |
|-------|-------------|
| **Zone** | The zone name as it appears in Cloudflare (e.g., `example.com` for forward, `130.168.192.in-addr.arpa` for reverse). |
| **Type** | `Cloudflare` |
| **Account ID** | Cloudflare Account ID from the Cloudflare dashboard. |
| **Zone ID** | Cloudflare Zone ID from the zone's Overview page. |
| **API Token** | Cloudflare API token scoped to DNS edit on this zone. |

### HestiaCP

| Field | Description |
|-------|-------------|
| **Zone** | Domain name as configured on the HestiaCP server. |
| **Type** | `HestiaCP` |
| **Server URL** | HestiaCP URL, e.g., `https://hestia.example.com:8083/`. The trailing slash is added automatically if omitted. |
| **Admin User** | HestiaCP admin username. |
| **Admin Password** | HestiaCP admin password. |
| **User** | HestiaCP user that owns the DNS zone. |

### PowerDNS

| Field | Description |
|-------|-------------|
| **Zone** | Zone name as configured in PowerDNS (`example.com.` for forward, `8.b.d.0.1.0.0.2.ip6.arpa.` for IPv6 reverse). Trailing dots are normalized automatically. |
| **Type** | `PowerDNS` |
| **Server URL** | PowerDNS REST API base URL, e.g., `https://pdns.example.com:8081`. |
| **API Key** | Value of the `X-API-Key` header configured in `pdns.conf`. |

Make sure the PowerDNS API is enabled in your `pdns.conf`:

```
api=yes
api-key=<your-key-here>
webserver=yes
webserver-address=0.0.0.0
webserver-port=8081
webserver-allow-from=127.0.0.1,<whmcs-ip>
```

PowerDNS uses RRset-based updates: records are added or replaced atomically per name+type. PTR/CNAME/NS content is automatically wrapped with a trailing dot to satisfy PowerDNS's strict validation.
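An RRset replace can be sketched as the JSON body the Authoritative Server API expects for `PATCH /api/v1/servers/localhost/zones/<zone>` (a hand-built illustration; the record names and values are examples, not module output):

```python
def rrset_replace(name, rtype, content, ttl=300):
    """Build a PowerDNS 'REPLACE' changetype payload for one name+type.

    Names and PTR/CNAME/NS content must be absolute (trailing dot).
    """
    if not name.endswith("."):
        name += "."
    if rtype in ("PTR", "CNAME", "NS") and not content.endswith("."):
        content += "."          # the dot PowerDNS validation insists on
    return {"rrsets": [{
        "name": name, "type": rtype, "ttl": ttl, "changetype": "REPLACE",
        "records": [{"content": content, "disabled": False}],
    }]}

# This dict would be serialized as the JSON body of the PATCH request,
# sent with the X-API-Key header configured in pdns.conf.
body = rrset_replace("2.130.168.192.in-addr.arpa", "PTR", "vm-1.example.com")
assert body["rrsets"][0]["records"][0]["content"] == "vm-1.example.com."
```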

## Forward vs reverse zones

The module does not distinguish between "forward" and "reverse" zones in the UI — there is just one **Zone** field. What makes a zone forward or reverse is simply its **name**:

- **Forward zone**: a regular domain name. Example: `puqcloud.com`.
  - Stores A records (IPv4 → hostname) and AAAA records (IPv6 → hostname).
  - Needed for clients to reach their VM by DNS name.
- **Reverse IPv4 zone**: ends in `.in-addr.arpa`. Example: `130.168.192.in-addr.arpa` (covers the `192.168.130.0/24` network).
  - Stores PTR records (IP → hostname).
  - Needed for outbound mail, rDNS verification, PTR lookups.
- **Reverse IPv6 zone**: ends in `.ip6.arpa`. Example: `0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa` (covers `2001:db8::/120`).
  - Stores PTR records for IPv6.

You can add **multiple zones of different providers with the same name**. For example, if you run a primary PowerDNS and a secondary HestiaCP, add two zones: `puqcloud.com / PowerDNS` and `puqcloud.com / HestiaCP`. The module will push every forward record to both on deploy and remove from both on termination.

## Finding the reverse-zone name for a pool

You don't have to compute the reverse zone name for an IP pool by hand. The addon does it for you on the IP Pools page: the required **rDNS zone** for each pool is shown on the second line in the Addresses column, and live in the add/edit modal as you type the prefix:

![IP Pools page with rDNS zone hint for each pool](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-5fulqh80.png)

Example from the screenshot:

- Pool `192.168.130.2 - 192.168.130.30` (gateway `192.168.130.1`, mask `/24`) → **rDNS zone `130.168.192.in-addr.arpa`**
- Pool `2001:db8::2 - 2001:db8::50` (gateway `2001:db8::1`, mask `/120`) → **rDNS zone `0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa`**

Copy that value straight into the **Zone** field when adding a DNS zone for reverse records. The pool itself doesn't need any DNS configuration — the module only uses this name to know where PTR records should go, and only if such a zone actually exists in DNS Zones.

### Prefix alignment

- **IPv4** reverse zones work naturally for `/8`, `/16`, `/24` prefixes. Non-octet-aligned prefixes (e.g., `/22`, `/28`) require "classless delegation" — a more advanced DNS setup described in RFC 2317 — which is beyond the module's scope. Pools with such prefixes show `classless delegation required` instead of a ready-made zone name.
- **IPv6** reverse zones work for any nibble-aligned prefix (multiple of 4): `/4`, `/8`, `/12`, `/16`, … `/124`, `/128`. The module supports the full range.
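Both hints can be reproduced with Python's stdlib `ipaddress` (a standalone sketch of the same octet/nibble arithmetic; it is not how the addon computes them):

```python
import ipaddress

def rdns_zone(cidr):
    """rDNS zone name for an octet-aligned IPv4 or nibble-aligned IPv6 prefix."""
    net = ipaddress.ip_network(cidr)
    step = 8 if net.version == 4 else 4          # bits per reverse-DNS label
    if net.prefixlen % step:
        return "classless delegation required"   # e.g. IPv4 /22 or /28
    drop = (net.max_prefixlen - net.prefixlen) // step
    labels = net.network_address.reverse_pointer.split(".")
    return ".".join(labels[drop:])               # drop the host-part labels

assert rdns_zone("192.168.130.0/24") == "130.168.192.in-addr.arpa"
assert rdns_zone("2001:db8::/120") == (
    "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa")
assert rdns_zone("10.0.0.0/22") == "classless delegation required"
```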

## How DNS automation works

When a VM transitions through state changes, the following DNS operations run automatically:

| Event | Forward records | Reverse records |
|---|---|---|
| **Deploy** (state `clone → set_dns`) | Create A and/or AAAA for `<vmname>.<domain>` | Create PTR for every assigned IP |
| **Change package** (state `change_package → cp_update_ip`) | Delete all, then recreate — reflects any IP changes | Delete all, then recreate |
| **Terminate** | Delete forward records for the VM's FQDN | Delete PTR records for every assigned IP |
| **Set DNS records admin button** | Delete + recreate — forces a full resync | Delete + recreate |

The main domain used for forward records comes from product configuration: **Admin → Products → [your product] → Module Settings → Integrations → Main domain**. For example, if the main domain is `puqcloud.com` and the VM's internal name is `5551-1776530141`, the FQDN registered in DNS is `5551-1776530141.puqcloud.com`.

### Zone matching

For a given DNS operation the module walks every configured zone and checks whether the record name would fit that zone:

- A forward `vm-123.puqcloud.com` matches any zone whose name is a suffix: `puqcloud.com`, for example. It does not match `puq.com` or `example.com`.
- A reverse PTR `10.1.168.192.in-addr.arpa` matches `1.168.192.in-addr.arpa`, `168.192.in-addr.arpa`, or `192.in-addr.arpa` — any level of reverse delegation.
- An IPv6 PTR matches any `*.ip6.arpa` zone that is a proper suffix of the full 32-nibble reverse name.

Zones that don't match a given VM's records are simply skipped — there's no error.
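The matching rule is a dot-boundary suffix test, sketched here against the examples above (an illustrative re-implementation, not the module's code):

```python
def zone_matches(record_name, zone_name):
    """True when record_name belongs to zone_name (equal, or a suffix on a label boundary)."""
    record_name = record_name.rstrip(".").lower()
    zone_name = zone_name.rstrip(".").lower()
    return record_name == zone_name or record_name.endswith("." + zone_name)

# Examples from the rules above
assert zone_matches("vm-123.puqcloud.com", "puqcloud.com")
assert not zone_matches("vm-123.puqcloud.com", "puq.com")   # no label boundary
assert zone_matches("10.1.168.192.in-addr.arpa", "168.192.in-addr.arpa")
```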

### Non-blocking errors

Every per-zone and per-record operation is wrapped in its own try/catch. If a zone's provider is down, credentials are wrong, or a specific record creation fails:

- The error is logged to **Utilities → Logs → Module Log** with full context.
- A live cron output shows `fwd ERR` / `rev ERR` for that specific operation.
- **Deploy / change package / terminate continues** — the next zone, the next record, and the next pipeline step all run.
- When errors occur, a summary entry is written to the WHMCS module log so admins can audit failures after the fact.

This is by design: a DNS outage must not block a client from getting their VM. You can always run **Set DNS records** later from the admin service page once the DNS provider is back online (see below).

## Set DNS records admin button

On a service's admin page in WHMCS, the module exposes a **Set DNS records** button under Module Commands. It performs a full DNS resync for that specific VM: delete every existing forward and reverse record, then recreate them from the VM's current IPs and domain.

Starting with v3.2 this runs **asynchronously**. Clicking the button queues the job (sets `vm_status = 'set_dns_records'`) and returns immediately. The next cron tick picks up the VM and runs the full delete + create cycle with live output — useful for services with dozens of reverse records where the synchronous version used to time out.

The progress shows up in VM Management → Log modal and in the cron stdout just like during deploy.

## Credentials never leave the server

In v3.2 the DNS Zones list API masks all secret fields (API token, admin password, API key) with a `__KEEP__` sentinel before sending data to the browser. The edit form shows `(unchanged — enter new to replace)` placeholders:

- If you **don't type anything** in a secret field on save, the stored value is kept.
- If you **type a new value**, it overwrites the stored value.

This means tokens cannot be stolen by inspecting the edit form's HTML or by a compromised admin browser. The only way to read a stored credential is direct database access.
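The save-side behavior described above reduces to a small merge rule (a sketch of the documented semantics; the sentinel value `__KEEP__` is taken from this page, the helper itself is hypothetical):

```python
KEEP = "__KEEP__"   # sentinel the list API returns instead of the real secret

def merge_secret(submitted, stored):
    """Decide what to persist for a secret field on save.

    An empty field or a round-tripped sentinel keeps the stored value;
    anything else replaces it.
    """
    return stored if submitted in ("", KEEP) else submitted

assert merge_secret("", "old-token") == "old-token"        # field left blank
assert merge_secret(KEEP, "old-token") == "old-token"      # sentinel untouched
assert merge_secret("new-token", "old-token") == "new-token"
```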

## Testing a zone

The **Test** button per row runs a live connectivity and authorization check against the provider. For Cloudflare and PowerDNS it fetches the zone metadata; for HestiaCP it issues a zone listing call.

A green toast means the provider is reachable with valid credentials and the zone name matches what's configured on the server. A red toast shows the exact error returned by the provider.

---

## Legacy DNS endpoint (`dns.php`)

> **Still supported.** The legacy read-only JSON endpoint introduced in v1.4 is kept for backwards compatibility with external DNS automations. It does not write to any DNS server — it just returns the current forward/reverse mapping so you can feed it into your own DNS-sync script.

Send a `GET` request to:

```
https://<WHMCS-SERVER>/modules/servers/puqProxmoxKVM/lib/dns/dns.php
```

Example response:

```json
[
   {
      "forward": "vlan-1-4779.vps.uuq.pl",
      "ip": "192.168.0.2",
      "reverse": "mail.uuq.pl"
   },
   {
      "forward": "vps-1-4780.vps.uuq.pl",
      "ip": "192.168.0.3",
      "reverse": "test.vps.uuq.pl"
   }
]
```

### Access control

Restrict access with `.htaccess` next to the file. The directives below use Apache 2.2 syntax, which Apache 2.4 only accepts with `mod_access_compat` loaded; the native 2.4 equivalent is shown commented out:

```
# Apache 2.2 (or 2.4 with mod_access_compat):
order deny,allow
deny from all
allow from <allowed_IP_address>

# Apache 2.4 native equivalent (use instead of the directives above):
#   Require all denied
#   Require ip <allowed_IP_address>
```

> For new integrations, use the native DNS providers (Cloudflare / HestiaCP / PowerDNS) instead of scraping this endpoint. The native integration handles forward + reverse, deletion on terminate, retry on transient errors, credential masking, and produces a live audit trail in the cron log and module log.

## Related reading

- [IP Pools](02-ip-pools.md) — where the rDNS zone name is computed for you.
- [Deploy Process](../07-cron-and-automation/01-deploy-process.md) — when forward and reverse records are created.
- [Change Package](../07-cron-and-automation/02-change-package.md) — DNS refresh on package changes.
- [Terminate Process](../07-cron-and-automation/03-terminate-process.md) — DNS cleanup on service termination.


<!-- sync:40c731577f383daa -->

# VM Management


The VM Management page provides a centralized view of every KVM virtual machine across every Proxmox server — their current status, assigned IPs with reverse DNS, deployment history, and per-VM admin actions.

## VM list

Navigate to **Addons → PUQ Proxmox KVM → VM Management**.

![VM Management with active services](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-pqebq2ih.png)

The list is server-side-paginated with search and sorting. Two dropdown filters above the table narrow the view by WHMCS service status and by VM state; both remember your last choice in the browser, so the list opens the way you left it next time.

### Columns

| Column | Description |
|---|---|
| **ID** | WHMCS service ID (click for client services page). |
| **Client** | Client name with a link to the client profile. |
| **Product** | Product / plan name. |
| **Server** | Proxmox server name. |
| **VM** | Proxmox VM ID and internal VM name. |
| **VM Status** | Module lifecycle status — see [status reference](#vm-status-reference) below. |
| **Service** | WHMCS service status: Active, Suspended, Terminated, Cancelled, Pending. |
| **IPs** | Every assigned IPv4 / IPv6 address paired with its current reverse DNS name on the line below. |
| **Actions** | Per-VM admin buttons: Redeploy, Reset, Log, DB Record, (Delete Record when applicable). |

### IPs column — IPs with rDNS

Starting with v3.2, each IP is shown together with its rDNS on a second, smaller line:

```
192.168.130.2
  5546-1776530141.puqcloud.com

2001:0db8:0000:0000:0000:0000:0000:0007
  5546-1776530141.puqcloud.com
```

Visual grouping makes the list easy to scan. IPs without an rDNS entry simply don't have the second line.

## Administrative actions

| Button | Visible when | What it does |
|---|---|---|
| **Redeploy** (red, circular-arrow) | `vm_status != ready` | Destroys the VM on Proxmox, clears IPs, resets logs, sets state back to `creation` — the full deploy pipeline runs from scratch on the next cron tick. **Destructive.** |
| **Reset** (yellow, sync) | always | Opens the Reset VM Status modal — switch the VM to any of the re-runnable states. See below. |
| **Log** (blue, file) | always | Opens the VM Log modal with per-run and per-step history. |
| **DB Record** (grey, database) | always | Opens the raw `puqProxmoxKVM_vm_info` row for inspection and manual editing. Last-resort troubleshooting tool. |
| **Delete Record** (red, trash) | `vm_status in (error_terminate, remove)` | Removes the row from `puqProxmoxKVM_vm_info`. Does **not** touch Proxmox or `tblhosting`. Confirmation dialog warns explicitly. |

## Reset VM Status

The Reset modal lets you switch a VM to any of the re-runnable states. An embedded reference table explains when each one is appropriate:

| Status | Use case |
|---|---|
| `ready` | Return the VM to the normal "everything is fine" state. Use after you've finished a manual fix and want cron to stop touching it. |
| `creation` | Retry deploy from the beginning — typically after fixing the underlying reason an earlier deploy failed. |
| `set_ip` | Retry only the IP allocation step. |
| `change_package` | Rerun the full package-change flow. |
| `set_dns_records` | Queue a full DNS resync (delete + recreate all records). Fast and safe. |
| `terminate` | Retry termination after an `error_terminate`. |
| `remove` | Force-mark the VM as removed (no Proxmox contact). Use only when the VM is already gone from Proxmox and you just need WHMCS to stop showing it as active. |

## VM Log modal

The Log modal shows every pipeline run — deploy, change package, set DNS records, terminate — with per-step duration, result, and any errors. The most recent 50 runs are kept.

![VM Log showing deploy steps](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-tupam9ar.png)

When the last run failed, a red banner at the top shows the failing action and error message. Each step row shows:

- Step label (human-readable).
- State transition (e.g., `set_ip → clone`).
- Result: `success` / `waiting` / `error: …`.
- Duration in seconds.
- Timestamp.

Skipped steps in change package (see [Change Package](../07-cron-and-automation/02-change-package.md)) show `skip (no change)` and contribute zero duration.

## VM status reference

| Status | Meaning | Cron behavior |
|---|---|---|
| `creation`, `set_ip`, `clone`, `set_dns`, `migrated`, `set_cpu_ram`, `set_system_disk_size`, `set_system_disk_bandwidth`, `set_created_additional_disk`, `set_additional_disk_size`, `set_additional_disk_bandwidth`, `set_network`, `set_firewall`, `set_cloudinit`, `starting` | Deploy pipeline in progress at the named step. | Cron executes the next step each tick. |
| `ready` | VM is live and the client has access. | Cron ignores. |
| `change_package`, `cp_update_ip`, `cp_stop`, `cp_cpu_ram`, `cp_system_disk_size`, `cp_system_disk_bandwidth`, `cp_additional_disk`, `cp_additional_disk_size`, `cp_additional_disk_bandwidth`, `cp_network`, `cp_firewall`, `cp_start` | Change-package pipeline in progress. | Cron executes the next step each tick. |
| `reinstall` | Reinstall requested; needs the existing VM removed before reverting to `set_ip`. | Cron converts to `set_ip` after removing the old VM. |
| `set_dns_records` | Queued DNS resync. | Cron does a full delete+create cycle then returns to `ready`. |
| `terminate` | Termination queued. | Cron performs stop → backups → DNS → DELETE → cleanup. |
| `error_terminate` | Terminate failed. **Admin action required.** | Cron skips. Fix the cause and reset to `terminate` or `remove`. |
| `remove` | VM has been cleaned up. | Cron skips. Optionally use **Delete Record** to remove the row. |

## Watching an async action in VM Management

After clicking a long-running action (Terminate, Set DNS records), the VM row shows the current state badge. You can follow status changes by refreshing the page or by watching the standalone cron output:

![VM Management during a termination](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-g8yuyryd.png)

Once the cron finishes, the row appears with the final state — `remove` on success, `error_terminate` on failure:

![VM Management showing terminated services](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-jw1usjbc.png)

## DB Record editor

For advanced troubleshooting, click **DB Record** to view and edit the raw `puqProxmoxKVM_vm_info` row:

![DB Record editor](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-fi7dlvcv.png)

> **Warning:** Direct database editing bypasses every safeguard in the state machine. Incorrect values cause deployment failures, incorrect IP accounting, or data loss. Use only when you know exactly what you're doing and the usual Reset / Redeploy actions cannot help.

## Related reading

- [Deploy Process](../07-cron-and-automation/01-deploy-process.md) — deploy state machine driving `creation → … → ready`.
- [Change Package](../07-cron-and-automation/02-change-package.md) — the `change_package → … → ready` flow.
- [Terminate Process](../07-cron-and-automation/03-terminate-process.md) — async terminate, `error_terminate` path and recovery.
- [DNS Zones & Integration](03-dns-zones.md) — what runs when you click Set DNS records or when `set_dns_records` is queued.


<!-- sync:7bc570f52bd75fe9 -->

# Settings


The Settings section is divided into two pages: **General** and **Cron**.

---

## General Settings

Navigate to **Addons > PUQ Proxmox KVM > Settings > General**.

![General Settings](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-xtrvdtm3.png)

### API Timeouts

| Setting | Default | Range | Description |
|---------|---------|-------|-------------|
| **Default Timeout** | 10s | 1–300 | Timeout for Proxmox API requests from admin/client area pages |
| **Heavy Operations Timeout** | 60s | 1–600 | Timeout for cron operations (deploy, clone, terminate, change package) |

### VM Migration

| Setting | Default | Description |
|---------|---------|-------------|
| **Enable post-clone migration** | Yes | When enabled, VMs cloned to the template node will be automatically migrated to the target node with the correct storage |
| **Migration Timeout** | 300s | Maximum wait time per cron run for migration to complete. If exceeded, cron retries on next run |

### Module Data

| Setting | Default | Description |
|---------|---------|-------------|
| **Delete all database tables** | No | When enabled, all module database tables are dropped on addon deactivation. When disabled (default), tables are preserved for safe updates |

---

## Cron Settings

Navigate to **Addons > PUQ Proxmox KVM > Settings > Cron**.

### WHMCS Hook Mode

![Cron Settings — WHMCS mode](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-qojycywq.png)

In this mode, cron tasks run automatically as part of the WHMCS system cron. No additional configuration is needed.

### Standalone Mode

![Cron Settings — Standalone mode](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-e0lkgaop.png)

In standalone mode, you run the cron file directly via system crontab:

```bash
* * * * * php /path/to/whmcs/modules/addons/puq_proxmox_kvm/cron.php
```

### Cron Tasks

| Task | Default Interval | Description |
|------|-----------------|-------------|
| **Process Virtual Machines** | 1 min | Deploys new VMs, processes change_package, refreshes DNS |
| **Remove Old Snapshots** | 5 min | Deletes snapshots past their configured lifetime |
| **Restore Backup Status** | 1 min | Checks completion of backup restore operations |
| **Now Backup Status** | 1 min | Checks completion of manual backup operations |
| **Schedule Backup** | 5 min | Runs scheduled automatic backups |
| **Collecting Statistics** | 60 min | Collects network usage metrics for billing |

- Set interval to **0** to disable a task
- **Lock Timeout** — maximum time a cron lock is held before considered stale (default: 600s)
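The stale-lock rule can be sketched in a few lines (an illustration of the documented semantics with the default timeout; the helper is hypothetical, not module code):

```python
import time

LOCK_TIMEOUT = 600   # seconds, the default shown above

def lock_is_stale(acquired_at, now=None, timeout=LOCK_TIMEOUT):
    """A lock held longer than the timeout is treated as abandoned
    and may be re-taken by the next cron run."""
    now = time.time() if now is None else now
    return (now - acquired_at) > timeout

assert not lock_is_stale(acquired_at=1000, now=1599)   # held 599 s -> still valid
assert lock_is_stale(acquired_at=1000, now=1601)       # held 601 s -> stale
```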

### CLI Tools

![Cron CLI help](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-mlxm4ptd.png)

The standalone cron file supports command-line arguments:

```bash
# Run all tasks
php cron.php

# Run specific task
php cron.php --task=processVirtualMachines

# Force run (ignore intervals)
php cron.php --force

# List tasks and their status
php cron.php --list

# Show help
php cron.php --help
```


<!-- sync:dd302f1878108612 -->

# Admin Area

Reference for everything a WHMCS administrator uses to run the Proxmox KVM module: the product configuration panel, the per-service admin page with real-time VM status, deploy logs, charts and module commands, and WHMCS Configurable Options for selectable resources.

# Product Configuration


The product configuration page defines all default settings for virtual machines provisioned under a given WHMCS product. These settings are accessible by navigating to **Setup > Products/Services > Products/Services**, selecting a product, and opening the **Module Settings** tab with **PUQ ProxmoxKVM** selected as the module.

The module injects a custom settings panel directly below the standard WHMCS module options. All settings are organized into collapsible sections arranged in a two-column layout.

![Full product configuration page](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-gpzduogi.png)

> **Changed in v3.0.** The product configuration page has been fully rewritten as a custom Bootstrap panel injected into the Module Settings tab. In v1.x–v2.x the same options were stored in the stock WHMCS `configoption1..N` fields and displayed as plain textareas — all existing values are preserved during upgrade and migrated to the new panel automatically. The **Firewall** section and the **Anti-spoofing** checkbox, which previously lived inside the Network block, are now a dedicated collapsible section of their own.

---

## License Key

The first field in the standard WHMCS module settings area is the **License key**. Enter your PUQ ProxmoxKVM license key here. The module validates the license on each page load and displays a verification badge next to the field.

---

## VM Configuration

This section controls the core virtual machine parameters applied during provisioning.

![VM Configuration section](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-iclnl2dm.png)

| Setting | Description | Default |
|---------|-------------|---------|
| **Target node** | Proxmox node where VMs will be created. Select a specific node from the dropdown or leave as **automatically** to let the module choose the node with the most available resources. The dropdown is populated via AJAX from the connected Proxmox server; click the refresh button to reload the list. | `automatically` |
| **OS template** | The default operating system template used for cloning new VMs. Templates are loaded from Proxmox via AJAX. Click the refresh button to reload available templates. | (none) |
| **Clone type** | Determines how the VM is cloned from the template. **Linked Clone** is faster and uses less disk space by sharing the base disk with the template. **Full Clone** creates a completely independent copy but is slower and uses more storage. | `Linked Clone` |
| **CPU** | Number of virtual CPU cores assigned to the VM. | `1` |
| **RAM** | Amount of memory in gigabytes assigned to the VM. | `1` |
| **Backups** *(new in v3.3)* | Default maximum number of backups for the service. Overridden by the `Backups` Configurable Option when assigned. `0` = backups disabled. | `0` |
| **Snapshots** *(new in v3.3)* | Default maximum number of snapshots for the service. Overridden by the `Snapshots` Configurable Option when assigned. `0` = snapshots disabled. | `0` |
| **VM name rule** | A naming pattern for the VM hostname. Supports macros that are expanded at provisioning time. Leave empty to use the default pattern. A live preview is shown below the field. | `{client_id}-{service_id}` |
| **First VM ID** | The starting VM ID number. The module assigns VM IDs sequentially from this value, skipping any IDs already in use on the Proxmox cluster. | `100` |
| **OS username** | The default operating system username set via cloud-init. Leave empty to generate a random username. | (empty = random) |
| **Snapshot lifetime** | Automatic cleanup period for client-created snapshots. The cron job removes snapshots older than the selected duration. Set to **Don't remove** to keep snapshots indefinitely. | `Don't remove` |

### VM Name Rule Macros

The following macros can be used in the **VM name rule** field:

| Macro | Description | Example |
|-------|-------------|---------|
| `{client_id}` | WHMCS client ID | `142` |
| `{service_id}` | WHMCS service/hosting ID | `387` |
| `{random_digit_X}` | Random digits (X = count) | `{random_digit_4}` = `7291` |
| `{random_letter_X}` | Random lowercase letters (X = count) | `{random_letter_3}` = `kqz` |
| `{unixtime}` | Current Unix timestamp | `1712678400` |
| `{year}` | Current 4-digit year | `2026` |
| `{month}` | Current 2-digit month | `04` |
| `{day}` | Current 2-digit day | `09` |
| `{hour}` | Current 2-digit hour | `14` |
| `{minute}` | Current 2-digit minute | `35` |
| `{second}` | Current 2-digit second | `07` |
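The expansion behavior can be illustrated with a short Python sketch. This is an illustrative re-implementation, not the module's actual code, and the function name is made up; the macro names and the default rule are taken from the table above.

```python
import random
import re
import string
import time

def expand_vm_name(rule: str, client_id: int, service_id: int) -> str:
    """Expand VM-name-rule macros (illustrative re-implementation)."""
    now = time.localtime()
    static = {
        "{client_id}": str(client_id),
        "{service_id}": str(service_id),
        "{unixtime}": str(int(time.time())),
        "{year}": time.strftime("%Y", now),
        "{month}": time.strftime("%m", now),
        "{day}": time.strftime("%d", now),
        "{hour}": time.strftime("%H", now),
        "{minute}": time.strftime("%M", now),
        "{second}": time.strftime("%S", now),
    }
    for macro, value in static.items():
        rule = rule.replace(macro, value)
    # {random_digit_X} / {random_letter_X}: X sets how many characters
    rule = re.sub(r"\{random_digit_(\d+)\}",
                  lambda m: "".join(random.choices(string.digits, k=int(m.group(1)))),
                  rule)
    rule = re.sub(r"\{random_letter_(\d+)\}",
                  lambda m: "".join(random.choices(string.ascii_lowercase, k=int(m.group(1)))),
                  rule)
    return rule

print(expand_vm_name("{client_id}-{service_id}", 142, 387))  # → 142-387
```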

### Snapshot Lifetime Options

| Value | Duration |
|-------|----------|
| Don't remove | Snapshots kept indefinitely |
| 1 day | 86,400 seconds |
| 2 days | 172,800 seconds |
| 3 days | 259,200 seconds |
| 4 days | 345,600 seconds |
| 5 days | 432,000 seconds |
| 6 days | 518,400 seconds |
| 7 days | 604,800 seconds |
| 8 days | 691,200 seconds |
| 9 days | 777,600 seconds |
| 10 days | 864,000 seconds |
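The cron cleanup amounts to a simple age check against the configured lifetime. A minimal Python sketch, assuming snapshots carry a creation Unix timestamp (the function name and the partial lifetime map are illustrative, not the module's code):

```python
import time

# Partial mapping of the dropdown labels to seconds (see the table above)
LIFETIMES = {"Don't remove": None, "1 day": 86_400, "7 days": 604_800}

def snapshots_to_remove(snapshots, lifetime_label, now=None):
    """Return names of snapshots older than the configured lifetime.

    `snapshots` maps snapshot name -> creation Unix timestamp.
    A lifetime of None ("Don't remove") keeps everything.
    """
    limit = LIFETIMES[lifetime_label]
    if limit is None:
        return []
    now = now if now is not None else time.time()
    return [name for name, created in snapshots.items() if now - created > limit]

snaps = {"before-upgrade": 1_712_000_000, "fresh": 1_712_650_000}
print(snapshots_to_remove(snaps, "1 day", now=1_712_700_000))  # → ['before-upgrade']
```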

---

## Network

This section configures the virtual network adapter and IP addressing behavior for provisioned VMs.

![Network section](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-pyvvazif.png)

| Setting | Description | Default |
|---------|-------------|---------|
| **Model** | The virtual network adapter model. **As in template** preserves the model defined in the Proxmox template. Other options: **VirtIO** (recommended for Linux), **Intel E1000**, **Realtek RTL8139**, **VMware vmxnet3**. | `As in template` |
| **Bandwidth** | Maximum network bandwidth limit in MB/s. Set to **0** for unlimited bandwidth. Overridden by the `Network Bandwidth` Configurable Option when assigned. | `0` (unlimited) |
| **Bridge** | The Proxmox network bridge to attach the VM's network adapter to (e.g., `vmbr0`, `vmbr1`). | `vmbr0` |
| **VLAN tag** | VLAN tag for the network adapter. Set to **0** for no VLAN tagging. Valid VLAN tags: 1-4094. | `0` |
| **IPv4 count** *(new in v3.3)* | Default number of IPv4 addresses to allocate from the pool. Overridden by the `IPv4 Addresses` Configurable Option when assigned. | `1` |
| **IPv6 count** *(new in v3.3)* | Default number of IPv6 addresses to allocate from the pool. Overridden by the `IPv6 Addresses` Configurable Option when assigned. `0` = no IPv6. | `0` |
| **Auto bridge/VLAN** | When enabled, the bridge and VLAN are automatically determined from the IP Pool configuration in the addon module, overriding the manual Bridge and VLAN settings above. | `on` |
| **DHCP IPv4** | Enable DHCP for IPv4 addressing in cloud-init configuration. | `on` |
| **DHCP IPv6** | Enable DHCP for IPv6 addressing in cloud-init configuration. | `on` |

> **Note:** When **Auto bridge/VLAN** is enabled and the addon module's IP Pools are configured, the pool's bridge and VLAN values take precedence over the manually entered Bridge and VLAN fields.

> **DHCP caveat.** When either DHCP IPv4 or DHCP IPv6 is enabled, the module does **not** know the VM's final IP address at provisioning time. In that case no firewall rules and no anti-spoofing IPSet are applied to the VM's interface (they would be meaningless without a known IP). If you want the firewall feature, either use static IPs with the IP pool, or configure the rules manually after the DHCP lease has been issued.

---

## Firewall

This section defines the default Proxmox firewall configuration applied to each provisioned VM's network interface.

![Firewall section](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-jc58xmvp.png)

### Policy and Logging

| Setting | Description | Default |
|---------|-------------|---------|
| **Input Policy** | Default policy for incoming traffic. Options: **ACCEPT**, **DROP**, **REJECT**. | `ACCEPT` |
| **Output Policy** | Default policy for outgoing traffic. Options: **ACCEPT**, **DROP**, **REJECT**. | `ACCEPT` |
| **Log level in** | Logging level for incoming traffic. Options: **nolog**, **info**, **notice**, **warning**. | `nolog` |
| **Log level out** | Logging level for outgoing traffic. Options: **nolog**, **info**, **notice**, **warning**. | `nolog` |

### Firewall Toggles

| Setting | Description | Default |
|---------|-------------|---------|
| **Enable** | Enable the Proxmox firewall on the VM's network interface. | `on` |
| **DHCP** | Allow DHCP traffic through the firewall. | `on` |
| **NDP** | Allow Neighbor Discovery Protocol (IPv6) traffic. | `on` |
| **Router Adv** | Allow Router Advertisement packets. Typically disabled for client VMs. | `off` |
| **MAC filter** | Enable MAC address filtering on the network interface. | `on` |
| **IP filter** | Enable IP address filtering, restricting traffic to assigned IPs only. | `off` |
| **Anti-spoofing** | Enable anti-spoofing rules to prevent the VM from sending traffic with forged source addresses. | `on` |

> **Anti-spoofing requires a deny-by-default policy on the cluster.** For the anti-spoofing IPSet (`ipfilter-net0`) to actually protect against spoofed traffic, the **cluster / node firewall policy must be DENY/DENY** — the module then only adds permissive rules matching the VM's own IP addresses. Without a DENY baseline, the permissive rules change nothing and the feature has no effect. The filter was renamed from the legacy `wm-VMID` to `ipfilter-net0` in v2.3; v3.0 uses the same naming.

---

## Storage

This section configures the system (boot) disk and optional additional (secondary) disk for provisioned VMs. A value of **0** means "not changed" — the template's default is preserved.

![Storage section](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-34vj7wwc.png)

### System Disk

| Setting | Description | Default |
|---------|-------------|---------|
| **Storage** | Proxmox storage pool for the system disk. Select a specific storage or leave as **auto (from template)** to use the same storage as the template. The dropdown is populated via AJAX from the connected Proxmox server. | `auto (from template)` |
| **Space** | System disk size in GB. Set to **0** to keep the template's disk size. | `0` |
| **Bandwidth Read** | Maximum read throughput in MB/s. Set to **0** for unlimited. | `0` |
| **Bandwidth Write** | Maximum write throughput in MB/s. Set to **0** for unlimited. | `0` |
| **IOPS Read** | Maximum read I/O operations per second. Set to **0** for unlimited. | `0` |
| **IOPS Write** | Maximum write I/O operations per second. Set to **0** for unlimited. | `0` |

### Additional Disk

The additional disk is automatically created during provisioning if **Space** is set to a value greater than 0.

| Setting | Description | Default |
|---------|-------------|---------|
| **Storage** | Proxmox storage pool for the additional disk. Leave as **same as system disk** to use the system disk's storage. | `same as system disk` |
| **Space** | Additional disk size in GB. Set to **0** to skip additional disk creation. | `0` |
| **Bandwidth Read** | Maximum read throughput in MB/s. Set to **0** for unlimited. | `0` |
| **Bandwidth Write** | Maximum write throughput in MB/s. Set to **0** for unlimited. | `0` |
| **IOPS Read** | Maximum read I/O operations per second. Set to **0** for unlimited. | `0` |
| **IOPS Write** | Maximum write I/O operations per second. Set to **0** for unlimited. | `0` |

> **Important:** Storage names must be identical across all cluster nodes, or use shared storage. If the VM may be migrated between nodes, ensure the target storage exists on all nodes.

---

## Integrations

This section configures external integrations: backup/ISO storage locations, noVNC console proxy, domain naming, reverse DNS ticket creation, and email notification templates.

![Integrations section](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-g7xul7n3.png)

### Storage and Console

| Setting | Description | Default |
|---------|-------------|---------|
| **Backups storage** | Proxmox storage pool for VM backups. The dropdown lists all storages with backup content type. The value includes the storage name and plugin type (e.g., `local\|dir`). | (none) |
| **ISOs storage** | Proxmox storage pool where ISO images are stored for client ISO mount functionality. | (none) |
| **noVNC domain** | Domain name of the noVNC proxy server used for browser-based console access. | `vncproxy.puqcloud.com` |
| **noVNC key** | Authentication key for the noVNC proxy server. | `puqcloud` |

### Domain and DNS

| Setting | Description | Default |
|---------|-------------|---------|
| **Main domain** | The base domain suffix used for VM hostname generation. The full hostname is constructed as `<prefix>-<client_id>-<service_id><main_domain>`. | `.example.com` |
| **RevDNS ticket** | When enabled, a support ticket is automatically created when a client requests a reverse DNS change (if no DNS zone automation is configured). Select the support department for these tickets from the dropdown. | `on` |
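The hostname construction is a plain string join of the documented parts. A minimal sketch (the `vm_fqdn` function name and the `kvm` prefix value are hypothetical examples):

```python
def vm_fqdn(prefix: str, client_id: int, service_id: int, main_domain: str) -> str:
    """Build the full VM hostname: <prefix>-<client_id>-<service_id><main_domain>."""
    return f"{prefix}-{client_id}-{service_id}{main_domain}"

print(vm_fqdn("kvm", 142, 387, ".example.com"))  # → kvm-142-387.example.com
```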

### Email Templates

These dropdowns list all WHMCS product-type email templates. Select the template to be sent for each event, or choose **None** to disable the notification.

| Setting | Description | Default Template |
|---------|-------------|-----------------|
| **VM is ready** | Sent when VM provisioning completes successfully. Contains VM credentials and connection details. | `puqProxmoxKVM VM is ready` |
| **Reset password** | Sent when a client resets the VM's OS password. Contains the new credentials. | `puqProxmoxKVM Reset password` |
| **Backup restored** | Sent when a backup restore operation completes. | `puqProxmoxKVM Backup restored` |

---

## Client Area Permissions

This section controls which features are visible and accessible to clients in their service management area. Each toggle enables or disables a specific client area function.

![Client Area Permissions](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-ozdpxccn.png)

| Permission | Description | Default |
|------------|-------------|---------|
| **Start** | Allow clients to power on their VM. | `on` |
| **Stop** | Allow clients to power off their VM. | `on` |
| **noVNC** | Allow clients to open a browser-based console session. | `on` |
| **Charts** | Allow clients to view CPU, RAM, disk, and network performance charts. | `on` |
| **Reinstall** | Allow clients to reinstall the VM's operating system (destructive). | `on` |
| **Reset password** | Allow clients to reset the VM's OS password via cloud-init. | `on` |
| **RevDNS** | Allow clients to configure reverse DNS records for their IP addresses. | `on` |
| **ISO mount** | Allow clients to mount and unmount ISO images on their VM. | `on` |
| **Firewall** | Allow clients to manage their VM's Proxmox firewall rules. | `on` |

---

## Metric Billing

The module includes a built-in WHMCS Usage Billing (Metric) Provider that reports monthly bandwidth consumption per service. This integrates with WHMCS's standard metric billing system.

![Metric billing toggle](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-hcfyys8i.png)

### Available Metrics

| Metric | Description | Unit | Period |
|--------|-------------|------|--------|
| **Bandwidth Usage In** | Total inbound network traffic | GB | Monthly |
| **Bandwidth Usage Out** | Total outbound network traffic | GB | Monthly |

To enable metric billing:

1. Navigate to **Setup > Products/Services > Products/Services** and edit the product
2. Open the **Metrics** tab
3. Enable the desired metrics and configure pricing

The module's cron job collects bandwidth statistics from Proxmox and stores them in the `puqProxmoxKVM_statistics` table. The metric provider aggregates this data for WHMCS's billing calculations.
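Conceptually, the monthly aggregation groups the collected samples by service and calendar month and converts byte counters to GB. The Python sketch below is illustrative only: the tuple layout stands in for the statistics table rows, and the 1024³ unit conversion is an assumption, not a statement about the module's exact arithmetic.

```python
import time
from collections import defaultdict

def monthly_bandwidth_gb(samples):
    """Aggregate raw byte counters into GB per (service_id, 'YYYY-MM').

    `samples` is an iterable of (service_id, unix_ts, bytes_in, bytes_out)
    tuples, standing in for rows of the statistics table.
    """
    totals = defaultdict(lambda: [0, 0])
    for service_id, ts, b_in, b_out in samples:
        month = time.strftime("%Y-%m", time.gmtime(ts))
        totals[(service_id, month)][0] += b_in
        totals[(service_id, month)][1] += b_out
    gib = 1024 ** 3  # assumed unit conversion
    return {k: (v[0] / gib, v[1] / gib) for k, v in totals.items()}
```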


<!-- sync:7f8d2cfc2cd7ac8e -->

# Service Management


The service management page is the primary admin interface for an individual client's KVM service. It is accessed by navigating to **Clients > [Client Name] > Products/Services > [Service]** and viewing the module's custom tab fields.

The page provides real-time VM status monitoring, resource usage visualization, deploy logging, console access, performance charts, and direct module command execution.

![Service detail overview](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-y37m7vhe.png)

---

## VM ID and Reverse DNS

At the top of the service tab, the module displays:

- **VM ID** — the Proxmox VM identifier, with a verification status indicator confirming whether the VM exists on the cluster
- **Reverse DNS** — a table listing all assigned IP addresses with editable reverse DNS fields; changes are saved when the admin clicks the WHMCS **Save Changes** button

## API Connection Status

The module checks connectivity to the Proxmox API on each page load. A green **API answer OK** box confirms successful communication. If the connection fails, a red error box is shown with the error details, and real-time features are disabled.

---

## Function Buttons

Below the connection status, a toolbar provides quick-action buttons:

![Module command buttons](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-c0g1vvsc.png)

| Button | Description |
|--------|-------------|
| **noVNC** | Generates a one-time noVNC console URL. The link is valid for 10 seconds. After expiration, click again to generate a new link. |
| **Deploy Log** | Toggles the deploy log panel (see below). |
| **Redeploy** | Deletes the existing VM on Proxmox, clears IP assignments, and starts a fresh provisioning cycle. Requires confirmation. |

### noVNC Console

![noVNC connect button](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-xwjvkw2i.png)

Clicking **noVNC** sends an AJAX request to the module, which obtains a VNC ticket from Proxmox and constructs a proxy URL. The link opens in a new 800x600 browser window. The URL is single-use and expires after 10 seconds for security.
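The single-use, 10-second behavior can be modeled with a small in-memory token store. This is a sketch of the concept only; the class and method names are made up and the real proxy handshake involves a Proxmox VNC ticket.

```python
import secrets
import time

class OneTimeLinks:
    """Single-use, short-lived console link tokens (illustrative, in-memory)."""
    TTL = 10  # seconds, matching the documented link lifetime

    def __init__(self):
        self._issued = {}  # token -> expiry timestamp

    def issue(self, now=None):
        token = secrets.token_urlsafe(16)
        self._issued[token] = (now if now is not None else time.time()) + self.TTL
        return token

    def redeem(self, token, now=None):
        """Valid exactly once, and only before expiry."""
        expiry = self._issued.pop(token, None)  # pop makes the token single-use
        if expiry is None:
            return False
        return (now if now is not None else time.time()) < expiry
```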

---

## Module Commands

The module registers a set of administrative command buttons in the WHMCS **Module Commands** section of the service page.

![Module command buttons](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-c0g1vvsc.png)

| Command | Description | Notes |
|---------|-------------|-------|
| **Start** | Power on the VM | — |
| **Stop** | Power off the VM | — |
| **Reinstall** | Wipe the VM and reinstall the OS from the template | Destructive; requires confirmation |
| **VMSetDedicatedIp** | Assign or reassign dedicated IP addresses from the pool | — |
| **VMClone** | Clone the VM to a new VM ID | — |
| **Set CPU RAM** | Update CPU core count and RAM size | Requires VM stop for certain changes |
| **Set System Disk Size** | Resize the boot disk | One-way: can only increase |
| **Set System Disk Bandwidth** | Update read/write throughput and IOPS limits on the system disk | — |
| **Set Created Additional Disk** | Create a secondary disk if one does not exist | — |
| **Set Additional Disk Size** | Resize the secondary disk | One-way: can only increase |
| **Set Additional Disk Bandwidth** | Update read/write throughput and IOPS limits on the additional disk | — |
| **Set Network** | Update network bridge, VLAN, bandwidth, and adapter model | — |
| **Set Firewall** | Apply firewall configuration from product settings to the VM | — |
| **SetCloudinit** | Reapply cloud-init configuration (hostname, user, SSH keys, network) | Destructive; overwrites current cloud-init |
| **VMRemove** | Delete the VM from Proxmox | Destructive; requires confirmation |
| **Set DNS records** | Synchronize forward and reverse DNS records based on current IP assignments | — |

> **Legend of the button prefixes:**
>
> - `*` — the function can run while the VM is **running**
> - `**` — the function can only run when the VM is **stopped**
> - `->` — the function participates in the automatic creation/reinstall pipeline and points to the next step in the state machine
>
> These markers match the ones PUQcloud has used since v1.0 — they are shown inline next to each command button in WHMCS.

### Local status values

The module tracks each VM with an internal **local status** that controls which automation actions may run on the next cron tick. Knowing the status helps diagnose stuck deploys.

| Status | Meaning |
|--------|---------|
| `creation` | First status issued at the time of service creation. Indicates that the VM creation process should start on the next cron run. |
| `reinstall` | The VM is in the reinstall queue and will be redeployed from the selected template. |
| `clone` | The clone operation is in progress (or just finished) — the state machine is about to start post-clone configuration. |
| `migrated` | *(new in v3.0)* The VM has been successfully migrated to the target node after cloning. |
| `set_cpu_ram` | CPU cores and RAM have been configured successfully. |
| `set_system_disk_size` | System disk has been resized successfully. |
| `set_system_disk_bandwidth` | System disk I/O bandwidth limits have been applied. |
| `set_created_additional_disk` | Additional disk step finished (whether a disk was created or not — the step is skipped if the package has no additional disk). |
| `set_additional_disk_size` | Additional disk has been resized (or skipped). |
| `set_additional_disk_bandwidth` | Additional disk bandwidth limits have been applied (or skipped). |
| `set_network` | Network card configuration (bridge, VLAN, bandwidth, MAC) is complete. |
| `set_firewall` | Firewall options, policies and anti-spoofing IPSet have been configured. |
| `set_cloudinit` | Cloud-init has been rewritten with the target user/password/network. |
| `ready` | Terminal success state — the VM was created correctly and is ready to work. |
| `set_dns_records` | On the next cron tick, DNS records will be synchronized. |
| `change_package` | On the next cron tick, the module will start the `change_package` state machine to apply new package parameters. |
| `cp_*` | *(new in v3.0)* Intermediate states of the change-package state machine (`cp_update_ip`, `cp_stop`, `cp_cpu_ram`, `cp_system_disk_size`, `cp_system_disk_bandwidth`, `cp_additional_disk`, `cp_additional_disk_size`, `cp_additional_disk_bandwidth`, `cp_network`, `cp_firewall`, `cp_start`). Each state represents a single completed change-package step. On failure the state machine resumes from the last successful state. |

Alongside the local status the module tracks:

- **Remote status** — the status returned by the Proxmox API itself: `running` or `stopped`.
- **VM remote lock** — set by Proxmox while a long operation (like `clone` or `backup`) is in progress. While a lock is present the module pauses all other actions against that VM.
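The interaction between the local status and the remote lock can be reduced to a small predicate: a Proxmox-side lock pauses everything, and a terminal local status leaves nothing to do. This is a simplification for diagnosis; the real module tracks many more statuses and per-status transitions.

```python
TERMINAL_STATUSES = {"ready"}  # simplified; the real state machine has more cases

def may_run_automation(local_status, remote_lock):
    """Decide whether the next cron tick may act on this VM.

    `remote_lock` is the Proxmox lock name ('clone', 'backup', ...) or None.
    """
    if remote_lock:  # Proxmox is busy with a long operation; pause all actions
        return False
    return local_status not in TERMINAL_STATUSES
```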

---

## Real-Time VM Information

The real-time information panel refreshes automatically every 5 seconds (with a 10-second initial load). It displays comprehensive VM status and resource usage in a two-column layout.

![Real-time VM info panel](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-zfvqweed.png)

### Left Column: Status and Compute

**Status Section:**

| Field | Description |
|-------|-------------|
| **Remote** | Current VM power state on Proxmox (running/stopped), uptime, and lock status if any operation is in progress |
| **Local / Node** | The module's internal status tracking and the Proxmox node hosting the VM |

**CPU & RAM Section:**

| Field | Description |
|-------|-------------|
| **CPU** | Current CPU usage as a percentage of allocated cores, with a color-coded progress bar (green < 50%, yellow 50-80%, red > 80%) |
| **RAM** | Current memory usage in GiB and percentage, with a color-coded progress bar |

### Right Column: Storage and Network

![Detailed status closeup](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-qxuqmrwv.png)

**System Disk Section:**

| Field | Description |
|-------|-------------|
| **Size** | Current disk size vs. package-configured size, with the underlying file path |
| **I/O real** | Actual read/write throughput in MB/s and IOPS as currently measured |
| **I/O pkg** | Package-configured throughput and IOPS limits |

**Additional Disk Section** (shown only if an additional disk exists):

Same fields as the system disk section, displayed for the secondary disk.

**Network Section:**

| Field | Description |
|-------|-------------|
| **Adapter** | Network model and MAC address |
| **Real** | Actual bandwidth rate, bridge, and VLAN as configured on Proxmox |
| **Package** | Package-configured bandwidth limit, bridge, and VLAN |
| **ISO** | Currently mounted ISO image, if any |

---

## Configurable Options *(new in v3.3)*

A dedicated **Configurable Options** tab on the service page shows the effective per-service selection of every WHMCS Configurable Option assigned to the product. This makes it easy to confirm which pricing tier the client actually picked without digging into the database or the original order.

![Service Configurable Options tab](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-cdbtlypg.png)

The tab lists each option by its plain-English name (`CPU Cores`, `RAM`, `System Disk`, `Backups`, `Snapshots`, etc.) together with the human-readable display text of the selected sub-option. When no Configurable Option is assigned for a given resource, the Module Settings default is used and that resource simply does not appear in this tab — see the [Product Configuration chapter](01-product-configuration.md) for where each default lives.

See the dedicated [Configurable Options chapter](03-configurable-options.md) for the full list of supported options, sub-option formats, and pricing-tier examples.

---

## Deploy Log

The deploy log panel is toggled by clicking the **Deploy Log** button. It provides a complete history of all provisioning and administrative operations performed on the VM.

![Deploy log with steps](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-qks7yvf4.png)

### Last Action

The top section shows the most recent operation:

| Field | Description |
|-------|-------------|
| **Action** | The operation name (e.g., `deploy`, `reinstall`, `change_package`) |
| **Result** | Success or failure badge |
| **Time range** | Start and finish timestamps |
| **Steps table** | Numbered list of individual steps with result status and duration in seconds |

### Deploy History

Below the last action, a chronological list of all deploy runs is displayed. Each entry shows:

- Start timestamp
- Status transition (before → after)
- Result badge (success/waiting/error)
- Error message, if applicable
- Expandable step detail table (click the header to toggle)

Each step in the detail table includes:

| Column | Description |
|--------|-------------|
| **#** | Step sequence number |
| **Step** | Operation name (e.g., `clone`, `set_ip`, `set_cpu_ram`, `set_firewall`) |
| **Result** | Success or failure |
| **From** | VM status before this step |
| **To** | VM status after this step |
| **Time** | Timestamp when the step executed |
| **Dur** | Duration in seconds |

---

## Usage Charts

The charts section displays CPU, memory, disk I/O, and network throughput graphs rendered using Google Charts. The data is fetched via AJAX from Proxmox's RRD statistics.

![Usage charts and metrics](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-rosc0rar.png)

### Time Frame Selection

A button group allows selecting the chart time range:

| Button | Period |
|--------|--------|
| **Hour** | Last 60 minutes (default on page load) |
| **Day** | Last 24 hours |
| **Week** | Last 7 days |
| **Month** | Last 30 days |
| **Year** | Last 365 days |

### Chart Types

Three area charts are displayed side by side:

| Chart | Series | Description |
|-------|--------|-------------|
| **CPU & RAM** | CPU %, RAM % | Processor and memory utilization over time |
| **Disk I/O** | Read MB/s, Write MB/s | Storage throughput |
| **Network** | IN MB/s, OUT MB/s | Network interface throughput |

---

## Change Package

When a service's product/package is changed (upgrade or downgrade), the module executes a multi-step reconfiguration process. The admin can monitor progress through the deploy log.

![Change package in progress](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-4wgkccao.png)

The change package operation follows this sequence:

1. Update IP addresses (if pool/network changed)
2. Stop the VM
3. Set CPU and RAM to new values
4. Resize system disk
5. Update system disk bandwidth limits
6. Create or resize additional disk
7. Update additional disk bandwidth limits
8. Reconfigure network adapter
9. Reapply firewall rules
10. Start the VM
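Because each step records a `cp_*` local status on success, a failed run can resume after the last completed step instead of starting over. The sketch below illustrates that resume logic (illustrative Python; the step order mirrors the `cp_*` states listed in the Service Management chapter):

```python
# Ordered change-package steps; each maps to a cp_* local status on success.
CP_STEPS = [
    "cp_update_ip", "cp_stop", "cp_cpu_ram",
    "cp_system_disk_size", "cp_system_disk_bandwidth",
    "cp_additional_disk", "cp_additional_disk_size",
    "cp_additional_disk_bandwidth", "cp_network",
    "cp_firewall", "cp_start",
]

def remaining_steps(last_completed):
    """Steps still to run, resuming after the last successfully completed one."""
    if last_completed is None:
        return CP_STEPS[:]
    return CP_STEPS[CP_STEPS.index(last_completed) + 1:]

print(remaining_steps("cp_network"))  # → ['cp_firewall', 'cp_start']
```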

![Change package complete](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-uns3ljjr.png)

Each step is logged individually in the deploy log. If any step fails, the process halts and the error is recorded. The admin can review the failure in the deploy log and either fix the issue manually or use the **Redeploy** button to start fresh.


<!-- sync:8ddb8139066f78fa -->

# Configurable Options


WHMCS Configurable Options allow clients to customize their virtual machine resources at order time and during upgrades. The PUQ Proxmox KVM module reads configurable option values and uses them to override the product's default settings during provisioning and change package operations.

> **New in v3.3.** Eleven new options (every disk size / bandwidth / IOPS parameter plus Network Bandwidth) and clean plain-English names for the four previously prefix-only ones (`Backups`, `Snapshots`, `IPv4 Addresses`, `IPv6 Addresses`). Every overridable resource also has a default in Module Settings, so a product works without any Configurable Options at all.

![Full list of Configurable Options assigned to a product](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-sl8xtgfx.png)

The screenshot above shows all 18 supported options assigned to a single product. The next screenshot shows how a client sees them on the order form:

![Client order form with Configurable Options](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-zvoatpqd.png)

---

## Overview

Configurable Options provide a way to offer multiple resource tiers within a single product. For example, you can create one "KVM VPS" product with configurable options for CPU, RAM, and disk, letting clients pick their desired configuration and pricing tier at checkout.

When a configurable option is set on an order, its value takes precedence over the corresponding product-level default configured in the Module Settings.
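The precedence rule is simple: an assigned option value wins, otherwise the Module Settings default applies. A minimal sketch of that lookup (function and dictionary names are illustrative, not the module's API):

```python
def effective_value(option_values: dict, defaults: dict, name: str):
    """Configurable Option value takes precedence over the product default."""
    value = option_values.get(name)
    return value if value is not None else defaults[name]

defaults = {"CPU Cores": 1, "RAM": 1}   # Module Settings defaults
order = {"CPU Cores": 4}                # options selected on the order
print(effective_value(order, defaults, "CPU Cores"))  # → 4
print(effective_value(order, defaults, "RAM"))        # → 1
```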

---

## Setup

1. Navigate to **Setup > Products/Services > Configurable Options**
2. Click **Create a New Group**
3. Name the group (e.g., "KVM VPS Options")
4. Add individual options as described below
5. Assign the group to your PUQ ProxmoxKVM product(s) using the **Assigned Products** tab

---

## Supported Configurable Options

The module recognizes the following configurable option names. The **Option Name** must match exactly (case-sensitive) for the module to detect and apply the value.

### Compute Resources

| Option Name | Type | Description | Example Values |
|-------------|------|-------------|----------------|
| **CPU Cores** | Dropdown | Number of virtual CPU cores | `1`, `2`, `4`, `8`, `16` |
| **RAM** | Dropdown | Memory size in GB | `1`, `2`, `4`, `8`, `16`, `32` |

### Backups & Snapshots

| Option Name | Type | Description | Example Values |
|-------------|------|-------------|----------------|
| **Backups** | Dropdown | Maximum number of backups for the service (0 = backups disabled) | `0`, `3`, `7`, `14`, `30` |
| **Snapshots** | Dropdown | Maximum number of snapshots for the service (0 = snapshots disabled) | `0`, `1`, `3`, `5`, `10` |

### Storage

| Option Name | Type | Description | Example Values |
|-------------|------|-------------|----------------|
| **System Disk** | Dropdown | Boot disk size in GB | `10`, `20`, `40`, `80`, `160` |
| **Additional Disk** | Dropdown | Secondary disk size in GB (0 = no additional disk) | `0`, `10`, `20`, `50`, `100` |
| **System Disk Read Bandwidth** | Dropdown | System disk read throughput limit in MB/s | `0`, `50`, `100`, `200` |
| **System Disk Write Bandwidth** | Dropdown | System disk write throughput limit in MB/s | `0`, `50`, `100`, `200` |
| **System Disk Read IOPS** | Dropdown | System disk read IOPS limit | `0`, `500`, `1000`, `5000` |
| **System Disk Write IOPS** | Dropdown | System disk write IOPS limit | `0`, `500`, `1000`, `5000` |
| **Additional Disk Read Bandwidth** | Dropdown | Additional disk read throughput limit in MB/s | `0`, `50`, `100` |
| **Additional Disk Write Bandwidth** | Dropdown | Additional disk write throughput limit in MB/s | `0`, `50`, `100` |
| **Additional Disk Read IOPS** | Dropdown | Additional disk read IOPS limit | `0`, `500`, `1000` |
| **Additional Disk Write IOPS** | Dropdown | Additional disk write IOPS limit | `0`, `500`, `1000` |

### Network

| Option Name | Type | Description | Example Values |
|-------------|------|-------------|----------------|
| **Network Bandwidth** | Dropdown | Network bandwidth limit in MB/s (0 = unlimited) | `0`, `10`, `50`, `100`, `1000` |
| **IPv4 Addresses** | Dropdown | Number of IPv4 addresses to allocate from the pool | `1`, `2`, `4`, `8` |
| **IPv6 Addresses** | Dropdown | Number of IPv6 addresses to allocate from the pool | `0`, `1`, `4`, `16` |

### Operating System

| Option Name | Type | Description | Example Values |
|-------------|------|-------------|----------------|
| **Operating System** | Dropdown | OS template selection (Proxmox template VM ID) | Template IDs from Proxmox |

---

## Creating a Configurable Option

For each option:

1. Click **Add New Configurable Option** in your group
2. Set the **Option Name** to match one of the supported names above
3. Set the **Option Type** to **Dropdown**
4. Add sub-options with the format: `value|Display Name`
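The `value|Display Name` format splits on the first pipe: the left part is what the module reads, the right part is what the client sees. A small sketch of that split (the helper name is made up; the fallback mirrors WHMCS showing the raw value when no pipe is given):

```python
def parse_sub_option(sub_option: str) -> tuple[str, str]:
    """Split a 'value|Display Name' sub-option into (value, display name)."""
    value, sep, label = sub_option.partition("|")
    # Without a pipe, the raw value doubles as the display name
    return value.strip(), (label.strip() if sep else value.strip())

print(parse_sub_option("4|4 Cores"))  # → ('4', '4 Cores')
print(parse_sub_option("8"))          # → ('8', '8')
```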

### Compute resources

#### Example: CPU Cores

```
Option Name: CPU Cores
Option Type: Dropdown

Sub-options:
1|1 Core
2|2 Cores
4|4 Cores
8|8 Cores
16|16 Cores
```

#### Example: RAM

```
Option Name: RAM
Option Type: Dropdown

Sub-options:
1|1 GB
2|2 GB
4|4 GB
8|8 GB
16|16 GB
32|32 GB
```

### Backups & Snapshots

#### Example: Backups

```
Option Name: Backups
Option Type: Dropdown

Sub-options:
0|No backups
3|3 backups
7|7 backups
14|14 backups
30|30 backups
```

#### Example: Snapshots

```
Option Name: Snapshots
Option Type: Dropdown

Sub-options:
0|No snapshots
1|1 snapshot
3|3 snapshots
5|5 snapshots
10|10 snapshots
```

### Storage — size

#### Example: System Disk

```
Option Name: System Disk
Option Type: Dropdown

Sub-options:
10|10 GB
20|20 GB
40|40 GB
80|80 GB
160|160 GB
```

#### Example: Additional Disk

```
Option Name: Additional Disk
Option Type: Dropdown

Sub-options:
0|No additional disk
10|10 GB
20|20 GB
50|50 GB
100|100 GB
500|500 GB
```

> **Note:** `0` means the package requires no additional disk. If a disk already exists on the VM, the module does not delete it; the existing disk is preserved.

### Storage — I/O limits

#### Example: System Disk Read Bandwidth

```
Option Name: System Disk Read Bandwidth
Option Type: Dropdown

Sub-options:
0|Unlimited
50|50 MB/s
100|100 MB/s
200|200 MB/s
500|500 MB/s
```

#### Example: System Disk Write Bandwidth

```
Option Name: System Disk Write Bandwidth
Option Type: Dropdown

Sub-options:
0|Unlimited
50|50 MB/s
100|100 MB/s
200|200 MB/s
500|500 MB/s
```

#### Example: System Disk Read IOPS

```
Option Name: System Disk Read IOPS
Option Type: Dropdown

Sub-options:
0|Unlimited
500|500 IOPS
1000|1000 IOPS
2500|2500 IOPS
5000|5000 IOPS
```

#### Example: System Disk Write IOPS

```
Option Name: System Disk Write IOPS
Option Type: Dropdown

Sub-options:
0|Unlimited
500|500 IOPS
1000|1000 IOPS
2500|2500 IOPS
5000|5000 IOPS
```

#### Example: Additional Disk Read Bandwidth

```
Option Name: Additional Disk Read Bandwidth
Option Type: Dropdown

Sub-options:
0|Unlimited
50|50 MB/s
100|100 MB/s
200|200 MB/s
```

#### Example: Additional Disk Write Bandwidth

```
Option Name: Additional Disk Write Bandwidth
Option Type: Dropdown

Sub-options:
0|Unlimited
50|50 MB/s
100|100 MB/s
200|200 MB/s
```

#### Example: Additional Disk Read IOPS

```
Option Name: Additional Disk Read IOPS
Option Type: Dropdown

Sub-options:
0|Unlimited
500|500 IOPS
1000|1000 IOPS
2500|2500 IOPS
```

#### Example: Additional Disk Write IOPS

```
Option Name: Additional Disk Write IOPS
Option Type: Dropdown

Sub-options:
0|Unlimited
500|500 IOPS
1000|1000 IOPS
2500|2500 IOPS
```

> **Note:** For bandwidth / IOPS options, `0` means **unlimited** — the value is omitted from the disk config string sent to Proxmox.
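The omit-when-zero rule can be sketched as follows. This is a minimal illustration rather than the module's actual code, assuming Proxmox's standard `mbps_rd` / `mbps_wr` / `iops_rd` / `iops_wr` disk parameters; the helper name is hypothetical:

```python
def build_disk_limits(mbps_rd: int, mbps_wr: int, iops_rd: int, iops_wr: int) -> str:
    """Assemble the limit part of a Proxmox disk config string.

    A value of 0 means unlimited: the parameter is simply left out,
    so Proxmox applies no limit at all for it.
    """
    limits = {"mbps_rd": mbps_rd, "mbps_wr": mbps_wr,
              "iops_rd": iops_rd, "iops_wr": iops_wr}
    return ",".join(f"{k}={v}" for k, v in limits.items() if v > 0)

build_disk_limits(100, 0, 1000, 0)   # 'mbps_rd=100,iops_rd=1000'
build_disk_limits(0, 0, 0, 0)        # '' (fully unlimited)
```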

### Network

#### Example: Network Bandwidth

```
Option Name: Network Bandwidth
Option Type: Dropdown

Sub-options:
0|Unlimited
10|10 MB/s
50|50 MB/s
100|100 MB/s
500|500 MB/s
1000|1 GB/s
```

#### Example: IPv4 Addresses

```
Option Name: IPv4 Addresses
Option Type: Dropdown

Sub-options:
1|1 IPv4
2|2 IPv4
4|4 IPv4
8|8 IPv4
16|16 IPv4
```

#### Example: IPv6 Addresses

```
Option Name: IPv6 Addresses
Option Type: Dropdown

Sub-options:
0|No IPv6
1|1 IPv6
4|4 IPv6
16|16 IPv6
```

> **Note:** Setting either count to `0` means no addresses of that family are allocated from the IP pool for this service. A change that lowers the count automatically releases the excess addresses back to the pool.

### Operating System

#### Example: Operating System

```
Option Name: Operating System
Option Type: Dropdown

Sub-options:
9001|Ubuntu 22.04 LTS
9002|Debian 12
9003|AlmaLinux 9
9004|Windows Server 2022
```

> **Note:** The sub-option values for **Operating System** must be the Proxmox template VM IDs. The display names can be human-readable OS names.

---

## Pricing

Each sub-option can have its own pricing configured per billing cycle. Navigate to the sub-option's pricing section to set monthly, quarterly, semi-annual, and annual prices.

For options where `0` means "not configured" or "unlimited" (such as Additional Disk = 0, Network Bandwidth = 0), you would typically set the price for the `0` sub-option to $0.00.

---

## Upgrade/Downgrade

When a client upgrades or downgrades their service through the WHMCS client area, the module automatically detects the changed configurable option values and triggers a **change package** operation. This operation updates the VM's resources on Proxmox to match the new configuration.

The change package process is logged step-by-step in the [Deploy Log](02-service-management.md#deploy-log) and can be monitored from the admin service management page.

---

## Disk size constraints

**System Disk and Additional Disk size can only be increased.** Proxmox does not support shrinking VM disks (it would risk corrupting/losing data), so any configurable option that would result in a smaller disk than the current size is rejected by the module.

![Client upgrade page with shrink protection — smaller disk options are disabled with a clear warning banner](https://doc.puq.info/uploads/images/gallery/2026-05/embedded-image-skfvpfxj.png)

### How it is enforced

The module applies the constraint at three layers:

1. **Client-area upgrade page** — on `/clientarea.php?action=upgrade&type=configoptions`, sub-options whose value is smaller than the currently selected one are visually disabled in the System Disk / Additional Disk dropdowns and marked `(downgrade not allowed)`. A warning banner is shown above the form.
2. **Change-package state machine** — if a smaller value still reaches the backend (e.g. via direct admin edit), the `Resize system disk` / `Resize additional disk` step is skipped with status `skip — shrink not allowed by Proxmox`. The VM is **not** stopped, snapshots are **not** removed, and the step is logged via `logModuleCall` under the action name `system_disk_shrink_rejected` / `additional_disk_shrink_rejected`.
3. **Post-backup-restore re-apply** — when the module re-applies package configuration after a backup restore, a smaller package disk size is treated the same way: the resize is silently skipped, the existing larger disk is kept, and the rejection is logged.
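The backend rule (layer 2) reduces to a single comparison. A minimal sketch with a hypothetical `validate_disk_resize` helper that returns the documented step status:

```python
def validate_disk_resize(current_gb: int, requested_gb: int) -> str:
    """Decide what the change-package step does with a requested disk size.

    Disks may only grow: Proxmox cannot shrink a disk without risking
    data loss, so a smaller request is skipped and logged, never applied.
    """
    if requested_gb > current_gb:
        return "resize"        # grow in place, no data loss
    if requested_gb == current_gb:
        return "noop"          # nothing to change
    return "skip — shrink not allowed by Proxmox"   # VM stays untouched
```

All other options in the same change-package run are still applied even when this step returns the skip status.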

### Resulting behaviour for clients

- A client picking a smaller System Disk / Additional Disk in an upgrade order cannot submit it — the option is disabled in the UI.
- If by some path a smaller value reaches the change-package operation, the **disk size stays unchanged**. All other configurable options in the same change-package operation (CPU, RAM, bandwidth, IOPS, IPv4/IPv6 count, etc.) are still applied normally.

### Additional Disk special cases

- `Additional Disk = 0` with no existing disk → no action.
- `Additional Disk = 0` with **existing disk** → the existing disk is **detached and deleted**. VM is stopped first, all snapshots are removed (Proxmox requires this for detach), the disk interface is removed from the VM config, and the disk file is purged from storage. **All data on the additional disk is lost.** Logged via `logModuleCall` under `additional_disk_deleted`.
- `Additional Disk` increased from `0` to `N` → new disk is created with size `N` GB.
- `Additional Disk` increased from `N` to `M > N` → disk is resized in place (no data loss).
- `Additional Disk` decreased while > 0 (e.g. from `50` to `20`) → treated as shrink, rejected, current size kept.
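The special cases above reduce to a small decision table. A sketch with a hypothetical `plan_additional_disk` helper, not the module's actual code:

```python
def plan_additional_disk(current_gb: int, requested_gb: int) -> str:
    """Map the (current, requested) Additional Disk sizes to an action."""
    if current_gb == 0 and requested_gb == 0:
        return "noop"            # nothing exists, nothing requested
    if requested_gb == 0:
        return "delete"          # stop VM, drop snapshots, detach + purge disk
    if current_gb == 0:
        return "create"          # new disk of the requested size
    if requested_gb > current_gb:
        return "resize"          # grow in place, no data loss
    return "reject-shrink"       # shrink rejected, current size kept
```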

> **Note for clients:** The upgrade form shows a confirmation dialog before submitting an Additional Disk = `0` change. The sub-option is also labeled `(removes the existing disk — data will be lost)` in the dropdown to make the destructive effect visible.

> **Note for admins:** If you do not want clients to be able to delete the additional disk via the configurable option, omit the `0|...` sub-option from the Additional Disk dropdown — make the lowest entry the minimum disk size you offer (e.g. `10|10 GB`).

---

## Priority Order

When determining the final value for a VM resource, the module follows this priority:

1. **Configurable Option value** (highest priority — applied whenever the option is assigned to the service, including a value of `0`)
2. **Product Module Settings default** (used only when no configurable option for that resource is assigned to the service)

Every overridable resource has a default in **Module Settings** for the product. If you do not create a Configurable Option for a resource, that default is used for every service of the product. The defaults live in:

| Resource | Module Settings location | Default value (if you don't set it) |
|---|---|---|
| CPU Cores | VM Configuration → CPU | `1` |
| RAM | VM Configuration → RAM | `1` GB |
| Backups | VM Configuration → Backups | `0` (disabled) |
| Snapshots | VM Configuration → Snapshots | `0` (disabled) |
| System Disk size + bandwidth + IOPS | Storage → System Disk | `0` (no change) |
| Additional Disk size + bandwidth + IOPS | Storage → Additional Disk | `0` (no additional disk) |
| Network Bandwidth | Network → Bandwidth | `0` (unlimited) |
| IPv4 count | Network → IPv4 count | `1` |
| IPv6 count | Network → IPv6 count | `0` |
| Operating System | VM Configuration → OS template | `configoption4` (default OS template) |

`0` is a meaningful value for many options and is **always** applied when a client selects it through a Configurable Option:
- `Additional Disk` = `0` → existing disk is **detached and deleted** (data lost)
- `Network Bandwidth` = `0` → unlimited
- `*Bandwidth` / `*IOPS` = `0` → unlimited
- `IPv4 Addresses` / `IPv6 Addresses` = `0` → no address of that family (existing addresses are released back to the pool)
- `Backups` / legacy `B|Backup` = `0` → backups disabled for the service
- `Snapshots` / legacy `S|Snapshot` = `0` → snapshots disabled for the service

This means you can set conservative defaults in the product configuration and allow clients to customize resources both upward (more CPU/RAM/disk) and downward (disable additional disk, set unlimited bandwidth) through configurable options.
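The resolution order can be illustrated with a short sketch (hypothetical helper and sample data, not the module's code):

```python
def resolve_resource(config_options: dict, module_defaults: dict, name: str):
    """Resolve the effective value for one VM resource.

    A configurable option assigned to the service always wins, even when
    its value is 0 (0 is meaningful: unlimited / disabled / none). The
    Module Settings default applies only when no option is assigned.
    """
    if name in config_options:            # option assigned to the service
        return config_options[name]       # 0 included
    return module_defaults.get(name)      # fall back to the product default

# hypothetical service with two options assigned
service = {"CPU Cores": 4, "Network Bandwidth": 0}
defaults = {"CPU Cores": 1, "RAM": 1, "Network Bandwidth": 100}
resolve_resource(service, defaults, "Network Bandwidth")  # 0, i.e. unlimited
resolve_resource(service, defaults, "RAM")                # 1, from the default
```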

---

## Legacy prefix-based option names (v1.x–v2.x)

> **Still supported in v3.0.** In v1.x–v2.x, PUQ Proxmox KVM used a **prefix-based convention** for configurable option names where the prefix identified the option type and the display name was free text. If you upgraded from an older version, your existing configurable options continue to work without any changes — the module recognizes both the legacy prefix-based names and the v3.0 plain names.

The legacy convention uses an Option Name of the form `PREFIX|Display Name` (the text on the right of the `|` can be whatever you want — "My Backup Offer", "Sicherung", etc.) and sub-options of the form `value|Display Name`.

| Legacy Option Name | Sub-option format | Meaning |
|--------------------|-------------------|---------|
| `B\|Backup` | `<count>\|Name` | Number of allowed backups (0 disables backups for the service) |
| `S\|Snapshot` | `<count>\|Name` | Number of allowed snapshots (0 disables snapshots for the service) |
| `CPU\|Processor` | `<count>\|Name` | Number of CPU cores |
| `RAM\|Memory` | `<count>\|Name` | RAM in GB |
| `ipv4\|IPv4` | `<count>\|Name` | Number of IPv4 addresses to allocate |
| `ipv6\|IPv6` | `<count>\|Name` | Number of IPv6 addresses to allocate |
| `OS\|Operating system` | `<template_id>\|Name` | Proxmox template VM ID to clone from |

### Legacy example: Operating System

```
Option Name: OS|Operating system
Option Type: Dropdown

Sub-options:
1010|Debian-10.12
1011|Debian-11
1012|Debian-12
1021|Ubuntu-20.04
1022|Ubuntu-22.04
```

The sub-option values are the Proxmox template VMIDs (e.g. `1010` = a template VM in Proxmox with ID 1010 based on Debian 10). The module uses the number on the left of the `|` to call `qm clone`; the text on the right is shown to the admin/client in the order form.
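Splitting the legacy names works the same way for every option type. A sketch (hypothetical helper, not the module's code):

```python
def parse_legacy_option(option_name: str, suboption: str):
    """Split a legacy 'PREFIX|Display Name' option and a 'value|Label' sub-option."""
    prefix = option_name.split("|", 1)[0]    # e.g. 'OS': identifies the option type
    value, label = suboption.split("|", 1)   # '1010' goes to qm clone, label to the form
    return prefix, value, label

parse_legacy_option("OS|Operating system", "1010|Debian-10.12")
# ('OS', '1010', 'Debian-10.12')
```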

### Legacy example: Backup

```
Option Name: B|Backup
Option Type: Dropdown

Sub-options:
0|No backups
3|3 backups
7|7 backups
14|14 backups
```

### Legacy example: Snapshot

```
Option Name: S|Snapshot
Option Type: Dropdown

Sub-options:
0|No snapshots
1|1 snapshot
3|3 snapshots
5|5 snapshots
10|10 snapshots
```

### Legacy example: CPU

```
Option Name: CPU|Processor
Option Type: Dropdown

Sub-options:
1|1 Core
2|2 Cores
4|4 Cores
8|8 Cores
16|16 Cores
```

### Legacy example: RAM

```
Option Name: RAM|Memory
Option Type: Dropdown

Sub-options:
1|1 GB
2|2 GB
4|4 GB
8|8 GB
16|16 GB
```

### Legacy example: IPv4

```
Option Name: ipv4|IPv4
Option Type: Dropdown

Sub-options:
1|1 IPv4
2|2 IPv4
4|4 IPv4
8|8 IPv4
```

### Legacy example: IPv6

```
Option Name: ipv6|IPv6
Option Type: Dropdown

Sub-options:
0|No IPv6
1|1 IPv6
4|4 IPv6
16|16 IPv6
```

### Which format should I use?

- **New installations** — use the plain v3.0 names shown higher on this page (`CPU Cores`, `RAM`, `System Disk`, etc.).
- **Upgrades from v1.x/v2.x** — keep using your existing prefix-based names. They are still recognized and require no changes. Migrating them to the new names is optional and purely cosmetic.
- **Mixing both** — not recommended, but technically allowed. If both a legacy `CPU|Processor` and a new `CPU Cores` option are assigned to the same product, the plain v3.0 name wins.


<!-- sync:7a2e4901c08716e2 -->

# Client Area

Everything the end customer sees in the WHMCS client area: VM overview with real-time status, noVNC console, performance charts, reinstall, snapshots, backups, password reset, reverse DNS, ISO mount and firewall management. Available features are controlled per product by the administrator.

# Overview


The Overview page is the main management screen displayed when a client opens their Proxmox KVM service. It provides real-time VM status information, quick action buttons, and a complete network configuration summary.

## Action Buttons

At the top of the page, action buttons allow the client to perform common operations:

- **Start** — Power on the virtual machine
- **Stop** — Gracefully shut down the virtual machine
- **noVNC** — Open a browser-based VNC console session (see [noVNC](02-novnc.md))
- **Charts** — View performance usage graphs (see [Charts](03-charts.md))

Below the real-time information panel, additional management buttons provide access to:

- **Reinstall** — Reinstall the operating system
- **Snapshots** — Manage VM snapshots
- **Backups** — Manage backups and schedules
- **Reset password** — Reset the root/admin password
- **revDNS** — Configure reverse DNS records
- **ISO** — Mount or unmount ISO images
- **Firewall** — Manage firewall policies and rules

The visibility of each button depends on the Client Area Permissions configured by the administrator for the product.

## Real-Time Information

The overview displays live VM metrics that auto-refresh every 7 seconds:

| Field | Description |
|-------|-------------|
| **Status** | Current VM state (running / stopped) with uptime counter |
| **CPU** | Current CPU utilization percentage and number of allocated cores |
| **RAM** | Current memory usage with used/total values and a progress bar |
| **System disk** | System disk size with R/W throughput (MB/s) and IOPS limits |
| **Additional disk** | Additional disk size with R/W throughput (MB/s) and IOPS limits (if configured) |
| **Network adapter** | Network adapter model, MAC address, and link speed |

![VM overview](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-s5ana790.png)

## Network Configuration

Below the real-time information panel, the network configuration section displays the complete networking setup for the VM:

- **IPv4** — Primary IPv4 address with subnet mask, plus any additional IPv4 addresses
- **GW** — IPv4 gateway address
- **DNS** — Configured DNS servers (primary and secondary)
- **IPv6** — Primary IPv6 address with prefix length, plus any additional IPv6 addresses
- **GW** — IPv6 gateway address
- **Domain** — The assigned domain name for the VM

An informational note reminds the client that only the main IP address is automatically configured on the network interface. Additional IP addresses must be configured manually inside the VM.

![Network configuration](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-31j6p1xb.png)

## Disabled actions

When a feature is not permitted by the product's client-area permissions (or is temporarily unavailable — for example, during a backup or snapshot operation), the corresponding button stays **visible but dimmed** and is not clickable. This is intentional: the client can see the full list of features the product offers, even if specific ones are not allowed in their plan, and clearly understands the state of their VM while operations are in progress.

> **Changed in v3.0.** Feature permissions have moved from the legacy `configoption12` checkboxes to the new Bootstrap-based **Client permissions** panel in the product settings. All permission flags are preserved during upgrade, so the end-user behavior is identical to v2.x.

## Navigation menu

Every sub-page in the client service area (Snapshots, Backups, Firewall, Reset password, revDNS, ISO, Charts, Reinstall) has a sidebar **navigation menu** that allows the client to jump between settings without going back to the overview each time.

![Client area sidebar](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-omhgmoss.png)

If the client navigates directly to a page for a feature that the product does not allow, they see an **Access Denied** error message instead of the feature's UI. The `Overview` and `noVNC` buttons cannot be hidden — they are always available.

## Error messages

The client area displays two common error messages:

- **Something went wrong** — returned when WHMCS cannot reach the Proxmox server (network issue, credentials invalid, API service down) or when the VM is no longer present on Proxmox. Check the [Log Collection](../08-troubleshooting/01-log-collection.md) chapter for diagnostics.
- **Access Denied** — returned when the client tries to open a page (via a direct URL or a bookmarked link) for a feature that is not enabled for their product.


<!-- sync:970b4684c21cb13a -->

# noVNC Console


The noVNC console provides browser-based remote access to the virtual machine's display, allowing clients to interact with their VM directly without requiring a separate VNC client application.

## Accessing the Console

1. Navigate to the service detail page and click the **noVNC** button in the action bar.
2. A **CONNECT** button will appear along with a note indicating that the link is a one-time connection valid for 10 seconds.
3. Click **CONNECT** to open the noVNC console in a new browser tab.

![noVNC connect button](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-d9g6avod.png)

## Connecting

After clicking the CONNECT button, a new browser tab opens and establishes a secure, encrypted WebSocket connection to the Proxmox VNC proxy. A status indicator in the console confirms the connection, showing the target QEMU VM identifier.

![noVNC console connecting](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-l795mkvr.png)

## Console View

Once connected, the full noVNC console is displayed, providing direct keyboard and mouse interaction with the VM. The console toolbar on the left side provides additional controls for clipboard, screen scaling, and connection settings.

![noVNC console connected](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-1au3mapa.png)

## Important Notes

- The console connection link is **one-time use** and expires after **10 seconds**. If the link expires, click the noVNC button again to generate a new one.
- The VM must be in a **running** state to open a console session.
- The noVNC feature must be enabled in the product's Client Area Permissions by the administrator.
- The connection is encrypted (TLS) between the browser and the VNC proxy server.
- Full keyboard input is supported, including special key combinations (Ctrl+Alt+Del, etc.) via the console toolbar.


<!-- sync:87ab81272b32513b -->

# Charts


The Charts page provides visual performance graphs showing resource utilization of the virtual machine over time. Data is sourced from Proxmox VE RRD (Round Robin Database) statistics and rendered using the Google Charts library.

## Available Charts

The page displays four resource usage graphs:

| Chart | Description |
|-------|-------------|
| **CPU Usage** | Processor utilization as a percentage of allocated cores over time |
| **RAM Usage** | Memory consumption showing used vs. available RAM |
| **Disk I/O Usage** | Disk read and write throughput, displayed as separate Read MB/s and Write MB/s lines |
| **Network Usage** | Network traffic volume with separate lines for inbound (In MB/s) and outbound (Out MB/s) traffic |

## Time Period Tabs

Charts can be viewed across different time ranges using the tab buttons at the top of the page:

- **Hour** — Last 60 minutes of data
- **Day** — Last 24 hours of data
- **Week** — Last 7 days of data
- **Month** — Last 30 days of data
- **Year** — Last 12 months of data

Clicking a tab reloads all four charts with data for the selected time period.

![Charts usage](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-cmepx0f7.png)

## Notes

- The Charts feature must be enabled in the product's Client Area Permissions by the administrator.
- The VM must be running to generate new data points. Historical data is available even when the VM is stopped.
- Data granularity varies by time period: shorter periods show more detailed data points, while longer periods are averaged.


<!-- sync:216178438116a0ad -->

# Reinstall


The Reinstall page allows clients to reinstall the operating system on their virtual machine. This is a destructive operation that replaces the current OS with a fresh installation from the selected template.

## Process

1. Navigate to the service and click **Reinstall** in the sidebar or from the action buttons on the overview page.
2. A warning is displayed: reinstalling will **completely remove all data on all disks** of the virtual machine, and **all snapshots will also be deleted**.
3. Select the desired operating system from the **Select operating system** dropdown. The available options are configured by the administrator in the product settings.
4. To protect against accidental reinstallation, type the word **REINSTALL** in the confirmation field.
5. Click the **Reinstall** button to begin the process.

![Reinstall page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-oqm5qtqm.png)

## What Happens During Reinstall

- The VM is stopped if currently running.
- All data on all disks is destroyed.
- All existing snapshots are deleted.
- The VM is redeployed from the selected OS template using the module's deploy pipeline.
- Cloud-init configuration is reapplied (hostname, IP addresses, DNS, user credentials).
- **Network identity is preserved** — the original IPv4/IPv6 addresses, the **same network card MAC address**, the VLAN tag and the VMID are kept so that inventory systems, firewalls and DNS records continue to work without changes.
- A new root password is generated and sent to the client via email.

> **Backups survive a reinstall.** The reinstall procedure explicitly deletes only the VM's disks and snapshots — any existing **backup archives are kept intact**. This gives you a safety net: even after reinstalling a brand-new OS, you can still restore a previous backup to return to the pre-reinstall state. Use this carefully.

## Important Notes

- This operation is **irreversible** for the data on the VM disks. All data will be permanently lost.
- The Reinstall feature must be enabled in the product's Client Area Permissions by the administrator.
- Only OS templates approved by the administrator appear in the dropdown list.
- The VM must not be locked by another operation (backup, snapshot, migration) when initiating a reinstall.
- The `REINSTALL` confirmation word must be typed exactly as shown, in capital letters; it is an intentional speed bump to prevent accidental reinstalls.


<!-- sync:78997a72e94cb0ce -->

# Snapshots


The Snapshots page allows clients to create, rollback, and remove point-in-time snapshots of their virtual machine. Snapshots capture the complete state of the VM, including disk contents and memory (if running), enabling quick recovery to a known good state.

> **Snapshots are not backups.** They are intended as a quick safety net during system administration work (package updates, config changes, etc.) — that's why their lifetime is enforced and limited (1–10 days). For long-term data protection use the [Backups](06-backups.md) feature instead.

## Snapshot Quota

The snapshot quota is displayed at the top of the page as a counter (e.g., **2/3**), showing the number of existing snapshots out of the maximum allowed. The quota limit is configured by the administrator in the product settings.

## Creating a Snapshot

1. Navigate to the service and click **Snapshots** in the sidebar.
2. Optionally enter a description in the **Snapshot description** text field.
3. Click the **Take Snapshot** button.
4. The snapshot is created in the background. Once complete, it appears in the list below.

## Managing Snapshots

Each snapshot in the list displays:

- **Name** — The snapshot identifier
- **Date and time** — When the snapshot was created
- **Remaining lifetime** — A countdown showing how long until the snapshot is automatically deleted (e.g., "0 days 23:59:54")

For each snapshot, two actions are available:

- **Rollback** — Restore the VM to the exact state captured by this snapshot. The VM will be stopped during the rollback process.
- **Remove** — Permanently delete this snapshot to free up storage space and quota.

![Snapshots page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-wzhis8um.png)

## Snapshot Lifetime

Snapshots have a configurable lifetime set by the administrator in the product settings. When the lifetime expires, the snapshot is automatically removed by the cron system. The remaining lifetime for each snapshot is displayed in the snapshot list.
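The countdown shown in the snapshot list can be illustrated with a short sketch (hypothetical helper, not the module's code):

```python
from datetime import datetime, timedelta

def remaining_lifetime(created: datetime, lifetime_days: int, now: datetime) -> timedelta:
    """Countdown shown next to each snapshot.

    When the result drops to zero or below, the cron task removes
    the snapshot automatically.
    """
    return (created + timedelta(days=lifetime_days)) - now

# a 1-day snapshot taken at midnight, checked at noon: 12 hours left
remaining_lifetime(datetime(2026, 4, 1), 1, datetime(2026, 4, 1, 12))
```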

## Important Notes

- Snapshots consume additional storage space on the Proxmox node. The more changes are made after a snapshot, the larger the snapshot data grows.
- The VM must not be locked by another operation (backup, migration, etc.) when creating or managing snapshots.
- Rolling back to a snapshot will discard all changes made after the snapshot was taken.
- The maximum number of snapshots is determined by the product configuration.


<!-- sync:3c3b6a19cfc7f24e -->

# Backups


The Backups page provides full VM backup management, including scheduled automatic backups, manual on-demand backups, restore from backup, and backup removal.

## Scheduled Automatic Backups

The top section of the page displays the backup schedule configuration with a day-of-week grid. For each day of the week (Sunday through Saturday), the client can:

- **Enable or disable** the day using the checkbox.
- **Set the time** for the backup to run on that day.

After configuring the schedule, click **Save Schedule** to apply the changes. When a schedule is configured, the system will automatically create backups at the specified times and delete old backups that exceed the retention quota.

An informational note confirms: "If the schedule is configured, the system will automatically create backups and delete old backups."

## Backup Quota

The backup quota is displayed as a counter next to the **Backups** heading (e.g., **1/10**), showing the number of existing backups out of the maximum allowed. The quota limit is configured by the administrator in the product settings.

## Creating a Manual Backup

1. Optionally enter a note in the **Backups notes** text field to identify the backup.
2. Click the **Backup now** button.
3. The backup task is submitted to Proxmox and runs in the background. Progress is monitored by the WHMCS cron system.

## Backup List

Each backup in the list displays:

- **Date and time** — When the backup was created
- **Description** — The note entered when creating the backup
- **Size** — The storage size of the backup (e.g., 300 GiB)

For each backup, two actions are available:

- **Restore** — Restore the VM from this backup. The VM will be stopped during the restore process.
- **Remove** — Permanently delete this backup to free up storage space and quota.

A warning note reminds the client: "In the case of a backup restore, all snapshots of Virtual Machine will be deleted."

![Backups page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-pi4fcsne.png)

## How scheduled backups run

On each cron tick the backup task:

1. Checks which VMs have the current weekday enabled in their schedule.
2. Checks whether the configured time-of-day for today is already in the past (so that the job runs once per day, not repeatedly).
3. Checks whether today's backup already exists — if yes, skips.
4. Checks whether there is a free backup slot. If the quota is full, the **oldest** backup is deleted first to make room.
5. Creates the new backup and monitors the Proxmox task until completion.
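The per-VM decision on each tick can be sketched like this (a simplified illustration of the steps above; the helper and its return values are hypothetical):

```python
from datetime import datetime

def should_backup_now(schedule: dict, done_today: bool,
                      backups: int, quota: int, now: datetime) -> str:
    """One cron-tick decision for a single VM, following the steps above.

    schedule maps an enabled weekday name to its 'HH:MM' run time,
    e.g. {'Monday': '03:00'}.
    """
    weekday = now.strftime("%A")
    if weekday not in schedule:               # 1. weekday not enabled
        return "skip-day"
    if now.strftime("%H:%M") < schedule[weekday]:
        return "too-early"                    # 2. run time not reached yet
    if done_today:
        return "already-done"                 # 3. today's backup exists
    if backups >= quota:
        return "delete-oldest-then-backup"    # 4. quota full: rotate oldest out
    return "backup"                           # 5. create and monitor the task
```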

## Backup restoration

Before a backup is restored, the VM must be in a **powered off** state. After a successful restore the module automatically re-applies the current package parameters to the restored VM:

1. Set CPU & RAM if different from the restored values
2. Resize system disk if different
3. Re-apply system disk bandwidth limits
4. Create additional disk if needed
5. Resize additional disk if needed
6. Re-apply additional disk bandwidth limits
7. Re-apply network configuration (bridge, VLAN, bandwidth, MAC)
8. Start the VM
9. Send the **Backup restored** email to the client

If the restore fails for any reason, the client is given the option to retry the restore or to reinstall the virtual machine from scratch.

## Important Notes

- Backups are stored on the backup storage configured in Proxmox by the administrator.
- Restoring a backup will stop the VM and delete all existing snapshots.
- Backup creation runs as a background task; large VMs may take considerable time to back up.
- Scheduled backups are executed by the WHMCS cron system. Ensure that the cron is running properly for scheduled backups to function.
- **While a backup is being created or restored, all other VM management operations are suspended** — Start/Stop, Reinstall, Reset password, Snapshots and package changes are locked until Proxmox releases the backup lock.
- The storage used for backups must either not rotate backup copies on its own, or rotate them in a way that does not reduce the number of backup copies the client has purchased.


<!-- sync:ed81ada39e9f43de -->

# Reset Password


The Reset Password page allows clients to generate a new root/admin password for their virtual machine. The new password is applied via cloud-init and sent to the client by email.

## Process

1. Navigate to the service and click **Reset password** in the sidebar.
2. Review the informational note about cloud-init requirements.
3. Click the **Reset Password** button.
4. A new password is automatically generated by the system.
5. Cloud-init applies the new password to the VM.
6. The new password is sent to the client via the configured email template.

![Reset password page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-0lhxbp5n.png)

## Cloud-Init Requirement

An informational note on the page states: "Password reset requires cloud-init packages installed on the VM. If reset succeeds but password doesn't change, connect via noVNC and change manually."

This means:

- The **cloud-init** package must be installed and properly configured inside the VM's operating system.
- The **QEMU guest agent** is recommended for the password change to take effect immediately.
- If cloud-init is not installed or not functioning, the password reset command will succeed on the API level but the actual password inside the VM will not change. In this case, the client should use the noVNC console to log in and change the password manually.

## Important Notes

- The Reset Password feature must be enabled in the product's Client Area Permissions by the administrator.
- The VM should be in a **running** state for the password change to be applied by cloud-init.
- The generated password is random and secure. The client receives it only via the configured email template.
- If the VM was deployed from a template that does not include cloud-init, this feature will not work as expected.
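
The "random and secure" claim can be illustrated with a short sketch of the standard CSPRNG-based pattern. This is illustrative only — the module's actual generator, password length and character set are internal to the module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Random password from a CSPRNG.

    Illustrative only -- the module's real length and character set
    are internal to PUQ_WHMCS-Proxmox-KVM.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The key point is the use of a cryptographically secure source (`secrets`) rather than a seedable pseudo-random generator.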

> **Changed in v3.0.** The password reset flow now works on a **running** VM via cloud-init (and the QEMU guest agent if installed) — the client does not need to stop the VM first. In **v2.x and earlier**, the client had to manually power off the VM before resetting the password; the module then generated the new password, rewrote cloud-init and started the VM back up. If you are documenting behaviour for clients running an older version, keep that difference in mind.


<!-- sync:31fe3f5647707a16 -->

# Reverse DNS


The Reverse DNS page allows clients to configure PTR (pointer) records for all IP addresses assigned to their virtual machine. Reverse DNS records map IP addresses back to hostnames and are commonly required for mail servers and other services that perform reverse lookups.

## Configuration

1. Navigate to the service and click **revDNS configure** in the sidebar.
2. Each IP address assigned to the VM (both IPv4 and IPv6) is listed with an editable hostname field.
3. Enter the desired hostname for each IP address.
4. Click the **Save** button to apply the changes.

The page lists all assigned addresses, including:

- All IPv4 addresses (primary and additional)
- All IPv6 addresses (primary and additional)

Each address has its own hostname input field, allowing independent reverse DNS configuration per IP.

![Reverse DNS page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-m9u1juvh.png)

## DNS Propagation

An informational note at the top of the page states: "DNS changes take 1-8 hours to propagate across servers."

After saving, the reverse DNS records are automatically synchronized with the configured DNS provider (Cloudflare, HestiaCP, or PowerDNS, as configured in the addon module). However, due to DNS caching and propagation across the internet, the changes may not be visible to all resolvers immediately.

## Important Notes

- The RevDNS feature must be enabled in the product's Client Area Permissions by the administrator.
- The DNS addon module must be configured with the appropriate reverse DNS zones for the IP ranges used by the VM.
- Hostnames must be in a valid DNS format (e.g., `mail.example.com`).
- Reverse DNS is particularly important for email delivery. Many mail servers reject messages from IP addresses without proper PTR records.
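
For reference, the PTR record name that a resolver actually queries can be derived from the IP address with Python's standard library; this is the same `in-addr.arpa` / `ip6.arpa` naming that the reverse zones use:

```python
import ipaddress

def ptr_name(addr: str) -> str:
    """Return the PTR record name a resolver queries for this address."""
    return ipaddress.ip_address(addr).reverse_pointer

print(ptr_name("192.168.130.6"))  # 6.130.168.192.in-addr.arpa
print(ptr_name("2001:db8::1"))    # 32 reversed nibbles ending in .8.b.d.0.1.0.0.2.ip6.arpa
```

IPv4 addresses reverse their octets under `in-addr.arpa`; IPv6 addresses reverse every hex nibble under `ip6.arpa`, which is why the reverse zone for an IPv6 range is much longer.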

## Ticket-based fallback

> **Still supported for operators without a DNS API.** If your reverse-DNS infrastructure does not expose an API (neither Cloudflare, HestiaCP nor PowerDNS), the module can fall back to **opening a WHMCS ticket** when the client requests a revDNS change — you then apply the change by hand on your DNS server.
>
> This is configured in the product settings under **Integrations → Revdns ticket / RevDNS ticket department**. When a ticket department is selected, saving the reverse-DNS form creates a new WHMCS ticket in that department with the requested IP→hostname mapping instead of calling the DNS provider.

> **Changed in v3.0.** With the PowerDNS provider added alongside Cloudflare and HestiaCP, most deployments can now use the automatic path and do not need the ticket fallback any more. The ticket mode is still available for mixed setups or for operators who deliberately want manual approval of every PTR change.


<!-- sync:c5e87b630dcf9178 -->

# ISO Mount


The ISO Mount page allows clients to mount and unmount ISO images on their virtual machine's virtual CD/DVD drive. ISO images are organized into categorized folders for easy browsing.

## Currently Mounted ISO

If an ISO image is currently mounted, it is displayed at the top of the page with a highlighted status bar showing the filename (e.g., "Mounted: alpine-standard-3.21.3-x86_64.iso") and an **Unmount** button to eject it.

## Browsing Available ISOs

ISO images are organized into folders by category. Each folder displays:

- **Folder name** — The category name (e.g., ALPINE, DEBIAN, TAHR), shown with a folder icon
- **File count** — The number of ISO files in that folder

Inside each folder, individual ISO files are listed with their full filename and a **Mount** button.

### How the categorization works

To keep the ISO list readable, the module derives the folder name from the **part of the filename before the first `-` character**:

- `Debian-12.5.0-amd64-netinst.iso` → folder **Debian**
- `alpine-standard-3.21.3-x86_64.iso` → folder **alpine**
- `myimage.iso` (no dash at all) → folder **OTHER**

Follow this convention when uploading ISOs to your Proxmox ISO storage. PUQcloud publishes a set of pre-built ISO images that follow this naming convention and are ready to use — see the ISO storage on [files.puqcloud.com](https://files.puqcloud.com/).
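
A minimal sketch of the convention (an illustrative re-implementation, not the module's actual code):

```python
def iso_folder(filename: str) -> str:
    """Category folder: the part of the name before the first '-'.

    Illustrative re-implementation of the convention described above,
    not the module's actual code.
    """
    name = filename[:-4] if filename.lower().endswith(".iso") else filename
    return name.split("-", 1)[0] if "-" in name else "OTHER"

print(iso_folder("Debian-12.5.0-amd64-netinst.iso"))    # Debian
print(iso_folder("alpine-standard-3.21.3-x86_64.iso"))  # alpine
print(iso_folder("myimage.iso"))                        # OTHER
```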

## Mounting an ISO

1. Navigate to the service and click **ISO mount** in the sidebar.
2. Browse the available ISO folders to find the desired image.
3. Click the **Mount** button next to the ISO file.
4. The ISO is attached to the VM's virtual CD/DVD drive and becomes available for booting or installation.

## Unmounting an ISO

1. Locate the currently mounted ISO at the top of the page.
2. Click the **Unmount** button.
3. The ISO is ejected from the virtual CD/DVD drive.

![ISO mount page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-twvhbmfz.png)

## Use Cases

- **Recovery operations** — Boot from a rescue ISO to repair a broken system
- **Manual OS installation** — Install an operating system from an ISO image
- **Additional software** — Mount driver or utility ISOs for installation
- **Diagnostics** — Boot diagnostic tools (e.g., memtest, disk utilities)

## Important Notes

- The ISO mount feature must be enabled in the product's Client Area Permissions by the administrator.
- Available ISO images are sourced from the ISO storage configured in the Proxmox product settings. Only ISOs uploaded by the administrator to that storage will appear.
- To boot from a mounted ISO, the VM's boot order may need to be configured to include the CD/DVD drive.
- Only one ISO can be mounted at a time. Mounting a new ISO will replace the currently mounted one.


<!-- sync:27ae294a74f158e9 -->

# Firewall


The Firewall page provides clients with full control over their virtual machine's Proxmox firewall, including default policies and individual traffic rules.

## Firewall Policies

At the top of the page, two default policies can be configured:

- **Input Policy** — The default action for incoming traffic (ACCEPT or DROP)
- **Output Policy** — The default action for outgoing traffic (ACCEPT or DROP)

After selecting the desired policy values from the dropdown menus, click the **Save** button to apply them. These policies determine what happens to traffic that does not match any specific rule.

## Firewall Rules

Below the policies section, the rules table displays all configured firewall rules. The rule count is shown as a badge next to the heading (e.g., **4**).

### Rules Table Columns

| Column | Description |
|--------|-------------|
| **#** | Rule position number (determines evaluation order) |
| **Dir** | Traffic direction: **IN** (inbound) or **OUT** (outbound) |
| **Action** | What to do with matching traffic: **ACCEPT** (allow, shown in green) or **DROP** (block, shown in red) |
| **Proto** | Protocol filter (e.g., tcp, udp, any) |
| **Source** | Source IP address or network (or "any" for all sources) |
| **S.Port** | Source port or port range (or "any" for all ports) |
| **Dest** | Destination IP address or network (or "any" for all destinations) |
| **D.Port** | Destination port or port range (or "any" for all ports) |
| **Comment** | Optional description of the rule's purpose |

### Adding a Rule

Click the **+ Add Rule** button to open the rule creation modal. Fill in the rule parameters (direction, action, protocol, source, destination, ports, and comment) and save.

### Reordering Rules

Rules are evaluated in order from top to bottom. The drag handle (grid icon) on the left side of each rule row allows drag-and-drop reordering. Drag a rule up or down to change its evaluation priority. The first matching rule determines the action taken on the traffic.

### Deleting a Rule

Click the red delete button on the right side of a rule row to remove it. The rule is deleted immediately.

![Firewall rules page](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-rcdpnygy.png)

## How Rules Are Evaluated

1. Incoming or outgoing traffic is checked against the rules in order, starting from rule #0.
2. The first rule that matches the traffic's direction, protocol, source, destination, and ports determines the action (ACCEPT or DROP).
3. If no rule matches, the default policy (Input Policy or Output Policy) is applied.
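
The first-match logic can be modelled in a few lines. This is a simplified sketch: source/destination and port-range matching are omitted, and the field names are hypothetical, not the module's internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    direction: str               # "IN" or "OUT"
    action: str                  # "ACCEPT" or "DROP"
    proto: Optional[str] = None  # None matches any protocol
    dport: Optional[int] = None  # None matches any destination port

def evaluate(rules, policies, direction, proto, dport):
    """Walk rules top to bottom; first match wins, else the default policy."""
    for rule in rules:
        if rule.direction != direction:
            continue
        if rule.proto is not None and rule.proto != proto:
            continue
        if rule.dport is not None and rule.dport != dport:
            continue
        return rule.action
    return policies[direction]

rules = [Rule("IN", "ACCEPT", proto="tcp", dport=22)]
policies = {"IN": "DROP", "OUT": "ACCEPT"}
print(evaluate(rules, policies, "IN", "tcp", 22))  # ACCEPT (rule matched)
print(evaluate(rules, policies, "IN", "tcp", 80))  # DROP (Input Policy)
```

This is why rule order matters: moving a broad DROP rule above a narrow ACCEPT rule silently disables the ACCEPT.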

## Important Notes

- The Firewall feature must be enabled in the product's Client Area Permissions by the administrator.
- Anti-spoofing IPSet rules are automatically managed by the module to prevent IP address spoofing from the VM.
- Rule changes take effect immediately on the Proxmox firewall.
- Be cautious when changing the default Input Policy to DROP, as this will block all incoming traffic that is not explicitly allowed by a rule. Ensure you have an ACCEPT rule for your management access (e.g., SSH on port 22) before changing the policy.


<!-- sync:fa639004d217f87b -->

# Cron and Automation

How the module's cron-driven state machines work: the deploy pipeline that walks a new VM from creation to ready, the change package pipeline for upgrades and downgrades, the asynchronous terminate pipeline introduced in v3.2, and all other scheduled tasks (snapshot cleanup, backup schedule, stats collection) with their intervals and locking behaviour. Every long-running lifecycle operation runs through cron with live per-step output, so clients never wait on synchronous HTTP requests and admins see real-time progress in the logs.

# Deploy Process


## Overview

When a new virtual machine is provisioned (from a WHMCS order, an admin **Create** action, or the WHMCS API), the module does **not** try to do everything inside the HTTP request. Instead it just writes the VM record with status `creation` and returns immediately — WHMCS sees the order as "accepted" within milliseconds. The actual work is done by the cron task **Process VMs** as a 15-step state machine. Each step is idempotent and resumable: if anything fails, the VM stays at its current step and the next cron tick retries it.

This design makes the module resilient to:

- Proxmox API timeouts and slow storage operations
- Cluster-wide task queue back-pressure
- Transient network failures between the WHMCS host and Proxmox
- Long-running operations (disk resize, full clones, cross-node migration)

## Deploy Pipeline

The pipeline progresses through the following states:

```
creation → set_ip → clone → set_dns → migrated →
set_cpu_ram → set_system_disk_size → set_system_disk_bandwidth →
set_created_additional_disk → set_additional_disk_size → set_additional_disk_bandwidth →
set_network → set_firewall → set_cloudinit → starting → ready
```

A state name represents the **last completed step** — not the current action. When a VM is in state `set_ip`, the IP has been allocated and the next action to run is "Clone VM".

### Step descriptions

| State (done) | Next action | What happens |
|---|---|---|
| `creation` | Allocate IP | Choose an IP pool matching the server/bridge/VLAN, reserve IPv4 and/or IPv6 addresses, write them into `tblhosting` and the VM record. |
| `set_ip` | Clone VM | Clone the template to the target storage. Supports both linked and full clones depending on product config. |
| `clone` | Configure DNS records | Create forward A/AAAA for the VM's FQDN and PTR records for all assigned IPs. Runs against every matching DNS zone (see [DNS Zones & Integration](../04-addon-module/03-dns-zones.md)). **DNS errors never block deployment** — they are logged and the pipeline moves on. |
| `set_dns` | Migrate to target node | If the template lives on a different Proxmox node than the target, do an offline migration with `targetstorage` mapping. Skipped when source and target nodes are the same. |
| `migrated` | Set CPU & RAM | Apply the cores / sockets / memory from the product configuration. |
| `set_cpu_ram` | Resize system disk | Expand the system disk to the configured size. |
| `set_system_disk_size` | Set system disk I/O | Apply `iops_rd` / `iops_wr` / `mbps_rd` / `mbps_wr` bandwidth limits. |
| `set_system_disk_bandwidth` | Create additional disk | If the product has an additional disk, create it on the configured storage. |
| `set_created_additional_disk` | Resize additional disk | Expand it to the configured size. |
| `set_additional_disk_size` | Set additional disk I/O | Apply bandwidth limits to the additional disk. |
| `set_additional_disk_bandwidth` | Configure network | Set bridge, VLAN, rate limit, and enable the firewall flag on the NIC. |
| `set_network` | Configure firewall | Apply per-product firewall options (enable, DHCP/NDP, MAC filter, IP filter, log levels), policies, and anti-spoofing IPSet with the allocated IPs. |
| `set_firewall` | Configure cloud-init | Push hostname, IP addresses, gateway, DNS servers, username, password and SSH keys into the cloud-init drive. |
| `set_cloudinit` | Start VM | Power on the VM through the Proxmox API. |
| `starting` | Verify running + email | Wait up to 5 cron ticks for the guest to report `running`. Send the "VM is ready" email. |
| `ready` | — | Final state. Client has full access to the service. |

## Resumability & retry

Every step is wrapped in this contract:

1. **Check current state** — guards against double execution if the cron fires twice.
2. **Do one API call (or a short sequence of related calls)** — deliberately small so the step either completes quickly or can be retried cheaply.
3. **On success** — advance `vm_status` to the next state.
4. **On failure** — return the error string; `vm_status` stays the same; the cron log records the step with the error; next tick retries.

There is **no retry count limit**. A VM that genuinely cannot deploy (misconfigured IP pool, no free node with matching storage) will stay stuck at its current step and keep appearing in the cron log, so the problem is visible immediately. Admins are expected to fix the root cause (add IPs to the pool, adjust storage mapping) rather than fight a phantom max-retry counter.
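
The contract above can be sketched as a minimal step runner. This is illustrative Python pseudologic — the module itself is PHP and its real handlers are internal; `pipeline` maps each completed state to its next action:

```python
def run_next_step(vm, pipeline, log):
    """Advance the VM by exactly one step, or leave it in place on error."""
    state = vm["vm_status"]
    if state not in pipeline:          # e.g. 'ready': nothing to do
        return
    next_state, action = pipeline[state]
    try:
        action(vm)                     # one small, cheaply retryable API call
    except Exception as exc:
        log(f"{state}: {exc}")         # state unchanged; retried next tick
        return
    vm["vm_status"] = next_state       # advance only on success

# Simulated step that fails once, then succeeds on the next cron tick.
attempts = {"n": 0}
def allocate_ip(vm):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("no free IP in pool")
    vm["ip"] = "192.0.2.10"

pipeline = {"creation": ("set_ip", allocate_ip)}
vm = {"vm_status": "creation"}
run_next_step(vm, pipeline, print)     # tick 1: logs the error, state stays
run_next_step(vm, pipeline, print)     # tick 2: succeeds, state advances
print(vm["vm_status"])                 # set_ip
```

Because each tick performs at most one step and failures leave the state untouched, the pipeline is trivially resumable: crash, timeout or error at any point and the next tick simply retries the same step.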

## DNS configuration during deploy

The `clone → set_dns` transition is where forward and reverse DNS records are registered. With many IPs or many DNS zones, this can involve dozens of API calls. Starting with v3.2 each DNS operation is logged live to the cron output, so admins can watch records being created in real time:

![Cron live output showing DNS operations during deploy](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-3tgnqnds.png)

Lines prefixed with `· fwd OK` are forward (A/AAAA) creations, `· rev OK` are reverse (PTR) creations. Per-zone failures (for example, one DNS provider is temporarily unreachable) are logged as `· fwd ERR` / `· rev ERR` but do **not** abort deployment — other zones and subsequent steps still run.

## Full deploy walkthrough

A successful deploy looks like this in the standalone cron output (`php cron.php`). Each step shows its duration and the state transition:

![Full deploy pipeline in cron output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-oaiehprs.png)

Every line is flushed to stdout immediately — nothing is buffered until the end. This means even during a long-running step (full clone of a large template, cross-node migration) you can tell whether the job is still making progress or has truly stalled.

## Deploy log in the admin UI

In the addon's **VM Management → Log** modal, every run is recorded with per-step duration, state transitions, and any errors:

![Deploy log in VM Management](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-dnylevwz.png)

The most recent 50 runs are kept per VM. If the last attempt paused partway, the log is marked `waiting` with an `error` field showing why — useful for diagnosing a sticky step.

## Migration step in detail

`set_dns → migrated` handles the common Proxmox cluster topology where templates live on one node (often for storage cost reasons) but client VMs should run on another:

1. `set_ip → clone` creates the VM on **Template Node A** using the template's local storage.
2. `clone → set_dns` configures DNS while the VM is still on Node A.
3. `set_dns → migrated` migrates the VM from Node A to the **Target Node B** with a `targetstorage` remap so disks end up on the right backend on the target.
4. All subsequent steps (CPU, disk, network, firewall, cloud-init, start) run against the VM on Node B.

Target node selection considers storage availability and free RAM on candidate nodes. If no suitable target is found, the VM stays on Node A and deployment completes there — the VM is still fully functional, just not on the preferred node.
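
A hedged sketch of such a selection, with hypothetical field names and scoring (the module's real selection logic is internal):

```python
def pick_target_node(nodes, required_storage, need_ram):
    """Pick a node that has the storage and enough RAM; prefer most free RAM.

    Hypothetical field names and scoring -- the module's real selection
    logic is internal.
    """
    candidates = [
        n for n in nodes
        if required_storage in n["storages"] and n["free_ram"] >= need_ram
    ]
    if not candidates:
        return None   # deploy then completes on the template node instead
    return max(candidates, key=lambda n: n["free_ram"])["name"]

nodes = [
    {"name": "node-a", "storages": {"local-zfs"}, "free_ram": 8},
    {"name": "node-b", "storages": {"local-zfs", "ceph"}, "free_ram": 48},
]
print(pick_target_node(nodes, "ceph", need_ram=16))  # node-b
```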

## What triggers a deploy

A deploy starts the moment `vm_status` is set to `creation`. That happens when:

- A client's order is provisioned (automatic after payment, or manual accept/provision by an admin).
- An admin clicks **Create** or **Module Create** in the service's module commands.
- `ModuleCreateAccount` is called through the WHMCS API.
- A **Redeploy** action resets the VM record (deletes the existing VM on Proxmox, clears logs, sets status to `creation`).

From the moment `creation` is written, the next cron tick picks up the VM. At the default 1-minute **Process VMs** interval, that means provisioning starts within 60 seconds of the trigger.

## Related reading

- [Change Package](02-change-package.md) — how upgrades and downgrades use a similar state machine.
- [Terminate Process](03-terminate-process.md) — how services are torn down asynchronously.
- [DNS Zones & Integration](../04-addon-module/03-dns-zones.md) — configuring the providers that the `clone → set_dns` step writes to.
- [Scheduled Tasks](04-scheduled-tasks.md) — all cron tasks including Process VMs.


<!-- sync:5b6e6e24b7be721c -->

# Change Package


## Overview

A package change (upgrade or downgrade) reconfigures a live VM to match a different product plan — new CPU/RAM values, larger disks, different network settings, changed firewall options. Just like [Deploy](01-deploy-process.md), this runs asynchronously as a resumable state machine driven by the cron. The client does not wait for all the Proxmox API calls to finish — WHMCS accepts the upgrade order immediately and the module walks the VM through the change over the next one or two cron ticks.

## Change Package Pipeline

```
change_package → cp_update_ip → cp_stop → cp_cpu_ram →
cp_system_disk_size → cp_system_disk_bandwidth →
cp_additional_disk → cp_additional_disk_size → cp_additional_disk_bandwidth →
cp_network → cp_firewall → cp_start → ready
```

### Step descriptions

| State (done) | Next action | What happens |
|---|---|---|
| `change_package` | Update IP + DNS + firewall | Reload config, reload remote VM data, update IP allocation if the new package changes IP requirements, refresh anti-spoofing IPSet, and refresh forward + reverse DNS records to reflect any IP changes. |
| `cp_update_ip` | Stop VM | Stop the VM. Required because some downstream steps (disk resize, CPU/RAM limits) cannot be applied to a running VM. Polls until the VM reports `stopped`. |
| `cp_stop` | Set CPU & RAM | Apply new CPU cores / sockets / memory from the new package. Skipped if the values are unchanged. |
| `cp_cpu_ram` | Resize system disk | Grow the system disk to the new size (Proxmox cannot shrink disks — a smaller target is silently skipped). |
| `cp_system_disk_size` | System disk I/O | Apply new bandwidth limits to the system disk. |
| `cp_system_disk_bandwidth` | Additional disk | Create an additional disk if the new package includes one and the VM does not have it yet. |
| `cp_additional_disk` | Additional disk size | Grow the additional disk. |
| `cp_additional_disk_size` | Additional disk I/O | Apply new bandwidth limits. |
| `cp_additional_disk_bandwidth` | Network | Update bridge, VLAN and NIC rate limit. |
| `cp_network` | Firewall | Re-apply per-product firewall options and refresh the anti-spoofing IPSet with the current IP list. |
| `cp_firewall` | Start VM | Power the VM back on. |
| `cp_start` | Verify running | Wait up to 5 ticks for the guest to report `running`. |
| `ready` | — | Done. VM is live with the new package. |

### Skip-if-unchanged optimization

Every `cp_*` step first compares the **current** VM configuration with the **target** package configuration. If they match, the step logs `skip (no change)` and advances immediately. A downgrade that only reduces RAM, for example, doesn't touch the disks, network, or firewall — it stops the VM, applies RAM, restarts. In practice most package changes complete in 20-40 seconds real time.

In the log this shows up as lines like `cp_system_disk_size skip → cp_system_disk_bandwidth`. Useful for auditing what actually changed during a given upgrade.
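
The comparison can be sketched like this, with hypothetical config keys for illustration; the real `cp_*` steps compare the live Proxmox config against the target package in the same spirit:

```python
def plan_steps(current, target):
    """Yield (step, changed) pairs; unchanged steps log 'skip (no change)'."""
    for key in ("cpu_ram", "system_disk_size", "network"):
        yield key, current.get(key) != target.get(key)

current = {"cpu_ram": (2, 4096), "system_disk_size": 50, "network": "vmbr0"}
target  = {"cpu_ram": (2, 8192), "system_disk_size": 50, "network": "vmbr0"}
for step, changed in plan_steps(current, target):
    print(step, "apply" if changed else "skip (no change)")
```

Running this prints `apply` only for `cpu_ram` — the RAM-only downgrade from the example above.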

## DNS refresh during change package

The first step (`change_package → cp_update_ip`) triggers a full DNS sync — old records for removed IPs are deleted, new records for added IPs are created. All matching DNS zones are updated. With v3.2 each operation is logged live so admins can watch the refresh happen:

![Change package DNS refresh live in cron output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-rnbxe48e.png)

`fwd-del OK` / `rev-del OK` lines are the old records being removed; `fwd OK` / `rev OK` lines below are the new records being created. Failures in one provider do not block the rest — DNS errors are non-blocking exactly like they are during deploy.

## Full change package walkthrough

A complete upgrade in the cron output, including a step that failed to start the VM on the first attempt and was retried on the next tick:

![Change package pipeline with retry](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-cobvzyey.png)

Successful steps show `success`, skipped steps show `skip (no change)`, and the failed `cp_start` step gets retried until it succeeds. At no point is the state machine forced to restart from the beginning — retry only re-runs the step that didn't complete.

## Log viewing

Every change package run writes a structured entry to `vm_last_action_log` on the VM record. In the addon's VM Management the **Log** modal shows each step with its duration, result, and any skip markers:

![Change package log with per-step detail](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-lognafmq.png)

If the last run had any failures the modal shows a red error banner at the top with the failure reason.

## Retry semantics

Change package uses the same "no retry limit, no time bomb" design as deploy:

- A failed step keeps the VM in its current `cp_*` state.
- The next cron tick retries **only that step**, not the whole pipeline.
- Earlier successful steps are never repeated — disk resizes, for example, are not redone on retry.
- A persistent failure is visible in every cron log entry until an admin addresses the root cause.

During the `cp_stop` → `cp_start` window the VM is offline for as long as the hardware changes take. For most upgrades this is under a minute. For large disk resizes it can be longer — Proxmox needs to finish the storage operation before `cp_start` can proceed.

## What triggers a change package

A package change starts when `vm_status` is set to `change_package`. That happens when:

- A client completes an upgrade/downgrade order in the client area.
- An admin clicks **Change Package** in the service module commands.
- `ChangePackage` is called through the WHMCS API.

The module verifies the current `vm_status` is either `ready` (normal path) or already `change_package` (idempotent) before setting state. An in-progress deploy or terminate will be respected — the change package request waits until the VM returns to `ready`.

## Caveats

- **Disks cannot be shrunk.** Proxmox does not support shrinking virtual disks safely. A downgrade to a smaller disk size logs "skip (new size is smaller)" and keeps the existing larger disk. Billing is unaffected — WHMCS tracks the package, not the actual disk size.
- **VM must be healthy to stop gracefully.** If the guest OS is unresponsive, `cp_stop` may take longer and eventually force-stop.
- **Cross-node migration during upgrade is not performed.** The VM stays on its current node. If you need to move a VM to a different node during an upgrade, do the migration separately in Proxmox first.

## Related reading

- [Deploy Process](01-deploy-process.md) — first-time provisioning using the same state-machine pattern.
- [Terminate Process](03-terminate-process.md) — async service teardown.
- [DNS Zones & Integration](../04-addon-module/03-dns-zones.md) — how the DNS refresh during `change_package → cp_update_ip` works.


<!-- sync:9d88454f3bd7f7e9 -->

# Terminate Process


## Overview

Terminating a service means destroying the virtual machine on Proxmox, removing its backups, deleting its DNS records across every configured provider, and cleaning up the WHMCS records. On a service with many backups or many DNS entries this easily takes over a minute — more than a typical PHP request limit allows.

**Starting with v3.2 terminate runs asynchronously.** When an admin clicks **Terminate**, the module:

1. Sends a fire-and-forget **stop** request to Proxmox so the VM starts shutting down right away.
2. Sets `vm_status = 'terminate'` on the VM record.
3. Returns `success` to WHMCS.

WHMCS then marks the service **Terminated** immediately — the client loses client-area access within the same request. The heavy work (polling for stop, removing backups, deleting DNS records, the Proxmox DELETE call, clearing `tblhosting` and the VM record) is done by the cron task **Process VMs** on the next tick.

## Terminate Pipeline

The pipeline is a single cron handler, not a multi-step state machine — but each internal phase is logged as a distinct event.

```
terminate → [stop VM] → [remove backups] → [delete DNS] → [DELETE VM] → [clean DB] → remove
                                                              └─ on error → error_terminate
```

### Phases

| Phase | What happens | Failure handling |
|---|---|---|
| **Stop VM** | Single stop request, then poll the remote status every 5 seconds for up to 120 seconds (graceful). If still `running`, send a force-stop and poll another 60 seconds. | If the VM is still running after both windows, proceed to DELETE anyway — `purge=1` can reap a hung VM. |
| **Remove backups** | Best-effort delete of every backup snapshot for the VM across all configured storages. | Backup deletion errors are caught, logged, ignored. |
| **Delete DNS** | For every DNS zone whose name matches the VM's domain or an assigned IP, remove the forward A/AAAA and reverse PTR records. | Per-zone, per-IP errors are non-blocking — caught, logged, the next record continues. |
| **DELETE VM** | The Proxmox `DELETE /nodes/<node>/qemu/<vmid>?purge=1` call. **This is the only phase that can cause failure** — everything else is best-effort. | On error the VM goes to `error_terminate`. The DB is **not** cleaned. |
| **Clean DB** | Only on DELETE success. Wipes `tblhosting.dedicatedip/assignedips/domain` and clears identity fields on the VM record. | — |
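
The stop phase's timing can be sketched as follows. The three callables are hypothetical stand-ins for the Proxmox API calls; the window lengths and poll interval are the ones from the table above:

```python
import time

def stop_vm(get_status, send_stop, send_force_stop,
            graceful=120, forced=60, poll=5, sleep=time.sleep):
    """Graceful stop with polling, then force-stop, per the phases table.

    Returns True if the guest reported 'stopped'; False means the pipeline
    proceeds to DELETE anyway and lets purge=1 reap the VM.
    """
    send_stop()
    waited = 0
    while waited < graceful:
        if get_status() == "stopped":
            return True
        sleep(poll)
        waited += poll
    send_force_stop()                  # second, shorter window
    waited = 0
    while waited < forced:
        if get_status() == "stopped":
            return True
        sleep(poll)
        waited += poll
    return False
```

Injecting `sleep` makes the loop testable without real waiting; in production it would simply be `time.sleep`.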

## Live cron output

Every phase is streamed to the cron output with timestamps and progress heartbeats. A completed terminate looks like this:

![Cron live output of a terminate run](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-rqw35yup.png)

Lines you will see:

- `[terminate] starting` — pipeline began for this service.
- `remote status: running, vmid: 2003` — initial state.
- `stop request sent` — the graceful stop was issued.
- `still running, waited 15s / 120s, status: running` — heartbeat while waiting. Prints every 15 seconds.
- `stopped after ~45s` — the guest acknowledged shutdown.
- `removing backups` / `backups removed` — backup cleanup.
- `deleting DNS records` — beginning of the DNS phase.
- `rev-del OK [hestiacp] 130.168.192.in-addr.arpa ← 192.168.130.6` — individual reverse record deletion.
- `rev-del OK [powerdns] 0.0.0.0….8.b.d.0.1.0.0.2.ip6.arpa ← …` — same, IPv6.
- `fwd-del OK [hestiacp] puqcloud.com ← 5546-1776530141.puqcloud.com` — forward record deletion.
- `DELETE VM vmid=2003 node=pve-wew2 purge=1` — the Proxmox destroy call.
- `VM deleted` — success.
- `[terminate] done (28.67s) result=success` — pipeline finished.

If the cron runs in `--verbose` mode (standalone `php cron.php` does by default) everything is flushed line-by-line in real time.

## What happens on failure

If the **DELETE VM** API call returns an error (node unreachable, lock conflict, auth expired, etc.), the cron handler switches to the failure path:

- `vm_status` is set to **`error_terminate`**.
- The VM record is **not** cleaned. `tblhosting.dedicatedip` / `assignedips` stay populated, the VM ID stays on the record, the domain is preserved.
- The client gets **one** entry in the Activity Log: `Service termination FAILED — admin attention required. Error: <reason>`.
- The VM Log modal in VM Management shows a red banner with the error.
- The cron will **not** automatically retry — `error_*` states require manual admin action.

### Why IPs stay allocated on failure

This is deliberate. If the VM still exists on Proxmox but the WHMCS record has been cleared, those IPs are free to be reassigned — and the IP pool will hand them out to the next client. That new client's VM will then conflict with a "zombie" VM still holding the IPs on Proxmox. Keeping the record intact until Proxmox confirms the VM is gone avoids this class of bug entirely.

## Admin actions after `error_terminate`

Open **Addons → PUQ Proxmox KVM → VM Management**. Rows in `error_terminate` show a red status badge and a trash icon in the Actions column:

![VM Management with a terminate in progress](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-cqneotz9.png)

After the cron finishes:

![VM Management showing a terminated service](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-iyvvroex.png)

### Reset VM Status modal

Clicking the Reset button opens a modal with a full reference of available target statuses and when to use each:

- **`terminate`** — re-queue the termination. Use this after you've fixed whatever made the original attempt fail (restored node connectivity, re-authed with Proxmox, etc.).
- **`remove`** — force-mark the VM record as removed. **Does not touch Proxmox.** Use this only when you've manually deleted the VM from Proxmox and just want WHMCS to stop showing it.
- `ready`, `creation`, `set_ip`, `change_package`, `set_dns_records` — retry other state machines (see the [Deploy](01-deploy-process.md) and [Change Package](02-change-package.md) docs).

### Delete Record button

Visible **only** for rows in `error_terminate` or `remove`. Removes the VM row from `puqProxmoxKVM_vm_info`. Does **not** touch Proxmox or `tblhosting`. Use this when the VM is long gone from Proxmox but you want to clean up leftover database rows. The confirmation dialog repeats this warning explicitly.

## Guarantees

- **Client access revoked instantly.** The service is Terminated in WHMCS the same moment the admin clicks the button. The client cannot log back in while the actual teardown happens in the background.
- **IPs cannot be reassigned before the VM is gone from Proxmox.** A failing terminate preserves the allocation until a human confirms the cleanup.
- **One Activity Log entry per attempt.** Success → one "terminated successfully" entry. Failure → one "termination FAILED" entry. Cron never writes duplicates on skipped `error_terminate` rows.
- **DNS errors never block termination.** A missing or broken DNS provider does not stop the VM from being destroyed.

## Logs

- **Per-VM action log** — in the addon's VM Management → Log modal, every terminate attempt (successful or not) is recorded with duration, phase, and any errors.
- **Client Activity Log** — visible in the WHMCS client area under My Activity Log.
- **Module log** — all Proxmox API calls, DNS provider calls, and non-blocking errors go to WHMCS **Utilities → Logs → Module Log** under the identifiers `puqProxmoxKVM` and `puq_proxmox_kvm`.
- **Cron output** — when running cron in verbose mode, every step is streamed to stdout in real time.

## Related reading

- [Deploy Process](01-deploy-process.md) — same state-machine pattern applied to provisioning.
- [Change Package](02-change-package.md) — async package changes.
- [VM Management](../04-addon-module/04-vm-management.md) — the admin UI with the Reset and Delete Record actions.
- [DNS Zones & Integration](../04-addon-module/03-dns-zones.md) — what happens in the DNS deletion phase.


<!-- sync:16523e1bfd520213 -->

# Scheduled Tasks


## Overview

The module runs six scheduled tasks through the cron system. Each task has a configurable interval and independent lock management to prevent overlapping executions.

## Task List

| Task | Default Interval | Description |
|------|------------------|-------------|
| **Process VMs** | 1 minute | Processes the deploy and change package pipelines. Picks up VMs in non-ready states and executes the next step in their pipeline. Also handles DNS record creation and updates. This is the primary task responsible for VM provisioning and modification. |
| **Remove Snapshots** | 60 minutes | Checks for expired snapshots based on the configured snapshot lifetime setting and automatically removes them from Proxmox. Keeps the snapshot count manageable and frees up storage. |
| **Restore Backup** | 5 minutes | Monitors active backup restore tasks on Proxmox. When a restore operation completes, it updates the VM status and sends the "Backup restored" email notification to the client. |
| **Backup Status** | 5 minutes | Monitors active manual backup tasks on Proxmox. When a backup operation completes, it updates the backup record with the result (success or failure). |
| **Schedule Backup** | 60 minutes | Executes scheduled backups based on per-VM backup schedules. Checks each VM's configured backup days and initiates a backup if one is due. Runs once per day per VM per scheduled day. |
| **Collect Statistics** | 60 minutes | Aggregates network traffic statistics (inbound and outbound bytes) from Proxmox RRD data. Used for WHMCS Metric Billing to enable usage-based network traffic billing. |

## Configuring Task Intervals

Task intervals can be adjusted in the addon settings:

1. Navigate to **Addons > PUQ Proxmox KVM**
2. Go to **Settings > Cron**
3. Adjust the interval for each task as needed
4. Save settings

The interval specifies the minimum time between executions of a task. For example, a 5-minute interval means the task will run no more frequently than once every 5 minutes.

> **Tip:** For faster VM provisioning, keep the **Process VMs** interval low (1-2 minutes). For less time-sensitive tasks like statistics collection, longer intervals reduce system load.
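
When the addon cron runs standalone, the system crontab only controls how often the dispatcher wakes up; each task then decides internally whether its own configured interval has elapsed. A sketch of a typical crontab entry (the path is illustrative; use your real WHMCS directory):

```bash
* * * * * php -q /var/www/whmcs/modules/addons/puq_proxmox_kvm/cron.php
```

Waking the dispatcher every minute is cheap: tasks whose interval has not yet elapsed are skipped immediately.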

## Lock Management

Each task uses a lock mechanism to prevent concurrent execution:

- When a task starts, it acquires a lock
- If the lock is already held, the task is skipped for that cron cycle
- When the task completes, the lock is released
- Stale locks (from crashed processes) are automatically detected and cleared based on a timeout

If a task appears to be stuck, you can check and manage locks from the addon's Cron settings page.
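
The module's internal locking code is not reproduced here, but the behaviour described above can be sketched as a timestamp-based lock file with stale-lock recovery (the lock path and the 600-second timeout are illustrative assumptions):

```bash
#!/bin/sh
# Sketch: skip-if-held lock with timeout-based stale-lock recovery.
LOCK=/tmp/puq-task-process_vms.lock
TIMEOUT=600  # seconds after which a held lock is considered stale

if [ -f "$LOCK" ]; then
    age=$(( $(date +%s) - $(stat -c %Y "$LOCK") ))
    if [ "$age" -lt "$TIMEOUT" ]; then
        echo "skipped: lock held"   # another run is in progress
        exit 0
    fi
    rm -f "$LOCK"                   # crashed process left a stale lock
fi

touch "$LOCK"                       # acquire
echo "running task"
# ... task body ...
rm -f "$LOCK"                       # release
```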

## CLI Tools

The module provides command-line tools for manual task execution and diagnostics. These can be useful for troubleshooting or for running tasks on demand outside the normal cron schedule.

![CLI help output](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-wtvfkw7c.png)

To see available CLI commands, run:

```bash
php /path/to/whmcs/modules/addons/puq_proxmox_kvm/cron.php --help
```

### Common CLI Operations

| Command | Description |
|---------|-------------|
| `--help` | Display available commands and usage information |
| `--run-task=process_vms` | Manually run the Process VMs task |
| `--clear-locks` | Clear all stale lock files |
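
For example, to re-run the provisioning pipeline on demand and keep a copy of the output for a support ticket (same path convention as the `--help` example above):

```bash
php /path/to/whmcs/modules/addons/puq_proxmox_kvm/cron.php --run-task=process_vms 2>&1 | tee /tmp/puq-process-vms.log
```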

## Monitoring

To monitor cron health:

1. Check the **Last Run** timestamp for each task on the Cron settings page
2. Verify no tasks have stale locks
3. Review the WHMCS activity log for any cron-related errors
4. For deploy issues, check the per-VM deploy log in the VM Management section

If tasks are consistently failing or not running, refer to the [Cron Configuration](../03-installation-and-configuration/08-cron-configuration.md) guide to verify your cron setup.


<!-- sync:d8005f97e36a34db -->

# Troubleshooting

Diagnostic recipes for when a deployment stalls or a VM gets stuck: how to collect WHMCS module logs, cron output, Proxmox node logs and firewall configuration, plus the exact information to attach when opening a support ticket.

# Log Collection


When troubleshooting issues with the PUQ Proxmox KVM module, collecting the right logs is essential for diagnosing problems. Follow the steps below to gather all relevant information.

## Step 0: Temporarily Disable WHMCS Cron

Before collecting logs, temporarily disable the WHMCS system cron job to prevent it from interfering with your manual debugging. This ensures you have full control over when tasks execute.

Comment out or disable the WHMCS cron entry in your crontab:

```bash
# crontab -e
# Temporarily comment out the WHMCS cron line:
# */5 * * * * php -q /var/www/whmcs/crons/cron.php
```

> **Important:** Remember to re-enable the cron job after you have finished troubleshooting.

## Step 1: Run Cron Manually

Run the module cron manually to observe its output in real time. This captures all deploy pipeline activity, task processing, and any errors.

**v3.0 (addon standalone cron):**

```bash
php modules/addons/puq_proxmox_kvm/cron.php --force 2>&1 | tee /tmp/puq-cron.log
```

Run this command from the WHMCS root directory. The `--force` flag ensures the cron runs immediately regardless of scheduling. The output is both displayed on screen and saved to `/tmp/puq-cron.log`.

**Legacy (v2.x and earlier) — run WHMCS cron directly:**

In v2.x the provisioning tasks were chained onto the regular WHMCS cron, so you could capture the same output with the plain `cron.php`:

```bash
/usr/bin/php -q /WHMCS_DIR/crons/cron.php | tee /root/whmcs-cron-debug.log
```

Replace `/WHMCS_DIR` with the real path to your WHMCS installation. This command still works in v3.0 if you are running the cron in **WHMCS Hook mode** instead of standalone mode (see the Cron chapter).

### What a successful run looks like

```
===========================================
VM_id: 2001
Service_id: 4785
User_id: 1
VMSetDedicatedIp: The local status should be creation
VMDeleteDNSRecords: success
VMSetDNSRecords: success
VMClone: The local status should be set_ip|clone
VMSetCpuRam: The local status should be clone
VMSetSystemDiskSize: The local status should be set_cpu_ram
VMSetSystemDiskBandwidth: The local status should be set_system_disk_size
VMSetCreatedAdditionalDisk: The local status should be set_system_disk_bandwidth
VMSetAdditionalDiskSize: The local status should be set_created_additional_disk
VMSetAdditionalDiskBandwidth: The local status should be set_additional_disk_size
VMSetNetwork: The local status should be set_additional_disk_bandwidth
VMSetFirewall: The local status should be set_network
VMSetCloudinit: success
VMStart: success
Remote_status: running
Local_status: ready
ServiceSendEmailVMReady: OK
```

Each step reports either `success` or the reason it was skipped: `The local status should be ...` means the state machine is not ready for that step on this pass and will resume it on the next cron tick.

### What an error looks like

```
VMSetCloudinit: HTTP/1.1 500 volume 'local:snippets/user-dnsfix.yaml' does not exist
```

When you see a line like that, the VM is stuck at the step whose name is printed before the error (here `VMSetCloudinit`). Fix the underlying cause (missing snippet, API timeout, IPSet conflict, etc.) and the next cron run will resume from that exact step.

## Step 2: WHMCS Module Debug Log

The WHMCS module log records all API calls made by the module, including requests and responses.

1. In the WHMCS admin area, go to **Utilities > Logs > Module Log**
2. If module logging is not already enabled, click **Enable Debug Logging**
3. Reproduce the issue (e.g., trigger a provisioning action)
4. Return to the Module Debug Log page and review the recorded API calls

Look for entries related to `puqProxmoxKVM` — these will show the exact API requests sent to Proxmox and the responses received.

> **Note:** Disable debug logging after troubleshooting, as it records all module API traffic and can grow rapidly.

## Step 3: WHMCS Activity Log

The WHMCS activity log captures general system events, including provisioning actions, errors, and status changes.

1. Go to **Utilities > Activity Log** in the WHMCS admin area
2. Use the search/filter to narrow down entries by date or keyword
3. Look for entries related to the affected service or containing error messages

## Step 4: Proxmox Node Logs

On the Proxmox server itself, review the system logs for the relevant services:

```bash
journalctl -u pvedaemon -u pveproxy -u pve-firewall --since "2 hours ago"
```

This shows logs from:

- **pvedaemon** — the Proxmox API daemon that processes VM operations
- **pveproxy** — the Proxmox API proxy that handles HTTPS requests
- **pve-firewall** — the Proxmox firewall service

Adjust the `--since` parameter to match the timeframe of the issue.

## Step 5: Filter for a Specific VM

If you know the VMID of the affected virtual machine, filter the Proxmox logs for relevant entries:

```bash
grep -rE "<VMID>|ipset|firewall|cloudinit|error|fail" /var/log/pve/tasks/
```

Replace `<VMID>` with the actual VM ID number (e.g., `100`).

You can also check the Proxmox task log for the specific VM:

```bash
grep -r "<VMID>" /var/log/pve/tasks/
```

## Step 6: Firewall Configuration

If the issue is related to networking or firewall rules, check the VM-specific firewall configuration:

```bash
cat /etc/pve/firewall/<VMID>.fw
```

This file contains the firewall rules and IP sets applied to the specific VM. If the file does not exist, the VM has no specific firewall rules configured.

Also check the cluster-level and node-level firewall configuration:

```bash
cat /etc/pve/firewall/cluster.fw
```

## What to Send to Support

When contacting PUQ support, include the following information:

1. **Cron output** — the `/tmp/puq-cron.log` file from Step 1
2. **WHMCS Module Debug Log** — export or screenshot the relevant entries from Step 2
3. **WHMCS Activity Log** — relevant entries from Step 3
4. **Proxmox logs** — output from Step 4 and Step 5
5. **Firewall configuration** — output from Step 6 (if applicable)
6. **VM status** — the current status of the affected VM in both WHMCS and the Proxmox web UI
7. **Module version** — the version of the PUQ Proxmox KVM module you are running
8. **WHMCS version** and **PHP version**
9. **Proxmox VE version**
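
For points 7-9, the module version is shown on the addon's dashboard and in the WHMCS Addon Modules list; the PHP and Proxmox versions can be read from the command line:

```bash
# On the WHMCS host:
php -v

# On the Proxmox node:
pveversion -v
```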

> **Tip:** When collecting logs, try to reproduce the issue as closely to the log collection time as possible. This ensures the relevant entries are captured and easy to identify.


<!-- sync:bbff37eb3070329b -->

# Questions and Answers

Frequently asked questions about the PUQ Proxmox KVM module — clarifications on how WHMCS server entries, server groups, module-level settings and the Proxmox API interact during VM provisioning.

# Server Groups vs. Target Node


## Question

> What is the difference between a WHMCS server group and the **Target Node** parameter in the product configuration? How do I make sure that virtual machines are always deployed on one specific server?

## Answer

Allow us to clarify how WHMCS servers, server groups and the **Target Node** parameter work together.

In WHMCS, a **server** represents a Proxmox host. If that host is part of a Proxmox cluster, then through that single server entry WHMCS has access to all nodes within the cluster. In this case, the **Target Node** parameter allows you to specify exactly which node inside that cluster the virtual machine should be deployed on.

**Server groups** and **Target Node** are two different levels of control:

- **Server group** — this is a WHMCS-level setting. When a VM is being deployed, WHMCS selects a server from the group based on the rules you have configured for that group (e.g. fill, round-robin, etc.).
- **Target Node** — this is a module-level setting. Once WHMCS has selected a server (or cluster), the module then deploys the virtual machine on the specific node defined by the **Target Node** parameter on that server.

If you want to ensure that virtual machines are always created on one specific standalone server (not part of a cluster), the correct approach is to create a dedicated server group containing only that one server. WHMCS will then always select it for deployment, and **Target Node** will simply resolve to its single node.

## Where the Target Node parameter lives

The **Target Node** parameter is configured per product in the **VM Configuration** section of the Module Settings tab (**Setup > Products/Services > Products/Services > [product] > Module Settings**):

![Server Group (WHMCS-level) and Target Node (module-level) on the product Module Settings tab](https://doc.puq.info/uploads/images/gallery/2026-04/embedded-image-6emo1n6c.png)

The dropdown is populated via AJAX from the Proxmox server (or cluster) selected by WHMCS. Leave the value as **automatically** to let the module pick the node with the most free resources, or choose a specific node to pin all deployments of this product to it.

## Summary

| Level | Setting | Controls |
|-------|---------|----------|
| WHMCS | Server group | Which **server** (or cluster entry) is chosen for deployment |
| Module | Target Node | Which **node** inside the chosen Proxmox cluster receives the VM |

To always deploy on one specific standalone host: put that host alone in a dedicated server group, assign the group to the product, and the **Target Node** will naturally resolve to its only node.

<!-- sync:d3e46b58e40e04d0 -->

