After the migration to the new HP Microserver, there have been a few teething issues:
- ZFS is very slow with 8GB RAM
- I want to move Proxmox to SSD, along with all the VMs
- I want storage to be placed onto the 3TB drives with ZFS
- I’ve been running out of memory because of the ZFS ARC cache; at times I can’t even SSH into the VMs
I’ve ordered 16GB of RAM for this server, as I keep running out of memory. The plan is to cap ZFS at 1GB of RAM, accepting slower file transfers, while the VMs live on the SSD for faster response times.
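As a rough sketch of that cap (assuming the stock OpenZFS module options that ship with Proxmox), the ARC maximum can be pinned with a modprobe option; the 1GiB figure below is just my planned value in bytes:
# Cap the ZFS ARC at 1 GiB (zfs_arc_max is specified in bytes)
echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
# Rebuild the initramfs so the option is picked up early, then reboot
update-initramfs -u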
Steps
- Back up VMs to a spare 500GB drive I have lying around (a vzdump sketch follows this list)
- Put the SSD into port 1 and move the HDDs to ports 3 & 4, since ports 1 & 2 are 6Gbps while ports 3, 4 and the optical drive bay are 3Gbps
- Additionally, there is no option to select the boot order when running the Microserver in AHCI mode, so the boot drive has to go in port 1.
- Install Proxmox onto SSD with ext4 as filesystem
- As before, set up the no-subscription repos, then apt update && apt dist-upgrade (sketched below)
- Set up the Linux bridges again & reboot (an example /etc/network/interfaces is sketched below)
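A rough sketch of the backup step, assuming the spare 500GB drive shows up as /dev/sdb1 and gets mounted on /mnt/; vzdump is Proxmox’s stock backup tool, and the VM IDs are the ones visible in the lsblk output further down:
# Mount the spare drive (check lsblk first, the device name is an assumption)
mount /dev/sdb1 /mnt/
# Dump each VM as a compressed backup onto the spare drive
vzdump 100 101 102 103 105 --mode stop --compress lzo --dumpdir /mnt/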
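And a sketch of the repo and bridge steps; the Debian codename (stretch), the NIC name (enp2s0) and the addresses are placeholders for whatever the fresh install actually uses:
# Disable the enterprise repo and add the no-subscription one
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade
For the bridge, /etc/network/interfaces ends up looking roughly like this:
# Physical NIC stays unconfigured, the bridge carries the address
iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0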
Restore VMs
- Mount the 500GB HDD that holds the VM backups to /mnt/
- Copy the backups to /var/lib/vz/dump/, which places them into “local” (commands sketched after this list)
- From the GUI, restore the VMs to local-lvm
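A minimal sketch of the mount-and-copy steps, assuming the backup drive is still /dev/sdb1 and the dumps were made with lzo compression (the extension changes with the compression used):
# Mount the drive holding the vzdump archives
mount /dev/sdb1 /mnt/
# Copy the dumps into the path backing the "local" storage
cp /mnt/*.vma.lzo /var/lib/vz/dump/
The restore can also be done from the CLI with qmrestore instead of the GUI, e.g. qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage local-lvm.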
Create ZFS Mirror pool from CLI
Format the drives
root@sequoya:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 118.8G 0 part
├─pve-root 253:0 0 29.5G 0 lvm /
├─pve-swap 253:1 0 7G 0 lvm [SWAP]
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 65.5G 0 lvm
│ ├─pve-data 253:5 0 65.5G 0 lvm
│ ├─pve-vm--100--disk--0 253:6 0 32G 0 lvm
│ ├─pve-vm--101--disk--0 253:7 0 16G 0 lvm
│ ├─pve-vm--105--disk--0 253:8 0 32G 0 lvm
│ ├─pve-vm--102--disk--0 253:9 0 20G 0 lvm
│ └─pve-vm--103--disk--0 253:10 0 20G 0 lvm
└─pve-data_tdata 253:3 0 65.5G 0 lvm
└─pve-data-tpool 253:4 0 65.5G 0 lvm
├─pve-data 253:5 0 65.5G 0 lvm
├─pve-vm--100--disk--0 253:6 0 32G 0 lvm
├─pve-vm--101--disk--0 253:7 0 16G 0 lvm
├─pve-vm--105--disk--0 253:8 0 32G 0 lvm
├─pve-vm--102--disk--0 253:9 0 20G 0 lvm
└─pve-vm--103--disk--0 253:10 0 20G 0 lvm
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 465.8G 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 1007K 0 part
├─sdc2 8:34 0 512M 0 part
└─sdc3 8:35 0 2.7T 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 2.7T 0 part
My 3TB drives are mapped to /dev/sdc and /dev/sdd; the first step is to partition them.
root@sequoya:/# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): d
Partition number (1-3, default 3):
Partition 3 has been deleted.
Command (m for help): d
Partition number (1,2, default 2):
Partition 2 has been deleted.
Command (m for help): d
Selected partition 1
Partition 1 has been deleted.
Command (m for help): g
Created a new GPT disklabel (GUID: 02DC54A3-4E42-4EF6-8337-0BC6414A1E81).
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Do this for both disks and leave them unformatted.
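If you’d rather not repeat the interactive fdisk session on the second disk, sgdisk (from the gdisk package, if installed) does the same thing non-interactively; double-check the device name before wiping:
# Destroy the old MBR/GPT structures on the second 3TB drive and write a fresh empty GPT
sgdisk --zap-all /dev/sdd
sgdisk --clear /dev/sdd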
Create a new mirrored ZFS pool called ‘zfs’ with the drives:
zpool create zfs mirror /dev/sdc /dev/sdd
Setup compression:
zfs set compression=lz4 zfs
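As a quick sanity check (not part of the original steps), confirm the mirror is healthy and compression took effect:
# Both drives should show up under a single healthy mirror vdev
zpool status zfs
# Should report compression=lz4 on the pool's root dataset
zfs get compression zfs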
Make a directory for a directory-style Proxmox datastore:
mkdir -p /zfs/data
My ZFS pool automounted to /zfs and does so at every boot; there are no fstab rules for it, as ZFS manages its own mounts.
Add a directory datastore in Proxmox pointing at /zfs/data, and create a thin-provisioned ZFS storage attached to the ‘zfs’ pool.
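The same two storages can be added from the CLI with pvesm; the storage IDs ("zfs-dir", "zfs-thin") and the content types below are just the choices I’d make, not anything Proxmox mandates:
# Directory storage on the pool, e.g. for ISOs, templates and backups
pvesm add dir zfs-dir --path /zfs/data --content iso,vztmpl,backup
# Thin-provisioned ZFS storage for VM disks, backed by the 'zfs' pool
pvesm add zfspool zfs-thin --pool zfs --sparse 1 --content images,rootdir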
Done!
Conclusion
- Performance is now substantially better and RAM usage is down.
- The ZFS ARC cache no longer needs a maximum limit set; its usage seems much more reasonable now. I imagine this is because the data in the ZFS pool is accessed infrequently.
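For keeping an eye on it, the current ARC size can be read straight from the kernel stats (just a quick check, not from the original notes):
# Print the current ARC size in MiB
awk '/^size/ {printf "%.0f MiB\n", $3/1048576}' /proc/spl/kstat/zfs/arcstats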
Still to do
- Still need to set up GPU passthrough.
- Still waiting on the 16GB RAM upgrade to arrive…