SolusVM 2.0 requires an empty Volume Group (VG) to provision VPSes. That means:
- The VG should exist,
- but contain no Logical Volumes (LVs) and no filesystem.
Here's the exact safe sequence to wipe the existing mount and leave just an empty VG ready for SolusVM 2.0.
Reset Storage for SolusVM 2.0: Empty VG
⚠️ Warning: This will destroy all data inside the old mount. Back up first.
1. Stop usage & unmount
Check if it's mounted:
mount | grep storage1
Unmount:
umount /storage1
If busy:
lsof +D /storage1
fuser -vm /storage1
Kill or stop services, then retry umount.
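The busy-check and retry above can be scripted. A minimal sketch, assuming the /storage1 mount point from this guide; note that `fuser -km` forcibly kills every process holding the mount, so stop services gracefully first where possible:

```shell
#!/bin/sh
MNT=/storage1    # mount point from the guide

if mountpoint -q "$MNT"; then
    # -k kills processes holding files under the mount; -m treats the
    # argument as a mounted filesystem. Destructive to those processes.
    fuser -km "$MNT" 2>/dev/null
    sleep 2
    umount "$MNT" && echo "unmounted $MNT"
else
    echo "$MNT is not mounted; nothing to do"
fi
```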
2. Remove existing Logical Volumes (LVs)
List them:
lvdisplay
Remove all of them at once (passing the VG name removes every LV it contains, and works even when the /dev/vg_raid1/* device nodes are absent):
lvremove -y vg_raid1
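If you want to see exactly what is being deleted, the LVs can be enumerated and removed one at a time instead. A sketch assuming the vg_raid1 name used throughout this guide:

```shell
#!/bin/sh
VG=vg_raid1    # VG name from the guide

if vgs "$VG" >/dev/null 2>&1; then
    # Print each LV's device path, then remove it.
    lvs --noheadings -o lv_path "$VG" | while read -r lv; do
        echo "removing $lv"
        lvremove -y "$lv"
    done
else
    echo "VG $VG not found; nothing to remove"
fi
```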
3. Remove the old VG
vgremove -y vg_raid1
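One caveat: if a filesystem was ever written directly to the RAID device (rather than to an LV inside the VG), its signature can survive vgremove and confuse a later vgcreate. You can optionally clear all on-disk signatures first; a sketch assuming the /dev/md127 device from the next step (`wipefs -a` is destructive):

```shell
#!/bin/sh
DEV=/dev/md127    # RAID device from the guide

if [ -b "$DEV" ]; then
    wipefs "$DEV"       # with no options: list signatures, read-only
    wipefs -a "$DEV"    # erase all signatures. DESTRUCTIVE.
else
    echo "$DEV not present; skipping"
fi
```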
4. Recreate empty VG on RAID device
SolusVM will detect this VG and use it:
vgcreate vg_raid1 /dev/md127
5. Verify it is empty
vgs
lvs
Expected output:
- vgs shows vg_raid1 exists with free space available.
- lvs shows no logical volumes.
✅ At this point, vg_raid1 is clean and empty, exactly what SolusVM 2.0 requires for managing KVM/Xen VPS storage.
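The steps above can be collected into a single script. A sketch under the same assumptions (mount /storage1, VG vg_raid1, RAID device /dev/md127); it does nothing unless run with an explicit --yes flag, since every step is destructive:

```shell
#!/bin/sh
set -u
MNT=/storage1
VG=vg_raid1
DEV=/dev/md127

if [ "${1:-}" != "--yes" ]; then
    echo "Dry run: this would destroy all data in $VG on $DEV. Re-run with --yes."
else
    mountpoint -q "$MNT" && umount "$MNT"   # 1. unmount if mounted
    lvremove -y "$VG"                       # 2. drop every LV in the VG
    vgremove -y "$VG"                       # 3. drop the VG itself
    vgcreate "$VG" "$DEV"                   # 4. recreate it empty
    vgs "$VG" && lvs "$VG"                  # 5. verify
fi
```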
