Proxmox Networking
Check the /etc/network/interfaces file contents on the Proxmox node:
auto lo
iface lo inet loopback
iface ens65f0 inet manual
iface ens65f1 inet manual
iface ens2f0 inet manual
iface ens2f1 inet manual
iface ens2f2 inet manual
iface ens2f3 inet manual
auto vmbr0
iface vmbr0 inet manual
    bridge-ports ens65f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 172

auto vmbr0.172
iface vmbr0.172 inet static
    address 172.190.1.100/24
    gateway 172.190.1.1
...
According to this interfaces configuration, the Proxmox node uses a VLAN-aware bridge vmbr0 and has a VLAN subinterface vmbr0.172 with IP 172.190.1.100/24 and gateway 172.190.1.1. VLAN 172 is tagged on traffic coming in and out of vmbr0, and the IP network is 172.190.1.0/24.
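You can double-check this on the node itself with standard iproute2 tools (a quick sketch, not part of the original configuration):
bridge vlan show dev vmbr0     # VLAN 172 should be listed on the bridge
ip -d link show vmbr0.172      # the VLAN subinterface and its parent bridge
ip addr show vmbr0.172         # should show 172.190.1.100/24
ip route | grep default        # default via 172.190.1.1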
That means when you create a new VM on Proxmox you can set these parameters:
- IP address 172.190.1.X, where X cannot be 1 (the gateway) or 100 (the Proxmox node itself uses it for its management console).
- Netmask 255.255.255.0
- Gateway 172.190.1.1
- Any DNS server IP addresses, e.g. 1.1.1.1, 8.8.8.8
Don't forget to enable the VLAN 172 tag on the vmbr0 network interface of the newly created VM!
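A minimal sketch of doing this from the CLI, assuming a hypothetical VM ID 101 and the example address 172.190.1.50 (the ipconfig0/nameserver options only apply to cloud-init-enabled guests; otherwise configure the IP settings inside the guest OS):
qm set 101 --net0 virtio,bridge=vmbr0,tag=172              # attach the NIC to vmbr0 with VLAN tag 172
qm set 101 --ipconfig0 ip=172.190.1.50/24,gw=172.190.1.1   # cloud-init: IP, netmask and gateway
qm set 101 --nameserver "1.1.1.1 8.8.8.8"                  # cloud-init: DNS servers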
FC Storage: Huawei Dorado
Removing Orphaned VM Disks
Sometimes after FC storage configuration, lsblk shows fc_vg1-vm--100--disk--0, but there is no such disk in the Proxmox UI and VM 100 no longer exists:
sdc 8:32 0 15T 0 disk
└─mpatha 252:5 0 15T 0 mpath
└─fc_vg1-vm--100--disk--0 252:7 0 50G 0 lvm
sdd 8:48 0 15T 0 disk
└─mpatha 252:5 0 15T 0 mpath
└─fc_vg1-vm--100--disk--0 252:7 0 50G 0 lvm
Confirm that the logical volume fc_vg1-vm--100--disk--0 is visible in lsblk but not listed in lvscan, lvs, or lvdisplay. This usually means that the volume is not activated, or that the device-mapper entry exists but the LVM metadata is not properly imported.
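If it is just a deactivated LV, activating it may be enough. A quick sketch, assuming the VG/LV names from the lsblk output above:
lvchange -ay fc_vg1/vm-100-disk-0   # try to activate the logical volume
lvs fc_vg1                          # check whether it now shows up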
Check the pvscan, vgscan and lvscan output:
root@pve01:~# pvscan
PV /dev/sdb3 VG pve lvm2 [445.62 GiB / 16.00 GiB free]
PV /dev/mapper/mpatha VG fc_vg1 lvm2 [<15.02 TiB / <15.02 TiB free]
Total: 2 [15.45 TiB] / in use: 2 [15.45 TiB] / in no VG: 0 [0 ]
root@pve01:~# vgscan
Found volume group "pve" using metadata type lvm2
Found volume group "fc_vg1" using metadata type lvm2
root@pve01:~# lvscan
ACTIVE '/dev/pve/data' [<319.11 GiB] inherit
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
If lvdisplay or lvs confirms the volume, inspect its contents before deleting:
mkdir /mnt/vm100
mount /dev/fc_vg1/<VOLUME-NAME> /mnt/vm100
ls /mnt/vm100
umount /mnt/vm100
Now you can remove the logical volume:
lvremove /dev/fc_vg1/<VOLUME-NAME>
If LVM still doesn't show it, use dmsetup.
Confirm existence and unmap the orphaned volume:
ls -l /dev/mapper/fc_vg1-vm--100--disk--0
dmsetup info -c
dmsetup remove /dev/mapper/fc_vg1-vm--100--disk--0
If this fails with "device is busy", ensure the volume is not mounted or in use:
mount | grep fc_vg1
lsof | grep vm-100-disk
Optional: Remove stale symlinks:
ls -l /dev/fc_vg1/
rm /dev/fc_vg1/vm-100-disk-0
Now you can create VM 100 again with a new empty virtual disk attached:
qm create 100 --name myvm --memory 2048 --cores 2
qm set 100 --scsi0 fc_vg1:50 # Creates 50G disk on fc_vg1
qm set 100 --net0 virtio,bridge=vmbr0
qm set 100 --boot order=scsi0
You can see the new volume created on the Proxmox node:
root@pve01:~# dmsetup info -c
Name Maj Min Stat Open Targ Event UUID
fc_vg1-vm--100--disk--0 252 7 L--w 1 1 0 LVM-zJMmHc69FEw8pTlZ8ZoWrklBz1E2f82LfWkSC6YieIGjYPRI4k1JR2LafiWkAQ9y
ALUA Black Magic
Verify that multipath is working properly and check which physical devices back mpatha:
root@pve01:~# multipath -ll
688988.639160 | sdd: prio = const (setting: emergency fallback - alua failed)
688988.639598 | sdc: prio = const (setting: emergency fallback - alua failed)
mpatha (360c8408100bee318090978b000000000) dm-5 HUAWEI,XSG1
size=15T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 23:0:0:1 sdd 8:48 active ready running
`- 22:0:0:1 sdc 8:32 active ready running
If the multipath setup shows:
prio = const (setting: emergency fallback - alua failed)
it means that Proxmox, specifically multipathd, attempted to use ALUA (Asymmetric Logical Unit Access) for path prioritization, which Huawei arrays normally support, but it failed, so multipathd fell back to the default constant path priority.
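One way to check whether the array actually advertises ALUA is to look at the TPGS field in the standard INQUIRY data (a sketch using sg3_utils; TPGS greater than 0 means target port group support, i.e. ALUA, is reported):
sg_inq /dev/sdc | grep -i tpgs   # TPGS=1 or TPGS=3 means the LUN reports ALUA support
sg_rtpg /dev/sdc                 # report target port groups and their ALUA states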
Check the vendor and product identification of the block device; note that these SCSI fields are padded with trailing spaces:
root@pve01:~# sg_inq /dev/sdc | grep -i "vendor\|product"
Vendor identification: HUAWEI
Product identification: XSG1
Product revision level: 6000
Add those strings, including the trailing spaces, to /etc/multipath.conf:
defaults {
    user_friendly_names yes
    find_multipaths yes
}

devices {
    device {
        vendor "HUAWEI "
        product "XSG1 "
        hardware_handler "1 alua"
        prio alua
        path_selector "service-time 0"
        path_grouping_policy group_by_prio
        failback immediate
        fast_io_fail_tmo 10
        dev_loss_tmo 60
        no_path_retry 18
        # path_checker tur
        # features "1 queue_if_no_path"
    }
}
Restart multipathd:
systemctl restart multipathd
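To confirm that the new device section was picked up, you can also reload the maps and inspect the running configuration (a quick sketch):
multipath -r                                   # force a reload of the multipath maps
multipathd show config | grep -A 10 HUAWEI     # the HUAWEI/XSG1 device section should list prio alua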
Now it works just fine:
root@pve01:~# multipath -ll
mpatha (360c8408100bee318090978b000000000) dm-5 HUAWEI,XSG1
size=15T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 22:0:0:1 sdc 8:32 active ready running
`- 23:0:0:1 sdd 8:48 active ready running
Disabled Shared Option
If you can't use the FC storage on all nodes, for example you can see it only on Proxmox node pve02 but not on pve01, that means the shared option is disabled.
Check the Proxmox storage configuration:
root@pve01:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

lvm: huawei-lvm
    vgname huawei-vg
    content rootdir,images
    saferemove 0
    shared 0
You just need to set shared 1, and because /etc/pve/storage.cfg is cluster-wide, all Proxmox nodes in the cluster immediately share this config.
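Instead of editing the file by hand, the flag can also be toggled with pvesm (a sketch, assuming the storage ID huawei-lvm from the config above):
pvesm set huawei-lvm --shared 1   # mark the LVM storage as shared across the cluster
pvesm status                      # verify the storage is now active (run on each node)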
Lost VG Problem
Problem: the Huawei LUN is visible as /dev/mapper/mpatha and multipath -ll confirms it is healthy (mpatha, 15T), but pvdisplay /dev/mapper/mpatha fails, pvs and vgs show only the local pve volume group, and vgscan sees a mysterious fc_vg that is not listed in vgs.
If vgscan sees a VG called fc_vg, that means the metadata is there, but the VG is not activated.
Check that fc_vg uses mpatha and activate the VG:
vgdisplay fc_vg
vgchange -ay fc_vg
pvs
vgs
lvs fc_vg
Update the storage configuration (example):
lvm: huawei-lvm
    vgname fc_vg
    content rootdir,images
    saferemove 0
    shared 1
If vgdisplay fc_vg fails:
root@pve01:~# vgdisplay fc_vg
Volume group "fc_vg" not found
Cannot process volume group fc_vg
It means your Proxmox node detects LVM metadata on the disk and vgscan shows fc_vg, but the volume group is not activated and LVM cannot access it directly. This often happens when:
- The LUN was initialized on another node, but the PV was never activated on this node.
- The PV is not being mapped properly as a device.
Try manually scanning mpatha and activating the volume group:
pvscan /dev/mapper/mpatha
vgchange -ay fc_vg
When pvscan sees the volume, the LVM metadata is present and correctly shows the VG name fc_vg:
pvscan
PV /dev/mapper/mpatha VG fc_vg lvm2 [<15.02 TiB / 14.97 TiB free]
But if vgchange -ay fc_vg fails with Volume group "fc_vg" not found, this means the VG metadata exists on the disk, but it is not loaded into LVM's active configuration on this node.
You have to import and activate the foreign volume group:
vgimport fc_vg
vgchange -ay fc_vg
pvs
vgs
lvs fc_vg
vgimport only works on VGs that were explicitly exported using vgexport; if fc_vg was never exported, vgimport refuses to import it.
In that case, import the foreign VG using vgimportclone. It will rename the VG if there is a name conflict (in this setup it became fc_vg1) and register it on this node:
vgimportclone /dev/mapper/mpatha
vgs # confirm new VG name
vgchange -ay <new_vg_name>
pvs
lvs
Don't forget to update your /etc/pve/storage.cfg:
lvm: huawei-lvm
    vgname fc_vg1
    content rootdir,images
    saferemove 0
    shared 1
Optional: if you only have one node active and want to rename the VG back to fc_vg:
vgrename <new_vg_name> fc_vg
vgchange -ay fc_vg
Oh, one last hack to ensure you're working with the right VG: a deep check using hexdump. You should see LVM2 in the header, and further down the VG name (fc_vg1 here) in the metadata:
root@pve01:~# hexdump -C /dev/mapper/mpatha | head -n 20
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 4c 41 42 45 4c 4f 4e 45 01 00 00 00 00 00 00 00 |LABELONE........|
00000210 d6 4a f9 69 20 00 00 00 4c 56 4d 32 20 30 30 31 |.J.i ...LVM2 001|
00000220 62 67 7a 49 5a 47 66 4f 6a 4c 45 31 75 65 39 44 |bgzIZGfOjLE1ue9D|
00000230 6d 4a 4a 79 4b 35 72 65 57 64 6f 6e 43 35 56 77 |mJJyK5reWdonC5Vw|
00000240 00 00 00 00 05 0f 00 00 00 00 10 00 00 00 00 00 |................|
00000250 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000260 00 00 00 00 00 00 00 00 00 10 00 00 00 00 00 00 |................|
00000270 00 f0 0f 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000280 00 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 |................|
00000290 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001000 a9 68 2a ed 20 4c 56 4d 32 20 78 5b 35 41 25 72 |.h*. LVM2 x[5A%r|
00001010 30 4e 2a 3e 01 00 00 00 00 10 00 00 00 00 00 00 |0N*>............|
00001020 00 f0 0f 00 00 00 00 00 00 08 00 00 00 00 00 00 |................|
00001030 c8 05 00 00 00 00 00 00 7d 58 b7 4e 00 00 00 00 |........}X.N....|
00001040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001200 66 63 5f 76 67 31 20 7b 0a 69 64 20 3d 20 22 7a |fc_vg1 {.id = "z|
That's it. Easy, isn't it? Enjoy using your huge FC storage on Proxmox!