Proxmox 9 SAN Setup

At work we have two Dell PowerEdge R6515 servers and a Dell ME5012 SAN for a lab environment (we use R650/R750 servers and the ME5012 for prod). An MSP we previously partnered with handled the setup of the prod environments, since we were going through a bunch of upgrades all at once and didn't have time to learn the newer SAN setups (the last SAN I used was an EqualLogic PS6100 deployed back in 2013 with Dell's assistance, and our next upgrade after that was the Dell VRTX unit). So, with management approval, I ordered the lab equipment so we could learn it and support it on our own, since we were growing the internal team.

I have been using VMware for close to 20 years now, starting back on VMware Server, which installed on top of Windows Server, before quickly migrating to ESXi when it was released. But I had never gotten certified. So when I rolled out the lab environment at work in 2023, I bought the VMUG licensing so the team could learn it; honestly, I wasn't aware of the whole Broadcom thing at the time. Fast forward two years, and Broadcom has changed the VMUG program so that you need at least a base certification to get the free licenses (doesn't that defeat the purpose of getting the license to learn it so you can get certified? I digress), and I could no longer do the annual renewal of the free licenses.

Since I had started playing around with Proxmox 1.5–2 years ago for home lab stuff, and we had started deploying mini PCs running Proxmox at our smaller remote locations that just needed some basic monitoring tools, I decided to try upgrading our lab to Proxmox so we could start experimenting more with it, get used to the SAN config, and so on.

Setup

Step 1: Install Packages

First thing we need to do is install the packages for iSCSI and multipathing.

Bash
apt update
apt install open-iscsi multipath-tools -y

Step 2: Enable services

Enable multipath at boot and start it now:

Bash
systemctl enable multipathd
systemctl start multipathd
systemctl enable open-iscsi
systemctl start open-iscsi
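
It's also worth grabbing this host's initiator IQN at this point, since you'll need it on the ME5 side when you map volumes to the host. open-iscsi keeps it in /etc/iscsi/initiatorname.iscsi:

Bash
cat /etc/iscsi/initiatorname.iscsi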

Step 3: Discover targets on each fabric

Run discovery against each controller subnet:

Bash
iscsiadm -m discovery -t sendtargets -p 10.1.1.10
iscsiadm -m discovery -t sendtargets -p 10.1.2.10

After this, you should see one IQN (your Dell array) listed multiple times with different portals.
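
Each line is one portal, in the form ip:port,tpgt followed by the IQN. For example (using the lab array's IQN that shows up later in this post; the exact portal list depends on how many array ports are configured), it looks something like:

Bash
10.1.1.10:3260,1 iqn.1988-11.com.dell:01.array.bc305b5de342
10.1.2.10:3260,5 iqn.1988-11.com.dell:01.array.bc305b5de342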

Step 4: Log into all portals

Easiest way to log into every discovered portal:

Bash
iscsiadm -m node -L all

(That’s a capital L for login-all.)
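
Depending on your iscsid.conf defaults, logins made this way may not be re-established automatically at boot (node records typically default to manual startup on Debian). If you want them to come back on their own, you can flip all of the discovered node records to automatic:

Bash
iscsiadm -m node --op=update -n node.startup -v automatic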

Step 5: Verify iSCSI sessions

Check that your host has sessions to both controllers:

Bash
iscsiadm -m session

You should see lines with each SAN IP you connected to.
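
Something along these lines (session numbers are illustrative):

Bash
tcp: [1] 10.1.1.10:3260,1 iqn.1988-11.com.dell:01.array.bc305b5de342 (non-flash)
tcp: [2] 10.1.2.10:3260,5 iqn.1988-11.com.dell:01.array.bc305b5de342 (non-flash)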

Step 6: Configure multipath

Create a /etc/multipath.conf:

Bash
nano /etc/multipath.conf

There are a couple of different options for the config file. First up is a minimal config:

Bash
defaults {
    user_friendly_names yes
    find_multipaths yes
}

blacklist {
    # Keep your local boot/system disks out if needed; adjust as appropriate
    devnode "^sda"
}
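
To find the WWID used in the multipaths sections below, you can either read it from multipath -ll output (it's the long string in parentheses next to the mpathX name) or query one of the underlying path devices with scsi_id (assuming your iSCSI LUN currently shows up as, say, /dev/sdc):

Bash
# Query a path device directly (adjust /dev/sdc to one of your iSCSI disks)
/lib/udev/scsi_id -g -u -d /dev/sdc
# Or read it from the existing multipath maps
multipath -ll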

Tune /etc/multipath.conf to give this LUN a friendly alias, e.g.:

Bash
defaults {
    user_friendly_names yes
    find_multipaths yes
}

multipaths {
    multipath {
        wwid    3600c0ff000f94a2d7816006801000000
        alias   ME5_LUN1
    }
}

blacklist {
    devnode "^sda"   # don’t multipath your OS disk
}

In my setup, we have two LUNs in the lab SAN, so I needed to alias both, and I also added some additional device config for the ME5012:

Bash
defaults {
    find_multipaths yes
    user_friendly_names yes
    # optional, modern default is service-time; keep it explicit:
    path_selector "service-time 0"
}

# Keep local/boot disks out of multipath (adjust as needed)
blacklist {
    devnode "^sda"
}

# Per-array tuning for Dell EMC ME5 iSCSI
devices {
    device {
        vendor                  "DellEMC"
        product                 "ME5"
        path_checker            tur
        hardware_handler        "1 alua"
        prio                    alua
        path_grouping_policy    group_by_prio
        failback                immediate
        fast_io_fail_tmo        5
        dev_loss_tmo            600
        no_path_retry           queue
    }
}

# Your explicit LUN aliases
multipaths {
    multipath {
        wwid   3600c0ff000f94a2d7816006801000000
        alias  LabB-SAS-DS
    }
    multipath {
        wwid   3600C0FF000F9477691A7016501000000
        alias  LabA-SAS-DS
    }
}

Once the config file is built, we need to reload multipath:

Bash
systemctl restart multipathd
multipath -ll

If you gave the LUN an alias, or you configured multiple LUNs, they should show up under those names.

If the second LUN isn’t presented yet, it’ll simply show up under /dev/mapper/LabA-SAS-DS once the array maps it and you rescan or log in to the target.
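
If you need to pick up a newly mapped LUN without rebooting, a rescan of the existing sessions plus a multipath map reload is usually enough:

Bash
iscsiadm -m session --rescan
multipath -r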

Step 7. Add storage in Proxmox

In the Proxmox web UI:
Datacenter → Storage → Add → LVM (or ZFS) on top of the multipath device

Point it at /dev/mapper/mpatha (or the alias you set in multipath.conf), not /dev/sdX.
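
If the UI only offers existing volume groups rather than the raw multipath device, a rough CLI sketch for the LVM route looks like this (the LabB-SAS-DS alias comes from the multipath config above; the VG and storage names here are made up, and the same steps are covered in more detail in the notes below):

Bash
# Create the PV/VG on the multipath device (not the underlying /dev/sdX paths)
pvcreate /dev/mapper/LabB-SAS-DS
vgcreate LabB_vg /dev/mapper/LabB-SAS-DS

# Register the VG as Proxmox storage (same as Datacenter → Storage → Add → LVM)
pvesm add lvm LabB-LVM --vgname LabB_vg --content images,rootdir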

Verification

Run multipath -ll — you should see one WWID with 4 or 8 paths beneath it (depending on how many SAN ports are zoned).

Bash
mpatha (3600c0ff000f94a2d7816006801000000) dm-8 DellEMC,ME5
size=3.3T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:1 sdc 8:32 active ready running
| `- 16:0:0:1 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 14:0:0:1 sde 8:64 active ready running
  `- 17:0:0:1 sdg 8:96 active ready running

Test redundancy: unplug one cable → I/O should continue without interruption.
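
While the cable is out, the affected paths should drop to a failed/faulty state in multipath -ll and recover on their own once the link is back; watching it live makes the failover easy to see:

Bash
watch -n 2 multipath -ll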


Notes

Removing Extra iSCSI Initiators

The Dell ME5012 technically has 4 NICs per controller, so when I first started testing this, I had added all 8 IPs. Only two NICs per controller are plugged in for our lab environment (one per switch; we don't need as much redundancy in the lab). So I needed to remove all the extra connections from the config so they didn't create unnecessary errors.

Step 1: List current iSCSI nodes

Bash
iscsiadm -m node

This shows all IQNs + portals saved locally, e.g.:

Bash
10.1.1.11:3260,1 iqn.1988-11.com.dell:01.array.bc305b5de342
10.1.2.11:3260,5 iqn.1988-11.com.dell:01.array.bc305b5de342
...

Step 2: Clean login sessions

If you previously logged in and there are stale sessions, log them out first:

Bash
iscsiadm -m node -p 10.1.1.11 -u
iscsiadm -m node -p 10.1.1.12 -u
iscsiadm -m node -p 10.1.2.11 -u
iscsiadm -m node -p 10.1.2.12 -u

Step 3: Delete unwanted portals

For each unplugged NIC address, run:

Bash
iscsiadm -m node -p 10.1.1.11 --op=delete
iscsiadm -m node -p 10.1.1.12 --op=delete
iscsiadm -m node -p 10.1.2.11 --op=delete
iscsiadm -m node -p 10.1.2.12 --op=delete

That removes those entries from /etc/iscsi/nodes/.
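
You can confirm against the node database directly; each portal that is still configured has its own directory under the array's IQN, and the deleted ones should be gone:

Bash
ls /etc/iscsi/nodes/iqn.1988-11.com.dell:01.array.bc305b5de342/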

Step 4: Confirm cleanup

Re-list:

Bash
iscsiadm -m node

Rebuilding LVM Volume Group

The second LUN I was setting up for Proxmox already had an LVM volume on it from when I had tinkered with XCP-NG at one point, so I had to remove that before creating the LVM volume for Proxmox.

Big Warning

These steps will destroy all data on that LUN. Make absolutely sure you don’t need anything from it before proceeding.

Remove Old Volume Group

Step 1: Identify the device in Proxmox

Since you gave the alias LabA-SAS-DS, it should appear as:

Bash
ls -l /dev/mapper/LabA-SAS-DS

That symlink points to the actual multipath device (/dev/dm-X).

Step 2: Deactivate existing LVM (if active)

First check if Proxmox has it active:

Bash
vgdisplay

If you see a VG corresponding to that LUN, deactivate it:

Bash
vgchange -an <VGNAME>
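
If it isn't obvious which VG lives on that LUN, pvs maps each physical volume to its volume group; the multipath device (or its dm-X node) should show up in the PV column:

Bash
pvs -o pv_name,vg_name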

Step 3: Wipe old LVM metadata

Clean out any volume signatures so Proxmox sees the disk as new:

Bash
wipefs -a /dev/mapper/LabA-SAS-DS

Sometimes LVM signatures persist in multiple places, so it's safer to also run:

Bash
pvremove -ff -y /dev/mapper/LabA-SAS-DS

Step 4: Rescan in Proxmox

Either reboot or rescan:

Bash
multipath -ll
pvs

At this point, /dev/mapper/LabA-SAS-DS should appear empty (no PV/VG/LV).
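
To double-check that nothing was left behind, wipefs without -a just lists any remaining signatures (it should print nothing), and lsblk should show no child LVs under the device:

Bash
wipefs /dev/mapper/LabA-SAS-DS
lsblk /dev/mapper/LabA-SAS-DS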

Add New Volume Group(s)

Step 1: Check multipath

multipath -ll should still list your aliases (LabA-SAS-DS, LabB-SAS-DS) with healthy paths.

  • If LabA-SAS-DS shows up but has no LVM metadata anymore, you’re good.
  • If it still reports an existing VG, it means the wipe didn’t succeed → re-run wipefs -a and pvremove -ff -y.

Step 2: Initialize a fresh PV

Once the device is empty, create a new physical volume:

Bash
pvcreate /dev/mapper/LabA-SAS-DS

Step 3: Create a new VG

Bash
vgcreate LabA_vg /dev/mapper/LabA-SAS-DS

Verify:

Bash
vgs

Now vgs should show LabA_vg.

Step 4: Add to Proxmox

You have two options:

  • Via Web UI (easier):
    • Datacenter → Storage → Add → LVM.
    • Select LabA_vg as the Volume Group.
    • Give it an ID (e.g., LabA-LVM).
    • Add.
  • Via CLI: edit /etc/pve/storage.cfg and add:
Bash
lvm: LabA-LVM
    vgname LabA_vg
    content images,rootdir

Then Proxmox will see it immediately.
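
Either way, you can confirm Proxmox picked up the new storage from the CLI:

Bash
pvesm status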
