- OMV 6.x
- geaves
- December 21, 2021
- December 27, 2021
- Official Post
Quote from geaves
either the proxmox kernel is not used, or it is a must-have to enable zfs to function, from a user's perspective and yours
But the proxmox kernel is not required. It makes things easier though. It is just hard to get the dependencies in the plugin package to do the right thing depending on which kernel is installed. The plugin also works on non-amd64 platforms. So, the proxmox kernel cannot be a requirement there.
The following allows installing the zfs plugin when the proxmox kernel is installed because it forces the proxmox packages to be used.
apt-get install zfsutils-linux=2.1.1-pve3 zfs-zed=2.1.1-pve3
apt-get install openmediavault-zfs
- December 27, 2021
- Official Post
To what extent is the proxmox kernel necessary? I have been using ZFS without the proxmox kernel for a long time and have had no problems.
- December 27, 2021
- Official Post
Also, on a fresh install with the proxmox kernel installed, disabling backports and clicking the apt clean button in omv-extras allowed the zfs plugin to be installed.
- December 27, 2021
- Official Post
Quote from chente
To what extent is the proxmox kernel necessary?
It is never necessary. It just skips the compilation of the zfs module, which helps installation time and reliability. The kernel is the Ubuntu LTS kernel, which is tested extensively with zfs and is, in my opinion, a better choice for zfs. The problem right now happens whenever debian backports has a newer version of the zfs packages than the proxmox repos. Disabling backports prevents that.
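A quick way to see which situation a given system is in is to check the running kernel and where its zfs module comes from. A minimal sketch (the fallback echoes are only there so the commands degrade gracefully on machines without zfs or dkms):

```shell
# Which kernel is running? Proxmox kernels end in -pve,
# stock Debian / backports kernels use the usual -amd64 style names.
uname -r

# Where does the zfs module come from? On a pve kernel it ships
# prebuilt with the kernel; on a stock Debian kernel DKMS compiles it.
modinfo -F filename zfs 2>/dev/null || echo "no zfs module installed"

# If DKMS built it, it is listed here:
dkms status 2>/dev/null || echo "dkms not installed"
```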
- December 27, 2021
- Official Post
Precisely for this reason I stopped using the proxmox kernel. I always try to keep the system as simple as possible to avoid complications.
- December 27, 2021
- Official Post
Quote from ryecoaaron
Also, on a fresh install with the proxmox kernel installed, disabling backports and clicking the apt clean button in omv-extras allowed the zfs plugin to be installed.
That worked by running apt clean, as the backports were already disabled from your previous suggestion.
So if I redeploy but run apt clean after disabling the backports, this should be the way to go.
- December 27, 2021
- Official Post
Quote from chente
Precisely for this reason I stopped using the proxmox kernel. I always try to keep the system as simple as possible to avoid complications.
That may "simplify" one thing but disabling backports simplifies another. I also think the proxmox kernel is better than the debian backports kernel. I have run it on all of my amd64 systems since I added the option to omv-extras.
Quote from geaves
So if I redeploy but run apt clean after disabling the backports, this should be the way to go.
Yep. It is always safe to run apt clean (or omv-aptclean).
- December 27, 2021
- Official Post
Quote from ryecoaaron
I also think the proxmox kernel is better than the debian backports kernel
Why is it better? What advantages will I notice?
- December 27, 2021
- Official Post
OK, redeploy:
1) Install OMV6
2) Login to GUI
3) Set time zone (first time around I saw some errors relating to this)
4) Install updates from Update Management
5) Install omv-extras
6) Omv-Extras -> Settings uncheck backports -> run apt clean
7) Install kernel plugin
8) Install Proxmox kernel
9) Reboot
10) Remove non-Proxmox kernels
11) Install sharerootfs plugin
12) Install zfs plugin
Houston, we have lift off
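For reference, steps 6-12 of the list above condense to roughly the following shell sketch. The openmediavault-* package names are the usual omv-extras plugin names and should be treated as assumptions here; the actual kernel install and removal happen in the kernel plugin's UI, not on the command line:

```shell
# 6) after unchecking backports in omv-extras Settings, clean apt
omv-aptclean

# 7) install the kernel plugin
apt-get install openmediavault-kernel

# 8-10) install the Proxmox kernel from the kernel plugin's UI,
#        reboot, then remove the non-Proxmox kernels from that UI
reboot

# 11-12) sharerootfs first, then the zfs plugin
apt-get install openmediavault-sharerootfs
apt-get install openmediavault-zfs
```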
- December 28, 2021
- Official Post
Quote from chente
Why is it better? What advantages will I notice?
The only one I think you will notice is that upgrades will not have to compile the zfs module every time the kernel is upgraded. Other than that, you will probably not notice anything. The kernel will just run and work. Take it from someone who uses this kernel (ubuntu lts) on hundreds of systems.
- December 28, 2021
Hi guys, quick question about ZFS. Does RaidZ (the equivalent of RAID5) need to do a parity sync the same way RAID5 does?
Yesterday I installed OMV 6 on my home server and decided to use ZFS. I installed OMV Extras and the Kernel plugin, then installed the Proxmox kernel.
Then I created the Zpool
Code
sudo zpool create -m /srv/nas-data-zpool nas-data raidz /dev/sda /dev/sdc /dev/sdd
sudo zpool status
  pool: nas-data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        nas-data    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
On the UI, I've imported the pool and it seems ready to be used, but I thought I needed to do a parity sync.
Another question, now about the way it shows on the Filesystems tab: is it supposed to show "false" for the device?
- December 28, 2021
- Official Post
Quote from ryecoaaron
The only one I think you will notice is that upgrades will not have to compile the zfs module every time the kernel is upgraded.
That is an advantage, yes. I guess this affects how long updates take.
Quote from wultyc
On the UI, I've imported the pool and it seems ready to be used, but I thought I needed to do a parity sync.
It is not necessary; this is one of the advantages of ZFS. Pool creation is instantaneous.
https://pthree.org/2012/04/17/…l-zfs-on-debian-gnulinux/
Quote from wultyc
Another question, now about the way it shows on the Filesystems tab: is it supposed to show "false" for the device?
I don't know how to answer this, I have not yet migrated my real system to OMV6.
- December 28, 2021
chente thanks for the feedback!
- December 28, 2021
- Official Post
Quote from wultyc
zpool create -m /srv/nas-data-zpool nas-data raidz /dev/sda /dev/sdc /dev/sdd
I think this is where it has gone wrong. Installing the sharerootfs plugin allows the name of the pool to be placed in the root of OMV, so what I did (as a test on a VM) was:
Code
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
Then, for a filesystem: zfs create tank/movies
After each CLI command I imported the pool in the zfs plugin.
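That dataset step generalizes to one dataset per share. A sketch assuming the pool tank from above already exists (tank/music is a hypothetical second share, not from the thread):

```shell
# one dataset per share; each mounts under the pool's own mountpoint
zfs create tank/movies
zfs create tank/music

# confirm what ZFS created and where it is mounted
zfs list -o name,mountpoint
```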
- December 28, 2021
Quote from geaves
I think this is where it has gone wrong. Installing the sharerootfs plugin allows the name of the pool to be placed in the root of OMV, so what I did (as a test on a VM) was:
Yes, sure. My intention was to mount my zpool on /srv/nas-data-zpool. My question was about the Filesystems tab showing "false" for the device. I understand maybe the Filesystems tab is not ready for ZFS yet.
But on the ZFS tab everything looks fine
- December 28, 2021
- Official Post
Quote from wultyc
Another question, now about the way it shows on the Filesystems tab: is it supposed to show "false" for the device?
No, it should display like this:
- December 28, 2021
OK, strange.
Well, I destroyed the pool and created a new one without the mount point, and it appears.
Code
sudo zpool create nas-data raidz /dev/sda /dev/sdc /dev/sdd
sudo zpool status
  pool: nas-data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        nas-data    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
But when I set the mount point, it shows as false.
Code
sudo zpool create -m /srv/nas-data-zpool nas-data raidz /dev/sda /dev/sdc /dev/sdd
sudo zpool status
  pool: nas-data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        nas-data    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
- December 28, 2021
- Official Post
Is it the difference between nas-data-zpool and nas-data?
So, if you used
zpool create -m /srv/nas-data nas-data raidz /dev/sda /dev/sdc /dev/sdd
or
zpool create -m /srv/nas-data-zpool nas-data-zpool raidz /dev/sda /dev/sdc /dev/sdd
- December 28, 2021
- Official Post
I would say the problem is that you are trying to create the mount point after creating the pool.
When the pool is created, the mount point is created automatically. If you do not want the default mount point, you must define it when creating the pool. To change it later, you have to unmount the pool and remount it.
- December 28, 2021
geaves I used this one and the result was the same: zpool create -m /srv/nas-data nas-data raidz /dev/sda /dev/sdc /dev/sdd
chente setting the mount point afterwards had the same result: it started showing the pool and then shows false.
Code
# sudo zpool create nas-data raidz /dev/sda /dev/sdc /dev/sdd
# sudo zfs get mountpoint nas-data
NAME      PROPERTY    VALUE      SOURCE
nas-data  mountpoint  /nas-data  default
Code
# sudo zfs set mountpoint=/srv/nas-data-zpool nas-data
# sudo zfs get mountpoint nas-data
NAME      PROPERTY    VALUE                SOURCE
nas-data  mountpoint  /srv/nas-data-zpool  local
But if I set it back to the original value, it shows again.
Code
# sudo zfs set mountpoint=/nas-data nas-data
It isn't much of an issue for me, tbh; I was simply trying to have all the mount points in the same folder.