
[bug#45692,0/3] Better Support for ZFS on Guix

Message ID mhdVSrgYDEwem_IimJeqVrvDCNjldhvEkXl81zqCruSRJWs6FRSyqF266yLlsONwkxCIWPy49dYYK6xG6a-bHrybufj4-xXhAVzP32Iiw1c=@protonmail.com
State Accepted

Commit Message

raid5atemyhomework Feb. 18, 2022, 7:13 a.m. UTC
Modified so it applies cleanly to origin/master.

PLEASE JUST REVIEW AND MERGE, WHAT IS THE PROBLEM HERE ANYWAY?

---

From 9964668d93b496317c16c803f6d96bb3ace3560f Mon Sep 17 00:00:00 2001
From: raid5atemyhomework <raid5atemyhomework@protonmail.com>
Date: Thu, 30 Sep 2021 16:58:46 +0800
Subject: [PATCH] gnu: Add ZFS service type.

* gnu/services/file-systems.scm: New file.
* gnu/local.mk (GNU_SYSTEM_MODULES): Add it.
* gnu/services/base.scm: Export dependency->shepherd-service-name.
* doc/guix.texi (ZFS File System): New subsection.
---
 doc/guix.texi                 | 351 ++++++++++++++++++++++++++++++++
 gnu/local.mk                  |   2 +
 gnu/services/base.scm         |   4 +-
 gnu/services/file-systems.scm | 363 ++++++++++++++++++++++++++++++++++
 4 files changed, 719 insertions(+), 1 deletion(-)
 create mode 100644 gnu/services/file-systems.scm


base-commit: 1d1a4efd8cdac3757792cfbae92440edc5c3a802
--
2.34.0

Comments

raid5atemyhomework March 16, 2022, 11:44 p.m. UTC | #1
BUMP
Liliana Marie Prikler March 17, 2022, 8:24 a.m. UTC | #2
Hi raid5,

On Friday, 2022-02-18 at 07:13 +0000, raid5atemyhomework wrote:
> Modified so it applies cleanly to origin/master.
> 
> PLEASE JUST REVIEW AND MERGE, WHAT IS THE PROBLEM HERE ANYWAY?
You've been begging for review for a while now, so let me inform you
that the way you've been doing this is not particularly helpful to you
or the reviewers.

First of all, your follow-up messages do not include anyone who has so
far reviewed the patch in the "To:" or "Cc:" field.  This makes it less
likely that they will actually see your message.  Secondly, the tone in
which you're asking is not nice to the reviewers.  I can understand
you're a little frustrated waiting for so long, but shouting "WHAT IS
THE PROBLEM ANYWAY?" communicates that you're both unaware of and do
not care about burdens (e.g. maintenance) that are created by your
patch.  This in turn prompts reviewers to look away; both out of spite
and in order not to deal with this mess at all.

I have no stake in ZFS and no intent to review this patch beyond this
point, but here are a few questions to ask: Why is it necessary to
define a file system as a service?  Why do we need to export a seemingly
unrelated variable?  Can this be tested?  Is this sufficiently tested?
Are there any points Maxime made that were drowned out by the huge wall
of licensing-related messages passed back and forth, which I will not
attempt to sift through in order to respond to this message?  If so,
have those been sufficiently addressed?

Another complicating factor for this bug in particular is that the mumi
web interface and the raw messages are out of sync; I have no idea why
that is the case, but trying to fetch a patch only to get one of your
bump messages is not particularly encouraging.

In any case, I've added Maxime to CC so they can have a closer look at
it.

Cheers
M March 17, 2022, 5:22 p.m. UTC | #3
Liliana Marie Prikler wrote on Thu 2022-03-17 at 09:24 [+0100]:
> In any case, I've added Maxime to CC so they can have a closer look at
> it.

I have been more-or-less ignoring the ZFS patches since some time after
<https://issues.guix.gnu.org/45692#44>.  If ZFS people(^), after a
disagreement about licensing concerns, directly jump to accusations of
gaslighting and sabotage, completely ignoring my previous arguments (*)
without trying to refute any of them or bringing new arguments, then I
don't want to be involved with ZFS.

(^) So far only Mason Loring Bliss, _not_ raid5atemyhomework! 

Also, the various ‘work-arounds’ around the GPL<->CDDL incompatibility
still seem super fishy to me even if they _might_ be technically
correct.  To me, this makes reviewing the code practically pointless --
why review the zfs service patches if they will have to be reverted
due to incompatibility concerns anyway?  Summarised:

  * The ‘Oracle does not care so no legal risk’ argument:

    - Oracle might not care, but there are other parties involved as
      well (e.g. the Linux people and contributors to OpenZFS).
    - Not getting caught doesn't mean things are above board.  It just
      means you haven't been caught, and you might get caught later.
    - Has anyone actually ever asked Oracle for some official ‘yes,
      go ahead’ / ‘no, here's a DMCA notice / see you at YYYY-MM-DD
      in court’ / ‘no, but you're too small fry to bother with, so
      you'll get away with it ... for now’ response?

      AFAICT, this has not been done.
    - Even if it would be very strange for Oracle to try to stop (the
      Linux part of(*)) OpenZFS, why would that strangeness actually
      stop Oracle, and how would it legally matter?

  * The ‘zfs package is already in Guix’ argument
    (https://issues.guix.gnu.org/45692#47): then it should be
    reverted when the incompatibility is discovered.

    Also, the incompatibility issue has been noted before:

    https://lists.gnu.org/archive/html/guix-devel/2019-04/msg00404.html

    though it appears to have been forgotten in

    https://lists.gnu.org/archive/html/guix-patches/2019-12/msg00543.html

    presumably because different people were involved?

  * The ‘Guix is not distributing the source code, it's only pointing
    to the source code’ argument:

    - We do distribute the source code, at https://ci.guix.gnu.org
    - probably also via our friends at SWH
    - and via the Wayback Machine fallback
    - possibly also to any Guix users on the local network, when using
      '--advertise' and '--discover'
    - and by delegating the distribution to the OpenZFS project
    - even if pointing to the tarball on the OpenZFS web site would not
      count as distribution, then, assuming there's a license
      incompatibility (and hence, the Linux part of OpenZFS is
      illegal (*)), wouldn't this pointing count as facilitation of a
      crime (or misdemeanor or contract breach or whatever the local
      terminology is), and wouldn't this make Guix or the individuals
      behind Guix accomplices?
    - even if it's all legal, what about freedom 2 -- the freedom
      to redistribute the program?
    - also, not being able to distribute the source code by ourselves
      seems rather inconvenient

  * The ‘we're not doing binary distribution’ argument:

    - That seems rather inconvenient; why not use BTRFS instead, which
      seems quite capable and doesn't have this weird restriction?
    - Freedom 2 is:
      ‘The freedom to redistribute copies so you can help others.’
      Guix redistributes copies in a convenient form, to help all
      users (‘others’).  To help the users, it redistributes not only
      in source form, but also in binary form (substitutes).  But the
      CDDL+GPL combination stops us from helping others by
      redistributing binary copies!

      Basically, if there's the freedom to redistribute copies,
      shouldn't this include _binary_ copies, especially when
      binaries are convenient?
    - We _are_ doing binary distribution:

      (here, ‘we’ includes all relevant users of Guix)

      An uninitiated user might run "guix system image ..." to
      produce an image (that happens to include a binary ZFS),
      dutifully use "guix build --sources=transitive", share
      the sources+binary with other people, and accidentally commit
      a violation.

      The initrd, system image and "guix pack" machinery would all
      need to propagate unsubstitutability (and the top-level tools
      might need to error out), and this needs to be tested; AFAIK
      this has not been done.

  * The ‘we're not distributing _modified_ source code’ argument:

    Freedom 3!  We should be able to (legally) distribute modified
    source code as well.

  * The various ’technically, because of Section 1 (bis) alpha Z of
    this license, Paragraph 2 beta 3 of that license, this and that
    clause do not apply’ arguments:

    These seem to miss the spirit of the law, which is, to my limited
    knowledge, not a deterministic automaton with an exact mathematical
    formulation, free of bit flips.

Greetings,
Maxime.

(*) The BSD modules are presumably fine though (unverified)!  But Guix
does _not_ (currently) support BSDs.
Simon Tournier March 17, 2022, 6:38 p.m. UTC | #4
Hi Maxime,

On Thu, 17 Mar 2022 at 18:23, Maxime Devos <maximedevos@telenet.be> wrote:

> I have been more-or-less ignoring the ZFS patches since some time after
> <https://issues.guix.gnu.org/45692#44>.  If ZFS people(^), after a
> disagreement about licensing concerns, directly jump to accusations of
> gaslighting and sabotage, completely ignoring my previous arguments (*)
> without trying to refute any of them or bringing new arguments, then I
> don't want to be involved with ZFS.

I sympathize with you Maxime.


> Also, the various ‘work-arounds’ around the GPL<->CDDL incompatibility
> still seem super fishy to me even if they _might_ be technically
> correct.  To me, this makes reviewing the code practically pointless --
> why review the zfs service patches if they will have to be reverted
> due to incompatibility concerns anyway?  Summarised:

You wrote:

        To be clear, I don't oppose the inclusion of the ZFS service on
        basis of the licensing anymore.

        <https://issues.guix.gnu.org/45692#57>

and your accurate summary gives me the impression that you changed your
mind.  Just to be sure, did you?


Cheers,
simon
M March 17, 2022, 7:10 p.m. UTC | #5
zimoun wrote on Thu 2022-03-17 at 19:38 [+0100]:
> You wrote:
> 
>         To be clear, I don't oppose the inclusion of the ZFS service on
>         basis of the licensing anymore.
> 
>         <https://issues.guix.gnu.org/45692#57>
> 
> and your accurate summary gives me the impression that you changed your
> mind.  Just to be sure, did you?

Yes, indeed.  IIRC, I found myself changing my mind one way or the
other quite often back then; it seems that currently I'm back to
opposition on the basis of licensing.

Greetings,
Maxime.
raid5atemyhomework March 19, 2022, 2:09 p.m. UTC | #6
Good morning Liliana,

> Why is it necessary to define a
> file system as a service?

Because getting ZFS to work on Linux requires:

* That the ZFS module be added to the `initrd` so that Linux-libre can load it.
* That the ZFS module be actually *loaded*, because otherwise the system assumes it is loaded on some `udev` trigger.
* That the ZFS module is informed of when it should start scanning for ZFS pools in the system.
* That the above step happens before `user-processes` is started.

All that additional complexity is conveniently packaged in the Guix service system, and you could have learned that if you had just bothered to actually read the patch.

+      (list ;; Install OpenZFS kernel module into kernel profile.
+            (service-extension linux-loadable-module-service-type
+                               zfs-loadable-modules)
+            ;; And load it.
+            (service-extension kernel-module-loader-service-type
+                               (const '("zfs")))
+            ;; Make sure ZFS pools and datasets are mounted at
+            ;; boot.
+            (service-extension shepherd-root-service-type
+                               zfs-shepherd-services)
+            ;; Make sure user-processes don't start until
+            ;; after ZFS does.
+            (service-extension user-processes-service-type
+                               zfs-user-processes)
+            ;; Install automated scrubbing and snapshotting.
+            (service-extension mcron-service-type
+                               zfs-mcron-jobs)
+
+            ;; Install ZFS management commands in the system
+            ;; profile.
+            (service-extension profile-service-type
+                               (compose list make-zfs-package))
+            ;; Install ZFS udev rules.
+            (service-extension udev-service-type
+                               (compose list make-zfs-package))))


> Why do we need to export a seemingly
> unrelated variable?

If you are referring to the `dependency->shepherd-service-name` variable, it is necessary in order to defer starting of ZFS pool scanning to after all `mapped-device` dependencies have been opened.
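
In sketch form (an illustration only, not the literal patch code; the accessor name `zfs-configuration-dependencies` is hypothetical):

```lisp
;; Illustrative sketch: the Shepherd service that scans for ZFS pools
;; lists, among its requirements, the Shepherd name of every
;; user-declared dependency, so pool import is deferred until e.g.
;; LUKS mapped devices have been opened.
(define (zfs-scan-requirements config)
  (map dependency->shepherd-service-name
       (zfs-configuration-dependencies config)))
```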

> Can this be tested? Is this sufficiently tested?

Yes, I have run VMs on each version I have ever sent.

> Are there any points Maxime that were drowned out by a huge wall of
> licensing-related messages being passed back and forth that I will not
> attempt to sift through in order to respond to this message? If so,
> have those been sufficiently addressed?

Other than the grammar and wording of documentation / comments, and one point about using Guix-style Scheme instead of a bit of shell (even though there are probably more people who can properly review the latter than the former), Maxime had no other points.
Other reviewers did, and those were already addressed.

>
> Another complicating factor for this bug in particular is that the mumi
> web interface and the raw messages are out of sync; I have no idea why
> that is the case, but trying to fetch a patch only to get one of your
> bump messages is not particularly encouraging.

Oh come on stop your shitty excuses.  Nobody would look at this unless I showed up at least once a week to pester Guix maintainers.  I already tried the wait-for-people-and-they-will-come.  Squeaky wheel gets the grease.  If you want contributors to be more respectful, then put up definitive responsibilities of who to contact for what and what to do if patches are being ignored.  Otherwise I am just going to talk smack at you, Prikler.

NO Thanks
raid5atemyhomework
raid5atemyhomework March 19, 2022, 2:24 p.m. UTC | #7
Hello Maxime,

>     -   That seems rather inconvenient; why not use BTRFS instead, which
>         seems quite capable and doesn't have this weird restriction?

BTRFS IS NOT CAPABLE.

Did you notice my pseudonym?  "`raid5` ate my homework".  I used the BTRFS `raid5` mode, once.  It LOST MY DATA.  Never again.  ZFS supports RAIDZ1 and has not lost my data at all yet.  I've replaced ZFS disks on my pool.  No data loss.  It keeps on going on.

A file system that loses data is not a file system.  It is a disaster.


BTRFS is not an acceptable substitute for ZFS.

If ZFS is removed from Guix, I am switching to Ubuntu and keeping my ZFS pool; I am not going to switch to BTRFS just to keep running Guix.  I would *like* to run only fully-free software, especially since I took the trouble of paying a premium for a server that had coreboot, but my data is more important, and BTRFS is not an acceptable substitute for ZFS.


The only restriction needed is to prevent binary redistribution.  Yes, I agree it is inconvenient to always have to transfer source code and recompile each time.  But it is a ***lot*** more inconvenient to replace my lost data because BTRFS couldn't cut it despite more than a decade of development.  At least I can trivially re-download the source code for ZFS each time from many sources.  My `/home`, I cannot.  That is a bigger inconvenience.


Thanks
raid5atemyhomework
Leo Famulari March 19, 2022, 4:22 p.m. UTC | #8
On Sat, Mar 19, 2022 at 02:09:55PM +0000, raid5atemyhomework via Guix-patches via wrote:
> Oh come on stop your shitty excuses.  Nobody would look at this unless I showed up at least once a week to pester Guix maintainers.  I already tried the wait-for-people-and-they-will-come.  Squeaky wheel gets the grease.  If you want contributors to be more respectful, then put up definitive responsibilities of who to contact for what and what to do if patches are being ignored.  Otherwise I am just going to talk smack at you, Prikler.

Participating in a software project is both a technical and a social
exercise. It's not enough to merely do the technical work.

You may need to complete some smaller and less contentious contributions
along the way in order to build social trust with other people within
Guix.

There is a risk of social problems within GNU if we add ZFS to Guix:
there is not a consensus within GNU about whether it's okay to integrate
ZFS into our software.

It's important for the primary contributor of ZFS (that's you) to
demonstrate that they can manage that gracefully. None of us want to
assume the position of having to argue on your behalf.

It's true that Guix has grown large enough that formalizing teams and
points of contact for various areas would help contributors.

But until then, contributors are free to make an effort: learn who to
contact, build a collegial relationship with them, etc.
Maxim Cournoyer March 20, 2022, 4:42 a.m. UTC | #9
Hi,

raid5atemyhomework via Guix-patches via <guix-patches@gnu.org> writes:

> Hello Maxime,
>
>>     -   That seems rather inconvenient; why not use BTRFS instead, which
>>         seems quite capable and doesn't have this weird restriction?
>
> BTRFS IS NOT CAPABLE.
>
> Did you notice my pseudonym?  "`raid5` ate my homework".  I used the
> BTRFS `raid5` mode, once.  It LOST MY DATA.  Never again.  ZFS
> supports RAIDZ1 and has not lost my data at all yet.  I've replaced
> ZFS disks on my pool.  No data loss.  It keeps on going on.
>
> A file system that loses data is not a file system.  It is a disaster.
>
>
> BTRFS is not an acceptable substitute for ZFS.
>
> If ZFS is removed from Guix, I am switching to Ubuntu and keeping my
> ZFS pool, I am not going to switch to BTRFS just to keep running Guix,
> I would *like* to run only fully-free software, especially since I
> took the trouble of paying a premium for a server that had coreboot,
> but my data is more important and BTRFS is not an acceptable
> substitute for ZFS.

Btrfs RAID5 or RAID6 having a write hole leading to potential data loss
upon hard reset has been a known issue for like a decade, and nobody has
worked on improving that [0].  RAID10 is fine though, and so is RAID1 or
RAID0.  I've used it (Btrfs RAID1 with zstd compression) for years on
various Guix systems without any issue.

> The only restriction needed is to prevent binary redistribution.  Yes,
> I agree it is inconvenient to always have to transfer source code and
> recompile each time.  But it is a ***lot*** more inconvenient to
> replace my lost data because BTRFS couldn't cut it despite more than a
> decade of development.  At least I can re-download the source code for
> ZFS each time from many trivial sources.  My `/home`, I cannot.  That
> is a bigger inconvenience.

With my personal experience suggesting that Btrfs is a solid file
system, I respectfully disagree :-).  At any rate, don't forget to
back up your precious data somewhere safe; RAID is no substitute
(ever had a PSU failure blow up multiple components?).

Thanks,

Maxim

[0]  https://btrfs.wiki.kernel.org/index.php/Status#RAID56

Patch

diff --git a/doc/guix.texi b/doc/guix.texi
index c8bb484d94..65e60cd936 100644
--- a/doc/guix.texi
+++ b/doc/guix.texi
@@ -100,6 +100,7 @@  Copyright @copyright{} 2021 Josselin Poiret@*
 Copyright @copyright{} 2021 Andrew Tropin@*
 Copyright @copyright{} 2021 Sarah Morgensen@*
 Copyright @copyright{} 2021 Josselin Poiret@*
+Copyright @copyright{} 2021 raid5atemyhomework@*

 Permission is granted to copy, distribute and/or modify this document
 under the terms of the GNU Free Documentation License, Version 1.3 or
@@ -15594,6 +15595,356 @@  a file system declaration such as:
 compress-force=zstd,space_cache=v2"))
 @end lisp

+@node ZFS File System
+@subsection ZFS File System
+
+Support for ZFS file systems in Guix is based on the OpenZFS project.
+OpenZFS currently only supports Linux-Libre and is not available on the
+Hurd.
+
+OpenZFS is free software; unfortunately its license is incompatible with
+the GNU General Public License (GPL), the license of the Linux kernel,
+which means they cannot be distributed together.  However, as a user,
+you can choose to build ZFS and use it together with Linux; you can
+even rely on Guix to automate this task.  See
+@uref{https://www.fsf.org/licensing/zfs-and-linux, this analysis by
+the Free Software Foundation} for more information.
+
+As a large and complex kernel module, OpenZFS has to be compiled for a
+specific version of Linux-Libre.  At times, the latest OpenZFS package
+available in Guix is not compatible with the latest Linux-Libre version.
+Thus, directly installing the @code{zfs} package can fail.
+
+Instead, we recommend selecting a specific older long-term-support
+Linux-Libre kernel.  Do not use @code{linux-libre-lts}, as even the
+latest long-term-support kernel may be too new for @code{zfs}.  Rather,
+explicitly select a specific older version, such as @code{linux-libre-5.10},
+and manually upgrade it later, as new long-term-support kernels that you
+have confirmed to be compatible with the latest OpenZFS version in Guix
+become available.
+
+For example, you can modify your system configuration file to use a
+specific Linux-Libre kernel and add the @code{zfs-service-type} service:
+
+@lisp
+(use-modules (gnu))
+(use-package-modules
+  #;@dots{}
+  linux)
+(use-service-modules
+  #;@dots{}
+  file-systems)
+
+(define my-kernel linux-libre-5.10)
+
+(operating-system
+  (kernel my-kernel)
+  #;@dots{}
+  (services
+    (cons* (service zfs-service-type
+                    (zfs-configuration
+                      (kernel my-kernel)))
+           #;@dots{}
+           %desktop-services))
+  #;@dots{})
+@end lisp
+
+@defvr {Scheme Variable} zfs-service-type
+This is the type for a service that adds ZFS support to your operating
+system.  The service is configured using a @code{zfs-configuration}
+record.
+
+Here is an example use:
+
+@lisp
+(service zfs-service-type
+  (zfs-configuration
+    (kernel linux-libre-5.4)))
+@end lisp
+@end defvr
+
+@deftp {Data Type} zfs-configuration
+This data type represents the configuration of the ZFS support in Guix
+System.  Its fields are:
+
+@table @asis
+@item @code{kernel}
+The package of the Linux-Libre kernel to compile OpenZFS for.  This field
+is always required.  It @emph{must} be the same kernel you use in your
+@code{operating-system} form.
+
+@item @code{base-zfs} (default: @code{zfs})
+The OpenZFS package that will be compiled for the given Linux-Libre kernel.
+
+@item @code{base-zfs-auto-snapshot} (default: @code{zfs-auto-snapshot})
+The @code{zfs-auto-snapshot} package to use.  It will be modified to
+specifically use the OpenZFS compiled for your kernel.
+
+@item @code{dependencies} (default: @code{'()})
+A list of @code{<file-system>} or @code{<mapped-device>} records that must
+be mounted or opened before OpenZFS scans for pools to import.  For example,
+if you have set up LUKS containers as leaf VDEVs in a pool, you have to
+include their corresponding @code{<mapped-device>} records so that OpenZFS
+can import the pool correctly at bootup.
+
+@item @code{auto-mount?} (default: @code{#t})
+Whether to automatically mount, at startup, datasets that have the ZFS
+@code{mountpoint} property set.  This is the behavior that ZFS users
+usually expect.  You might set this to @code{#f} for a ``rescue''
+operating system intended to help debug problems with the disks rather
+than to run in production.
+
+@item @code{auto-scrub} (default: @code{'weekly})
+Specifies how often to scrub all pools.  Can be the symbols @code{'weekly} or
+@code{'monthly}, or a schedule specification understood by mcron
+(@pxref{mcron, mcron job specifications,, mcron, GNU@tie{}mcron}), such
+as @code{"0 3 * * 6"} for ``3AM every Saturday''.
+It can also be @code{#f} to disable auto-scrubbing (@strong{not recommended}).
+
+The general guideline is to scrub weekly when using consumer-quality drives, and
+to scrub monthly when using enterprise-quality drives.
+
+@code{'weekly} scrubs are done at midnight on Sunday, while @code{'monthly}
+scrubs are done at midnight on the first day of each month.
+
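+For example, to scrub every Saturday at 3AM rather than on the default
+weekly schedule (reusing the @code{my-kernel} binding from the earlier
+example):
+
+@lisp
+(zfs-configuration
+  (kernel my-kernel)
+  (auto-scrub "0 3 * * 6"))
+@end lisp
+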
+@item @code{auto-snapshot?} (default: @code{#t})
+Specifies whether to auto-snapshot by default.  If @code{#t}, then snapshots
+are automatically created except for ZFS datasets with the
+@code{com.sun:auto-snapshot} ZFS vendor property set to @code{false}.
+
+If @code{#f}, snapshots will not be automatically created, unless the ZFS
+dataset has the @code{com.sun:auto-snapshot} ZFS vendor property set to
+@code{true}.
+
+@item @code{auto-snapshot-keep} (default: @code{'()})
+Specifies an association list of symbol-number pairs, indicating the number
+of automatically-created snapshots to retain for each frequency type.
+
+If not specified via this field, the defaults are 4 @code{frequent}, 24
+@code{hourly}, 31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly} snapshots.
+
+For example:
+
+@lisp
+(zfs-configuration
+  (kernel my-kernel)
+  (auto-snapshot-keep
+    '((frequent . 8)
+      (hourly . 12))))
+@end lisp
+
+The above will keep 8 @code{frequent} snapshots and 12 @code{hourly} snapshots.
+@code{daily}, @code{weekly}, and @code{monthly} snapshots will keep their
+defaults (31 @code{daily}, 8 @code{weekly}, and 12 @code{monthly}).
+
+@end table
+@end deftp
+
+@subsubsection ZFS Auto-Snapshot
+
+The ZFS service on Guix System supports auto-snapshots as implemented in the
+Solaris operating system.
+
+@code{frequent} (every 15 minutes), @code{hourly}, @code{daily}, @code{weekly},
+and @code{monthly} snapshots are created automatically for ZFS datasets that
+have auto-snapshot enabled.  They will be named, for example,
+@code{zfs-auto-snap_frequent-2021-03-22-1415}.  You can continue to use
+manually-created snapshots as long as they do not conflict with the naming
+convention used by auto-snapshot.  You can also safely manually destroy
+automatically-created snapshots, for example to free up space.
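+
+For example, to reclaim space by destroying one such automatically-created
+snapshot of a hypothetical dataset @code{tank/important-data}:
+
+@example
+# zfs destroy tank/important-data@@zfs-auto-snap_frequent-2021-03-22-1415
+@end example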
+
+The @code{com.sun:auto-snapshot} ZFS property controls auto-snapshot on a
+per-dataset level.  Sub-datasets will inherit this property from their parent
+dataset, but can have their own property.
+
+You @emph{must} set this property to @code{true} or @code{false} exactly;
+otherwise, it will be treated as if the property were unset.
+
+For example:
+
+@example
+# zfs list -o name
+NAME
+tank
+tank/important-data
+tank/tmp
+# zfs set com.sun:auto-snapshot=true tank
+# zfs set com.sun:auto-snapshot=false tank/tmp
+@end example
+
+The above will set @code{tank} and @code{tank/important-data} to be
+auto-snapshotted, while @code{tank/tmp} will not be.
+
+If the @code{com.sun:auto-snapshot} property is not set for a dataset
+(the default when pools and datasets are created), then whether
+auto-snapshot is done or not will depend on the @code{auto-snapshot?}
+field of the @code{zfs-configuration} record.
+
+There are also @code{com.sun:auto-snapshot:frequent},
+@code{com.sun:auto-snapshot:hourly}, @code{com.sun:auto-snapshot:daily},
+@code{com.sun:auto-snapshot:weekly}, and @code{com.sun:auto-snapshot:monthly}
+properties that give finer-grained control of whether to auto-snapshot a
+dataset at a particular schedule.
+
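+For example, to keep only monthly auto-snapshots of a hypothetical
+dataset @code{tank/archive}, the finer-grained properties can be
+combined with the general one:
+
+@example
+# zfs set com.sun:auto-snapshot=false tank/archive
+# zfs set com.sun:auto-snapshot:monthly=true tank/archive
+@end example
+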
+The number of snapshots kept for all datasets can be overridden via the
+@code{auto-snapshot-keep} field of the @code{zfs-configuration} record.
+There is currently no support for keeping different numbers of
+snapshots for different datasets.
+
+@subsubsection ZVOLs
+
+ZFS supports ZVOLs, block devices that ZFS exposes to the operating
+system in the @code{/dev/zvol/} directory.  The ZVOL will have the same
+resilience and self-healing properties as other datasets on your ZFS pool.
+ZVOLs can also be snapshotted (and will be included in auto-snapshotting
+if enabled), which snapshots the state of the block device, effectively
+snapshotting the hosted file system.
+
+You can put any file system inside the ZVOL.  However, in order to mount this
+file system at system start, you need to add @code{%zfs-zvol-dependency} as a
+dependency of each file system inside a ZVOL.
+
+@defvr {Scheme Variable} %zfs-zvol-dependency
+An artificial @code{<mapped-device>} which tells the file system mounting
+service to wait for ZFS to provide ZVOLs before mounting the
+@code{<file-system>} dependent on it.
+@end defvr
+
+For example, suppose you create a ZVOL and put an ext4 filesystem
+inside it:
+
+@example
+# zfs create -V 100G tank/ext4-on-zfs
+# mkfs.ext4 /dev/zvol/tank/ext4-on-zfs
+# mkdir /ext4-on-zfs
+# mount /dev/zvol/tank/ext4-on-zfs /ext4-on-zfs
+@end example
+
+You can then set this up to be mounted at boot by adding this to the
+@code{file-systems} field of your @code{operating-system} record:
+
+@lisp
+(file-system
+  (device "/dev/zvol/tank/ext4-on-zfs")
+  (mount-point "/ext4-on-zfs")
+  (type "ext4")
+  (dependencies (list %zfs-zvol-dependency)))
+@end lisp
+
+You @emph{must not} add @code{%zfs-zvol-dependency} to your
+@code{operating-system}'s @code{mapped-devices} field, and you @emph{must
+not} add it (or any @code{<file-system>}s dependent on it) to the
+@code{dependencies} field of @code{zfs-configuration}.  Finally, you
+@emph{must not} use @code{%zfs-zvol-dependency} unless you actually
+instantiate @code{zfs-service-type} on your system.
+
+@subsubsection Unsupported Features
+
+Some common features and uses of ZFS are currently not supported, or not
+fully supported, on Guix.
+
+@enumerate
+@item
+Shepherd-managed daemons that are configured to read from or write to ZFS
+mountpoints need to include @code{user-processes} in their @code{requirement}
+field.  This is the earliest that ZFS file systems are assured of being
+mounted.
+
+Generally, most daemons will, directly or indirectly, require
+@code{networking}, or @code{user-processes}, or both.  Most implementations
+of @code{networking} will also require @code{user-processes} so daemons that
+require only @code{networking} will also generally start up after
+@code{user-processes}.  A notable exception, however, is
+@code{static-networking-service-type}.  You will need to explicitly add
+@code{user-processes} as a @code{requirement} of your @code{static-networking}
+record.
+
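+Assuming a @code{static-networking} record declared elsewhere in your
+configuration, that amounts to something along these lines (other fields
+elided; keep any requirements you already list):
+
+@lisp
+(static-networking
+  ;; @dots{} other fields as before @dots{}
+  (requirement '(user-processes)))
+@end lisp
+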
+@item
+@code{mountpoint=legacy} ZFS file systems.  The handlers for the Guix mounting
+system have not yet been modified to support ZFS, and will expect @code{/dev}
+paths in the @code{<file-system>}'s @code{device} field, but ZFS file systems
+are referred to via non-path @code{pool/file/system} names.  Such file systems
+also need to be mounted @emph{after} OpenZFS has scanned for pools.
+
+You can still manually mount these file systems after system boot; only
+mounting them automatically at system boot, by specifying them in
+@code{<file-system>} records of your @code{operating-system}, is
+unsupported.
+
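+For example, a @code{mountpoint=legacy} dataset named, say,
+@code{tank/legacy-data} would be mounted manually with:
+
+@example
+# mount -t zfs tank/legacy-data /mnt/legacy-data
+@end example
+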
+@item
+@code{/home} on ZFS.  Guix will create home directories for users, but this
+process currently cannot be scheduled after ZFS file systems are mounted.
+Thus, the ZFS file system might be mounted @emph{after} Guix has created
+home directories at boot, at which point OpenZFS will refuse to mount since
+the mountpoint is not empty.  However, you @emph{can} create an ext4, xfs,
+btrfs, or other supported file system inside a ZVOL, have it depend on
+@code{%zfs-zvol-dependency}, and set it to mount on the @code{/home}
+directory; such file systems will be scheduled to mount before the
+@code{user-homes} process.
+
+Similarly, other locations like @code{/var}, @code{/gnu/store} and so
+on cannot be reliably put in a ZFS file system, though it may be
+possible to create them as other file systems inside ZVOL containers.
+
+@item
+@code{/} and @code{/boot} on ZFS.  These require Guix to expose more of
+the very early @code{initrd} boot process to services.  They also require
+Guix to be able to explicitly load modules while still in the
+@code{initrd} (currently, kernel modules loaded by
+@code{kernel-module-loader-service-type} are loaded after @code{/} is
+mounted).  Further, since one of ZFS's main advantages is that it can
+continue working despite the loss of one or more devices, it makes sense
+to also support installing the bootloader on all devices of the pool that
+contains the @code{/} and @code{/boot}; after all, if ZFS can survive the
+loss of one device, the bootloader should also be able to survive the loss
+of one device.
+
+@item
+ZVOL swap devices.  Mapped swap devices need to be listed in
+@code{mapped-devices} to ensure they are opened before the system attempts
+to use them, but you cannot currently add @code{%zfs-zvol-dependency} to
+@code{mapped-devices}.
+
+This will also require a significant amount of testing, as various kernel
+build options and patches may affect how swapping works, and these may
+differ between Guix System and the distributions on which this feature is
+known to work.
+
+@item
+ZFS Event Daemon.  Support for this has not been written yet; patches are
+welcome.  The main issue is how to design it in a Guix style while also
+supporting legacy shell-script styles.  In particular, OpenZFS itself
+comes with a number of shell scripts intended for the ZFS Event Daemon,
+and we need to figure out how users can choose to use (and configure) the
+provided scripts, or override them with their own custom code (such as
+shell scripts they have written and trusted in previous ZFS
+installations).
+
+As is, you can create your own service that activates the ZFS Event
+Daemon by creating the @file{/etc/zfs/zed.d} directory, filling it
+appropriately, and then launching @command{zed}.
+
+@item
+@file{/etc/zfs/zpool.cache}.  Currently, ZFS support on Guix always scans
+all devices at boot to look for ZFS pools.  On systems with dozens or
+hundreds of storage devices, this can make booting slow.  One issue is
+that tools should not write to @file{/etc}, which is supposed to hold
+configuration; the cache file could possibly be moved to @file{/var}
+instead.  Another issue is that if Guix ever supports @code{/} on ZFS, we
+would need to somehow keep the @file{zpool.cache} file inside the
+@code{initrd} up to date with the one in the @code{/} mount point.
+
+@item
+@code{zfs share}.  This will require some (unknown) amount of work to
+integrate into the Samba and NFS services of Guix.  You @emph{can}
+manually set up Samba and NFS to share mounted ZFS datasets by writing
+their configurations accordingly; it just cannot be done for you by
+@code{zfs share} and the @code{sharesmb} and @code{sharenfs} properties.
+@end enumerate
+
+Hopefully, supporting the above only requires code to be written; users
+are encouraged to hack on Guix to implement these features.
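+
+As a point of reference, a minimal (and purely illustrative) use of the
+service in an @code{operating-system} declaration might look as follows;
+the @code{kernel} field must match the kernel of your
+@code{operating-system}:
+
+@lisp
+(use-modules (gnu packages linux)
+             (gnu services file-systems))
+
+(operating-system
+  ;; @dots{}
+  (services
+   (cons (service zfs-service-type
+                  (zfs-configuration
+                    (kernel linux-libre)))
+         %base-services)))
+@end lisp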
+
+
 @node Mapped Devices
 @section Mapped Devices

diff --git a/gnu/local.mk b/gnu/local.mk
index 1252643dc0..95e31a4b7e 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -49,6 +49,7 @@ 
 # Copyright © 2021 Simon Tournier <zimon.toutoune@gmail.com>
 # Copyright © 2022 Daniel Meißner <daniel.meissner-i4k@ruhr-uni-bochum.de>
 # Copyright © 2022 Remco van 't Veer <remco@remworks.net>
+# Copyright © 2021 raid5atemyhomework <raid5atemyhomework@protonmail.com>
 #
 # This file is part of GNU Guix.
 #
@@ -648,6 +649,7 @@  GNU_SYSTEM_MODULES =				\
   %D%/services/docker.scm			\
   %D%/services/authentication.scm		\
   %D%/services/file-sharing.scm			\
+  %D%/services/file-systems.scm			\
   %D%/services/games.scm			\
   %D%/services/ganeti.scm			\
   %D%/services/getmail.scm				\
diff --git a/gnu/services/base.scm b/gnu/services/base.scm
index fbd01e84d6..aacb9e5e1b 100644
--- a/gnu/services/base.scm
+++ b/gnu/services/base.scm
@@ -220,7 +220,9 @@  (define-module (gnu services base)

             references-file

-            %base-services))
+            %base-services
+
+            dependency->shepherd-service-name))

 ;;; Commentary:
 ;;;
diff --git a/gnu/services/file-systems.scm b/gnu/services/file-systems.scm
new file mode 100644
index 0000000000..867349c3a5
--- /dev/null
+++ b/gnu/services/file-systems.scm
@@ -0,0 +1,363 @@ 
+;;; GNU Guix --- Functional package management for GNU
+;;; Copyright © 2021 raid5atemyhomework <raid5atemyhomework@protonmail.com>
+;;;
+;;; This file is part of GNU Guix.
+;;;
+;;; GNU Guix is free software; you can redistribute it and/or modify it
+;;; under the terms of the GNU General Public License as published by
+;;; the Free Software Foundation; either version 3 of the License, or (at
+;;; your option) any later version.
+;;;
+;;; GNU Guix is distributed in the hope that it will be useful, but
+;;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;;; GNU General Public License for more details.
+;;;
+;;; You should have received a copy of the GNU General Public License
+;;; along with GNU Guix.  If not, see <http://www.gnu.org/licenses/>.
+
+(define-module (gnu services file-systems)
+  #:use-module (gnu packages file-systems)
+  #:use-module (gnu services)
+  #:use-module (gnu services base)
+  #:use-module (gnu services linux)
+  #:use-module (gnu services mcron)
+  #:use-module (gnu services shepherd)
+  #:use-module (gnu system mapped-devices)
+  #:use-module (guix gexp)
+  #:use-module (guix modules)
+  #:use-module (guix packages)
+  #:use-module (guix records)
+  #:use-module (srfi srfi-1)
+  #:export (zfs-service-type
+
+            zfs-configuration
+            zfs-configuration?
+            zfs-configuration-kernel
+            zfs-configuration-base-zfs
+            zfs-configuration-base-zfs-auto-snapshot
+            zfs-configuration-dependencies
+            zfs-configuration-auto-mount?
+            zfs-configuration-auto-scrub
+            zfs-configuration-auto-snapshot?
+            zfs-configuration-auto-snapshot-keep
+
+            %zfs-zvol-dependency))
+
+(define-record-type* <zfs-configuration>
+  zfs-configuration
+  make-zfs-configuration
+  zfs-configuration?
+
+  ;; linux-libre kernel you want to compile the base-zfs module for.
+  (kernel                     zfs-configuration-kernel)
+
+  ;; the OpenZFS package that will be modified to compile for the
+  ;; given kernel.
+  ;; Because it is modified and not the actual package that is used,
+  ;; we prepend the name 'base-'.
+  (base-zfs                   zfs-configuration-base-zfs
+                              (default zfs))
+
+  ;; the zfs-auto-snapshot package that will be modified to compile
+  ;; for the given kernel.
+  ;; Because it is modified and not the actual package that is used,
+  ;; we prepend the name 'base-'.
+  (base-zfs-auto-snapshot     zfs-configuration-base-zfs-auto-snapshot
+                              (default zfs-auto-snapshot))
+
+  ;; list of <mapped-device> or <file-system> objects that must be
+  ;; opened/mounted before we import any ZFS pools.
+  (dependencies               zfs-configuration-dependencies
+                              (default '()))
+
+  ;; #t to mount all mountable datasets by default.
+  ;; #f to not mount them.
+  ;; #t matches the behavior expected on other operating systems;
+  ;; #f is only intended for "rescue" operating systems where
+  ;; the user wants lower-level control of when to mount.
+  (auto-mount?                zfs-configuration-auto-mount?
+                              (default #t))
+
+  ;; 'weekly for weekly scrubbing, 'monthly for monthly scrubbing, an
+  ;; mcron time specification that can be given to `job`, or #f to
+  ;; disable.
+  (auto-scrub                 zfs-configuration-auto-scrub
+                              (default 'weekly))
+
+  ;; #t to auto-snapshot by default (and `com.sun:auto-snapshot=false`
+  ;; disables auto-snapshot per dataset), #f to not auto-snapshot
+  ;; by default (and `com.sun:auto-snapshot=true` enables auto-snapshot
+  ;; per dataset).
+  (auto-snapshot?             zfs-configuration-auto-snapshot?
+                              (default #t))
+
+  ;; association list of symbol-number pairs to indicate the number
+  ;; of automatic snapshots to keep for each of 'frequent, 'hourly,
+  ;; 'daily, 'weekly, and 'monthly.
+  ;; e.g. '((frequent . 8) (hourly . 12))
+  (auto-snapshot-keep         zfs-configuration-auto-snapshot-keep
+                              (default '())))
+
+(define %default-auto-snapshot-keep
+  '((frequent .  4)
+    (hourly .    24)
+    (daily .     31)
+    (weekly .    8)
+    (monthly .   12)))
+
+(define %auto-snapshot-mcron-schedule
+  '((frequent .  "0,15,30,45 * * * *")
+    (hourly .    "0 * * * *")
+    (daily .     "0 0 * * *")
+    (weekly .    "0 0 * * 7")
+    (monthly .   "0 0 1 * *")))
+
+;; A synthetic and unusable MAPPED-DEVICE intended for use when
+;; the user has created a mountable file system inside a ZFS
+;; ZVOL and wants it mounted via a <file-system> declaration in
+;; the configuration.scm.
+(define %zfs-zvol-dependency
+  (mapped-device
+    (source '())
+    (targets '("zvol/*"))
+    (type #f)))
+
+(define (make-zfs-package conf)
+  "Creates a zfs package based on the given zfs-configuration.
+
+  OpenZFS is a kernel package and to ensure best compatibility
+  it should be compiled with the specific Linux-Libre kernel
+  used on the system.  This simply overrides the kernel used
+  in compilation with that given in the configuration, which
+  the user has to ensure is the same as in the operating-system."
+  (let ((kernel    (zfs-configuration-kernel conf))
+        (base-zfs  (zfs-configuration-base-zfs conf)))
+    (package
+      (inherit base-zfs)
+      (arguments (cons* #:linux kernel
+                        (package-arguments base-zfs))))))
+
+(define (make-zfs-auto-snapshot-package conf)
+  "Creates a zfs-auto-snapshot package based on the given
+  zfs-configuration.
+
+  Since the OpenZFS tools above are compiled to a specific
+  kernel version, zfs-auto-snapshot --- which calls into the
+  OpenZFS tools --- has to be compiled with the specific
+  modified OpenZFS package created in the make-zfs-package
+  procedure."
+  (let ((zfs                    (make-zfs-package conf))
+        (base-zfs-auto-snapshot (zfs-configuration-base-zfs-auto-snapshot conf)))
+    (package
+      (inherit base-zfs-auto-snapshot)
+      (inputs `(("zfs" ,zfs))))))
+
+(define (zfs-loadable-modules conf)
+  "Specifies that the specific 'module' output of the OpenZFS
+  package is to be used; for use in indicating it as a
+  loadable kernel module."
+  (list (list (make-zfs-package conf) "module")))
+
+(define (zfs-shepherd-services conf)
+  "Constructs a list of Shepherd services that is installed
+  by the ZFS Guix service.
+
+  'zfs-scan' scans all devices for ZFS pools, and makes them
+  available to 'zpool' commands.
+  'device-mapping-zvol/*' waits for /dev/zvol/* to be
+  populated by 'udev', and runs after 'zfs-scan'.
+  'zfs-auto-mount' mounts all ZFS datasets with a 'mountpoint'
+  property, which defaults to '/' followed by the name of
+  the dataset.
+
+  All of the above behavior is expected by ZFS users from
+  typical ZFS installations.  A mild difference is that
+  scanning is usually based on '/etc/zfs/zpool.cache'
+  instead of the 'scan all devices' approach used below, but
+  that file is questionable in Guix since ideally '/etc/'
+  files are modified directly by the sysadmin, whereas
+  '/etc/zfs/zpool.cache' is modified by the ZFS tools."
+  (let* ((zfs-package     (make-zfs-package conf))
+         (zpool           (file-append zfs-package "/sbin/zpool"))
+         (zfs             (file-append zfs-package "/sbin/zfs"))
+         (zvol_wait       (file-append zfs-package "/bin/zvol_wait"))
+         (scheme-modules  `((srfi srfi-1)
+                            (srfi srfi-34)
+                            (srfi srfi-35)
+                            (rnrs io ports)
+                            ,@%default-modules)))
+    (define zfs-scan
+      (shepherd-service
+        (provision '(zfs-scan))
+        (requirement `(root-file-system
+                       kernel-module-loader
+                       udev
+                       ,@(map dependency->shepherd-service-name
+                              (zfs-configuration-dependencies conf))))
+        (documentation "Scans for and imports ZFS pools.")
+        (modules scheme-modules)
+        (start #~(lambda _
+                   (guard (c ((message-condition? c)
+                              (format (current-error-port)
+                                      "zfs: error importing pools: ~s~%"
+                                      (condition-message c))
+                              #f))
+                     ;; TODO: optionally use a cachefile.
+                     (invoke #$zpool "import" "-a" "-N"))))
+        ;; Why not one-shot?  Because we don't really want to rescan
+        ;; this each time a requiring process is restarted, as scanning
+        ;; can take a long time and a lot of I/O.
+        (stop #~(const #f))))
+
+    (define device-mapping-zvol/*
+      (shepherd-service
+        (provision '(device-mapping-zvol/*))
+        (requirement '(zfs-scan))
+        (documentation "Waits for all ZFS ZVOLs to be opened.")
+        (modules scheme-modules)
+        (start #~(lambda _
+                   (guard (c ((message-condition? c)
+                              (format (current-error-port)
+                                      "zfs: error opening zvols: ~s~%"
+                                      (condition-message c))
+                              #f))
+                     (invoke #$zvol_wait))))
+        (stop #~(const #f))))
+
+    (define zfs-auto-mount
+      (shepherd-service
+        (provision '(zfs-auto-mount))
+        (requirement '(zfs-scan))
+        (documentation "Mounts all non-legacy mounted ZFS filesystems.")
+        (modules scheme-modules)
+        (start #~(lambda _
+                   (guard (c ((message-condition? c)
+                              (format (current-error-port)
+                                      "zfs: error mounting file systems: ~s~%"
+                                      (condition-message c))
+                              #f))
+                     ;; Output to current-error-port, otherwise the
+                     ;; user will not see any prompts for passwords
+                     ;; of encrypted datasets.
+                     ;; XXX Maybe better to explicitly open /dev/console ?
+                     (with-output-to-port (current-error-port)
+                       (lambda ()
+                         (invoke #$zfs "mount" "-a" "-l"))))))
+        (stop #~(lambda _
+                  ;; Make sure that Shepherd does not have a CWD that
+                  ;; is a mounted ZFS filesystem, which would prevent
+                  ;; unmounting.
+                  (chdir "/")
+                  (invoke #$zfs "unmount" "-a" "-f")))))
+
+    `(,zfs-scan
+      ,device-mapping-zvol/*
+      ,@(if (zfs-configuration-auto-mount? conf)
+            `(,zfs-auto-mount)
+            '()))))
+
+(define (zfs-user-processes conf)
+  "Provides the last Shepherd service that 'user-processes' has to
+  wait for.
+
+  If not auto-mounting, then user-processes should only wait for
+  the device scan."
+  (if (zfs-configuration-auto-mount? conf)
+      '(zfs-auto-mount)
+      '(zfs-scan)))
+
+(define (zfs-mcron-auto-snapshot-jobs conf)
+  "Creates a list of mcron jobs for auto-snapshotting, one for each
+  of the standard durations."
+  (let* ((user-auto-snapshot-keep      (zfs-configuration-auto-snapshot-keep conf))
+         ;; assoc-ref has earlier entries overriding later ones.
+         (auto-snapshot-keep           (append user-auto-snapshot-keep
+                                               %default-auto-snapshot-keep))
+         (auto-snapshot?               (zfs-configuration-auto-snapshot? conf))
+         (zfs-auto-snapshot-package    (make-zfs-auto-snapshot-package conf))
+         (zfs-auto-snapshot            (file-append zfs-auto-snapshot-package
+                                                    "/sbin/zfs-auto-snapshot")))
+    (map
+      (lambda (label)
+        (let ((keep   (assoc-ref auto-snapshot-keep label))
+              (sched  (assoc-ref %auto-snapshot-mcron-schedule label)))
+          #~(job '#$sched
+                 (lambda ()
+                   (system* #$zfs-auto-snapshot
+                            "--quiet"
+                            "--syslog"
+                            #$(string-append "--label="
+                                             (symbol->string label))
+                            #$(string-append "--keep="
+                                             (number->string keep))
+                            "//")))))
+      (map first %auto-snapshot-mcron-schedule))))
+
+(define (zfs-mcron-auto-scrub-jobs conf)
+  "Creates a list of mcron jobs for auto-scrubbing."
+  (let* ((zfs-package    (make-zfs-package conf))
+         (zpool          (file-append zfs-package "/sbin/zpool"))
+         (auto-scrub     (zfs-configuration-auto-scrub conf))
+         (sched          (cond
+                           ((eq? auto-scrub 'weekly)  "0 0 * * 7")
+                           ((eq? auto-scrub 'monthly) "0 0 1 * *")
+                           (else                      auto-scrub))))
+    (define code
+      ;; We need to get access to (guix build utils) for the
+      ;; invoke procedures.
+      (with-imported-modules (source-module-closure '((guix build utils)))
+        #~(begin
+            (use-modules (guix build utils)
+                         (ice-9 popen)
+                         (ice-9 textual-ports))
+            ;; The names of the ZFS pools in the system, read from
+            ;; the output of `zpool list`.  Note that invoke/quiet
+            ;; does not return the command's output, so we read it
+            ;; from a pipe instead.
+            (define pools
+              (let* ((port   (open-pipe* OPEN_READ
+                                         #$zpool "list" "-o" "name" "-H"))
+                     (output (get-string-all port)))
+                (close-pipe port)
+                (string-tokenize output)))
+            ;; Only scrub if there are actual ZFS pools, as the
+            ;; zpool scrub command errors out if given an empty
+            ;; argument list.
+            (unless (null? pools)
+              ;; zpool scrub only initiates the scrub and otherwise
+              ;; prints nothing.  Results are always seen on the
+              ;; zpool status command.
+              (apply invoke #$zpool "scrub" pools)))))
+    (list
+      #~(job '#$sched
+             #$(program-file "mcron-zfs-scrub.scm" code)))))
+
+(define (zfs-mcron-jobs conf)
+  "Creates a list of mcron jobs for ZFS management."
+  (append (zfs-mcron-auto-snapshot-jobs conf)
+          (if (zfs-configuration-auto-scrub conf)
+              (zfs-mcron-auto-scrub-jobs conf)
+              '())))
+
+(define zfs-service-type
+  (service-type
+    (name 'zfs)
+    (extensions
+      (list ;; Install OpenZFS kernel module into kernel profile.
+            (service-extension linux-loadable-module-service-type
+                               zfs-loadable-modules)
+            ;; And load it.
+            (service-extension kernel-module-loader-service-type
+                               (const '("zfs")))
+            ;; Make sure ZFS pools and datasets are mounted at
+            ;; boot.
+            (service-extension shepherd-root-service-type
+                               zfs-shepherd-services)
+            ;; Make sure user-processes don't start until
+            ;; after ZFS does.
+            (service-extension user-processes-service-type
+                               zfs-user-processes)
+            ;; Install automated scrubbing and snapshotting.
+            (service-extension mcron-service-type
+                               zfs-mcron-jobs)
+
+            ;; Install ZFS management commands in the system
+            ;; profile.
+            (service-extension profile-service-type
+                               (compose list make-zfs-package))
+            ;; Install ZFS udev rules.
+            (service-extension udev-service-type
+                               (compose list make-zfs-package))))
+    (description "Install ZFS, an advanced file system and volume manager.")))