Message-ID: <CAOQ4uxgZbw1L1=oyFvz1=ZPt8VxgR3cVxVTLzqGg+iBA+ffaCw@mail.gmail.com>
Date:   Fri, 15 Oct 2021 11:38:08 +0300
From:   Amir Goldstein <amir73il@...il.com>
To:     Gabriel Krisman Bertazi <krisman@...labora.com>
Cc:     Jan Kara <jack@...e.com>, "Darrick J. Wong" <djwong@...nel.org>,
        Theodore Tso <tytso@....edu>,
        David Howells <dhowells@...hat.com>,
        Khazhismel Kumykov <khazhy@...gle.com>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Ext4 <linux-ext4@...r.kernel.org>,
        Linux API <linux-api@...r.kernel.org>,
        Matthew Bobrowski <repnop@...gle.com>, kernel@...labora.com,
        Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH v7 00/28] file system-wide error monitoring

On Fri, Oct 15, 2021 at 12:37 AM Gabriel Krisman Bertazi
<krisman@...labora.com> wrote:
>
> Hi,
>
> This attempts to get the ball rolling again for FAN_FS_ERROR.  This
> version is slightly different from the previous approaches, since it
> uses a mempool for memory allocation, as suggested by Jan.  It has the
> advantage of greatly simplifying the enqueue/dequeue, which is now much
> more similar to other event types, but it also weakens the guarantee
> that an error event will be available.

Makes me very happy not having to worry about new enqueue/dequeue bugs :)
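
For reference, the allocation scheme as I understand it, in a minimal
sketch (the struct, function names and GFP flags below are mine, not
taken from the patches):

#include <linux/mempool.h>
#include <linux/slab.h>

/* Illustrative event layout; the real one lives in fanotify.h */
struct fs_error_event {
	int error;			/* errno reported by the fs */
	unsigned int error_count;	/* errors merged into this event */
};

static mempool_t *error_event_pool;

static int init_error_event_pool(void)
{
	/* Pre-allocate 32 events; mempool_alloc() falls back to the
	 * reserved elements when a regular allocation fails. */
	error_event_pool = mempool_create_kmalloc_pool(32,
					sizeof(struct fs_error_event));
	return error_event_pool ? 0 : -ENOMEM;
}

static struct fs_error_event *alloc_error_event(void)
{
	/* GFP_NOFS, since this may run in fs error handling paths */
	return mempool_alloc(error_event_pool, GFP_NOFS);
}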

>
> The way we propagate superblock errors also changed. Now we use
> FILEID_ROOT internally, and mangle it prior to copy_to_user.
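
FWIW, if the mangling is just about hiding the internal marker, it
could be as small as something like this (FILEID_INVALID as the
user-visible type is my guess, not from the patches):

#include <linux/exportfs.h>

/* A whole-sb error carries FILEID_ROOT internally; rewrite it so
 * that userspace never sees the internal marker. */
static u8 user_visible_fid_type(u8 internal_type)
{
	return internal_type == FILEID_ROOT ? FILEID_INVALID : internal_type;
}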
>
> I am no longer sure how to guarantee that at least one mempool slot
> will be available for each filesystem.  Since we are now tying the
> pool to the entire group, a stream of errors in a single filesystem
> might prevent others from emitting an error.  The possibility of this
> is reduced since we merge errors for the same filesystem, but it can
> still happen during the small window after an event is dequeued and
> before it is freed, in which case another filesystem might not be able
> to obtain a slot.

Double buffering. Each mark/fs should have one slot reserved for enqueue
and one reserved for copying the event to the user buffer.
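
Something like the per-mark scheme below (names are illustrative, and
it reuses the fs_error_event struct from the sketch above):

#include <linux/bitops.h>

/* Two preallocated slots per mark: one can be in flight to userspace
 * while the other stays free for a new error.  The caller must hold
 * the mark's lock, since find + set below is not atomic by itself. */
struct fs_error_mark {
	struct fs_error_event slots[2];
	unsigned long busy;		/* bitmap of in-use slots */
};

static struct fs_error_event *error_slot_get(struct fs_error_mark *mark)
{
	unsigned long i = find_first_zero_bit(&mark->busy, 2);

	if (i >= 2)
		return NULL;	/* both in flight; merge into queued event */
	__set_bit(i, &mark->busy);
	return &mark->slots[i];
}

static void error_slot_put(struct fs_error_mark *mark,
			   struct fs_error_event *event)
{
	/* Called only after the event was copied to the user buffer,
	 * closing the dequeue-to-free window described above. */
	__clear_bit(event - mark->slots, &mark->busy);
}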

>
> I'm also creating a pool of 32 entries initially to avoid spending too
> much memory.  This means that only 32 filesystems can be watched per
> group with the FAN_FS_ERROR mark before fanotify_mark() starts
> returning ENOMEM.

I don't see a problem with growing the pool dynamically up to a
reasonable size, although it is a shame that the pool is not accounted
to the group's memcg (I think?).
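
mempool_resize() should make the growing part straightforward, e.g.
(the cap and the callsite are made up):

#include <linux/mempool.h>
#include <linux/minmax.h>

#define FS_ERROR_POOL_MAX	128	/* arbitrary cap for this sketch */

/* Grow the reserved element count as FAN_FS_ERROR marks are added.
 * mempool_resize() preallocates up to the new minimum and returns
 * -ENOMEM if it cannot. */
static int on_error_mark_added(mempool_t *pool, int nr_error_marks)
{
	return mempool_resize(pool, min(nr_error_marks, FS_ERROR_POOL_MAX));
}

IIRC mempool_resize() allocates the new reserves with plain GFP_KERNEL,
so charging them to the group's memcg would need more surgery anyway.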

Overall, the series looks very good to me, modulo the above comments
about the mempool size/resize and a few minor implementation details.

Good job!

Thanks,
Amir.
