Date:   Thu, 21 Oct 2021 15:17:33 -0300
From:   Gabriel Krisman Bertazi <krisman@...labora.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Amir Goldstein <amir73il@...il.com>, Jan Kara <jack@...e.com>,
        "Darrick J. Wong" <djwong@...nel.org>,
        Theodore Tso <tytso@....edu>,
        Dave Chinner <david@...morbit.com>,
        David Howells <dhowells@...hat.com>,
        Khazhismel Kumykov <khazhy@...gle.com>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Ext4 <linux-ext4@...r.kernel.org>,
        Linux API <linux-api@...r.kernel.org>, kernel@...labora.com
Subject: Re: [PATCH v8 20/32] fanotify: Dynamically resize the FAN_FS_ERROR
 pool

Jan Kara <jack@...e.cz> writes:

> On Tue 19-10-21 08:50:23, Amir Goldstein wrote:
>> On Tue, Oct 19, 2021 at 3:03 AM Gabriel Krisman Bertazi
>> <krisman@...labora.com> wrote:
>> >
>> > Allow the FAN_FS_ERROR group mempool to grow up to an upper limit
>> > dynamically, instead of starting already at the limit.  This doesn't
>> > bother resizing on mark removal, but next time a mark is added, the slot
>> > will be either reused or resized.  Also, if several marks are being
>> > removed at once, most likely the group is going away anyway.
>> >
>> > Signed-off-by: Gabriel Krisman Bertazi <krisman@...labora.com>
>> > ---
>> >  fs/notify/fanotify/fanotify_user.c | 26 +++++++++++++++++++++-----
>> >  include/linux/fsnotify_backend.h   |  1 +
>> >  2 files changed, 22 insertions(+), 5 deletions(-)
>> >
>> > diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
>> > index f77581c5b97f..a860c286e885 100644
>> > --- a/fs/notify/fanotify/fanotify_user.c
>> > +++ b/fs/notify/fanotify/fanotify_user.c
>> > @@ -959,6 +959,10 @@ static int fanotify_remove_mark(struct fsnotify_group *group,
>> >
>> >         removed = fanotify_mark_remove_from_mask(fsn_mark, mask, flags,
>> >                                                  umask, &destroy_mark);
>> > +
>> > +       if (removed & FAN_FS_ERROR)
>> > +               group->fanotify_data.error_event_marks--;
>> > +
>> >         if (removed & fsnotify_conn_mask(fsn_mark->connector))
>> >                 fsnotify_recalc_mask(fsn_mark->connector);
>> >         if (destroy_mark)
>> > @@ -1057,12 +1061,24 @@ static struct fsnotify_mark *fanotify_add_new_mark(struct fsnotify_group *group,
>> >
>> >  static int fanotify_group_init_error_pool(struct fsnotify_group *group)
>> >  {
>> > -       if (mempool_initialized(&group->fanotify_data.error_events_pool))
>> > -               return 0;
>> > +       int ret;
>> > +
>> > +       if (group->fanotify_data.error_event_marks >=
>> > +           FANOTIFY_DEFAULT_MAX_FEE_POOL)
>> > +               return -ENOMEM;
>> >
>> > -       return mempool_init_kmalloc_pool(&group->fanotify_data.error_events_pool,
>> > -                                        FANOTIFY_DEFAULT_MAX_FEE_POOL,
>> > -                                        sizeof(struct fanotify_error_event));
>> > +       if (!mempool_initialized(&group->fanotify_data.error_events_pool))
>> > +               ret = mempool_init_kmalloc_pool(
>> > +                               &group->fanotify_data.error_events_pool,
>> > +                                1, sizeof(struct fanotify_error_event));
>> > +       else
>> > +               ret = mempool_resize(&group->fanotify_data.error_events_pool,
>> > +                                    group->fanotify_data.error_event_marks + 1);
>> > +
>> > +       if (!ret)
>> > +               group->fanotify_data.error_event_marks++;
>> > +
>> > +       return ret;
>> >  }
>> 
>> This is not what I had in mind.
>> I was thinking of starting with ~32 and doubling each time the limit
>> is reached.
>
> Do you mean when the number of FS_ERROR marks reaches the number of
> preallocated events? We could do that, but note that due to mempool
> implementation limits there cannot be more than 255 preallocated events.
> Also, mempool_resize() will only update the number of slots for
> preallocated events, but these slots will be empty. You have to manually
> allocate and free events to fill those slots with preallocated events.
>
>> And also, this code grows the pool to infinity with add/remove mark loop.
>
> I see a cap at FANOTIFY_DEFAULT_MAX_FEE_POOL in the code there. But I
> don't think there's a good enough reason to hard-limit the number of
> FS_ERROR marks at 128. As I explained in the previous version of the
> series, in the vast majority of cases we will not use even a single
> preallocated event...
>
>> Anyway, since I clearly did not understand how mempool works and
>> Jan had some different ideas, I would leave it to Jan to explain
>> how he wants the mempool init limit and resize to be implemented.
>
> Honestly, I'm for keeping it simple for now. Just 32 preallocated events
> and try to come up with something more clever only if someone actually
> complains.

So, if I understand the conclusion correctly, you are fine with me
reverting to the version I had in v7: 32 events pre-allocated, no dynamic
growth, and the number of FAN_FS_ERROR marks limited to <= 32?  And in
the future, if this ever becomes a problem, we can look into dynamic
resizing/increasing the limit?
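
For concreteness, this is roughly what I have in mind for the revert
(just a sketch, untested; the constant name is illustrative, and it
reuses the error_event_marks counter from this patch to enforce the cap):

#define FANOTIFY_DEFAULT_FEE_POOL_SIZE	32	/* illustrative */

static int fanotify_group_init_error_pool(struct fsnotify_group *group)
{
	int ret = 0;

	/* Hard cap: no more than 32 FAN_FS_ERROR marks per group. */
	if (group->fanotify_data.error_event_marks >=
	    FANOTIFY_DEFAULT_FEE_POOL_SIZE)
		return -ENOMEM;

	/* Pre-allocate all 32 events up front; no mempool_resize() later. */
	if (!mempool_initialized(&group->fanotify_data.error_events_pool))
		ret = mempool_init_kmalloc_pool(
				&group->fanotify_data.error_events_pool,
				FANOTIFY_DEFAULT_FEE_POOL_SIZE,
				sizeof(struct fanotify_error_event));

	if (!ret)
		group->fanotify_data.error_event_marks++;

	return ret;
}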

Either option is fine by me.  I thought that growing the pool one slot
at a time, as I did here, would be ugly, but before sending the patch I
checked and was quite satisfied with how simple mempool_resize() actually
is.

-- 
Gabriel Krisman Bertazi
