Date: Fri, 15 Oct 2021 10:33:50 +0300
From: Amir Goldstein <amir73il@...il.com>
To: Gabriel Krisman Bertazi <krisman@...labora.com>
Cc: Jan Kara <jack@...e.com>, "Darrick J. Wong" <djwong@...nel.org>,
	Theodore Tso <tytso@....edu>, David Howells <dhowells@...hat.com>,
	Khazhismel Kumykov <khazhy@...gle.com>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Ext4 <linux-ext4@...r.kernel.org>,
	Linux API <linux-api@...r.kernel.org>,
	Matthew Bobrowski <repnop@...gle.com>, kernel@...labora.com
Subject: Re: [PATCH v7 18/28] fanotify: Pre-allocate pool of error events

On Fri, Oct 15, 2021 at 9:19 AM Amir Goldstein <amir73il@...il.com> wrote:
>
> On Fri, Oct 15, 2021 at 12:39 AM Gabriel Krisman Bertazi
> <krisman@...labora.com> wrote:
> >
> > Error reporting needs to be done in an atomic context. This patch
> > introduces a group-wide mempool of error events, shared by all
> > marks in this group.
> >
> > Signed-off-by: Gabriel Krisman Bertazi <krisman@...labora.com>
> > ---
> >  fs/notify/fanotify/fanotify.c      |  3 +++
> >  fs/notify/fanotify/fanotify.h      | 11 +++++++++++
> >  fs/notify/fanotify/fanotify_user.c | 26 +++++++++++++++++++++++++-
> >  include/linux/fsnotify_backend.h   |  2 ++
> >  4 files changed, 41 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
> > index 8f152445d75c..01d68dfc74aa 100644
> > --- a/fs/notify/fanotify/fanotify.c
> > +++ b/fs/notify/fanotify/fanotify.c
> > @@ -819,6 +819,9 @@ static void fanotify_free_group_priv(struct fsnotify_group *group)
> >         if (group->fanotify_data.ucounts)
> >                 dec_ucount(group->fanotify_data.ucounts,
> >                            UCOUNT_FANOTIFY_GROUPS);
> > +
> > +       if (mempool_initialized(&group->fanotify_data.error_events_pool))
> > +               mempool_exit(&group->fanotify_data.error_events_pool);
> >  }
> >
> >  static void fanotify_free_path_event(struct fanotify_event *event)
> > diff --git a/fs/notify/fanotify/fanotify.h b/fs/notify/fanotify/fanotify.h
> > index c42cf8fd7d79..a577e87fac2b 100644
> > --- a/fs/notify/fanotify/fanotify.h
> > +++ b/fs/notify/fanotify/fanotify.h
> > @@ -141,6 +141,7 @@ enum fanotify_event_type {
> >         FANOTIFY_EVENT_TYPE_PATH,
> >         FANOTIFY_EVENT_TYPE_PATH_PERM,
> >         FANOTIFY_EVENT_TYPE_OVERFLOW, /* struct fanotify_event */
> > +       FANOTIFY_EVENT_TYPE_FS_ERROR, /* struct fanotify_error_event */
> >         __FANOTIFY_EVENT_TYPE_NUM
> >  };
> >
> > @@ -196,6 +197,16 @@ FANOTIFY_NE(struct fanotify_event *event)
> >         return container_of(event, struct fanotify_name_event, fae);
> >  }
> >
> > +struct fanotify_error_event {
> > +       struct fanotify_event fae;
> > +};
> > +
> > +static inline struct fanotify_error_event *
> > +FANOTIFY_EE(struct fanotify_event *event)
> > +{
> > +       return container_of(event, struct fanotify_error_event, fae);
> > +}
> > +
> >  static inline __kernel_fsid_t *fanotify_event_fsid(struct fanotify_event *event)
> >  {
> >         if (event->type == FANOTIFY_EVENT_TYPE_FID)
> > diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> > index 66ee3c2805c7..f1cf863d6f9f 100644
> > --- a/fs/notify/fanotify/fanotify_user.c
> > +++ b/fs/notify/fanotify/fanotify_user.c
> > @@ -30,6 +30,7 @@
> >  #define FANOTIFY_DEFAULT_MAX_EVENTS    16384
> >  #define FANOTIFY_OLD_DEFAULT_MAX_MARKS 8192
> >  #define FANOTIFY_DEFAULT_MAX_GROUPS    128
> > +#define FANOTIFY_DEFAULT_FEE_POOL      32
> >
>
> We can probably start with a more generous pool (128?)
> It doesn't cost that much.
> But anyway, I think this pool needs to auto-grow (up to a maximum size)
> instead of having a rigid arbitrary limit.
> As long as the pool grows, I don't mind if it starts at size 32,

but I just noticed that mempools cannot be accounted to memcg??
Then surely the maximum size needs to be kept pretty low.

Thanks,
Amir.
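
For reference, a minimal sketch of how a group-wide error-event pool like the one this patch introduces is typically wired up with the kernel mempool API. The init and allocation hunks are trimmed from the quoted reply, so the helper names (fanotify_init_error_pool(), fanotify_alloc_error_event(), fanotify_free_error_event()) and the GFP flags below are illustrative assumptions, not the patch's actual code; only the mempool_exit() teardown and the FANOTIFY_DEFAULT_FEE_POOL define appear in the hunks above.

/*
 * Illustrative sketch only -- not code from the patch.
 */
#include <linux/mempool.h>
#include <linux/fsnotify_backend.h>
#include "fanotify.h"

/*
 * Called from process context when the first error-reporting mark is
 * added to the group, so a sleeping allocation is fine here.
 */
static int fanotify_init_error_pool(struct fsnotify_group *group)
{
	if (mempool_initialized(&group->fanotify_data.error_events_pool))
		return 0;

	return mempool_init_kmalloc_pool(&group->fanotify_data.error_events_pool,
					 FANOTIFY_DEFAULT_FEE_POOL,
					 sizeof(struct fanotify_error_event));
}

/*
 * Called from the event-submission path, which must not sleep.  A
 * non-blocking allocation falls back to the pre-allocated reserve and
 * returns NULL only once that reserve is exhausted.
 */
static struct fanotify_error_event *
fanotify_alloc_error_event(struct fsnotify_group *group)
{
	return mempool_alloc(&group->fanotify_data.error_events_pool,
			     GFP_NOWAIT | __GFP_NOWARN);
}

/* The event destructor returns the element to the pool. */
static void fanotify_free_error_event(struct fsnotify_group *group,
				      struct fanotify_event *event)
{
	mempool_free(FANOTIFY_EE(event),
		     &group->fanotify_data.error_events_pool);
}

Under this shape, the reserve size is exactly what the sizing debate above is about: it bounds how many error events can still be delivered once the allocator starts failing non-blocking requests.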
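
One possible shape for the auto-grow behaviour suggested above, again only a sketch: mempool_resize() can raise the reserve as more error-reporting marks are added, but it may sleep, so it belongs on the mark-add path rather than in the event-submission path. FANOTIFY_MAX_FEE_POOL and the helper name are hypothetical.

/* Illustrative sketch of the auto-grow idea -- not from the patch. */
#define FANOTIFY_MAX_FEE_POOL	128	/* hypothetical upper bound */

static int fanotify_grow_error_pool(struct fsnotify_group *group,
				    int nr_new_marks)
{
	mempool_t *pool = &group->fanotify_data.error_events_pool;
	int new_min_nr = min(pool->min_nr + nr_new_marks,
			     FANOTIFY_MAX_FEE_POOL);

	if (new_min_nr <= pool->min_nr)
		return 0;

	/* May sleep: only call from process context (mark addition). */
	return mempool_resize(pool, new_min_nr);
}

A low cap matters because of the memcg point above: if the reserve cannot be charged to the creating process, each of the up to FANOTIFY_DEFAULT_MAX_GROUPS groups can pin that much unaccounted kernel memory.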