Message-ID: <r6r5osn7darsuaqv7cjkonaq63zioo6kup7ds3wyeyxjyohiso@hb3ftogkuw6m>
Date: Tue, 22 Apr 2025 15:38:41 +0200
From: Jan Kara <jack@...e.cz>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: brauner@...nel.org, viro@...iv.linux.org.uk, jack@...e.cz,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
kernel test robot <oliver.sang@...el.com>
Subject: Re: [PATCH v2] fs: fall back to file_ref_put() for non-last reference
On Fri 18-04-25 14:57:56, Mateusz Guzik wrote:
> This reduces the slowdown in the face of multiple callers issuing close on
> what turns out not to be the last reference.
>
> Signed-off-by: Mateusz Guzik <mjguzik@...il.com>
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202504171513.6d6f8a16-lkp@intel.com
Yeah, this looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@...e.cz>
Honza
> ---
>
> v2:
> - fix a copy-pasto with a bad conditional
>
> fs/file.c | 2 +-
> include/linux/file_ref.h | 19 ++++++-------------
> 2 files changed, 7 insertions(+), 14 deletions(-)
>
> diff --git a/fs/file.c b/fs/file.c
> index dc3f7e120e3e..3a3146664cf3 100644
> --- a/fs/file.c
> +++ b/fs/file.c
> @@ -26,7 +26,7 @@
>
> #include "internal.h"
>
> -bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)
> +static noinline bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt)
> {
> /*
> * If the reference count was already in the dead zone, then this
> diff --git a/include/linux/file_ref.h b/include/linux/file_ref.h
> index 7db62fbc0500..31551e4cb8f3 100644
> --- a/include/linux/file_ref.h
> +++ b/include/linux/file_ref.h
> @@ -61,7 +61,6 @@ static inline void file_ref_init(file_ref_t *ref, unsigned long cnt)
> atomic_long_set(&ref->refcnt, cnt - 1);
> }
>
> -bool __file_ref_put_badval(file_ref_t *ref, unsigned long cnt);
> bool __file_ref_put(file_ref_t *ref, unsigned long cnt);
>
> /**
> @@ -178,20 +177,14 @@ static __always_inline __must_check bool file_ref_put(file_ref_t *ref)
> */
> static __always_inline __must_check bool file_ref_put_close(file_ref_t *ref)
> {
> - long old, new;
> + long old;
>
> old = atomic_long_read(&ref->refcnt);
> - do {
> - if (unlikely(old < 0))
> - return __file_ref_put_badval(ref, old);
> -
> - if (old == FILE_REF_ONEREF)
> - new = FILE_REF_DEAD;
> - else
> - new = old - 1;
> - } while (!atomic_long_try_cmpxchg(&ref->refcnt, &old, new));
> -
> - return new == FILE_REF_DEAD;
> + if (likely(old == FILE_REF_ONEREF)) {
> + if (likely(atomic_long_try_cmpxchg(&ref->refcnt, &old, FILE_REF_DEAD)))
> + return true;
> + }
> + return file_ref_put(ref);
> }
>
> /**
> --
> 2.48.1
>
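
For illustration, below is a minimal userspace model of the fast path in the
hunk above, written against C11 atomics rather than the kernel's file_ref
machinery.  The MODEL_ONEREF/MODEL_DEAD constants and the model_* helpers are
illustrative stand-ins, not the actual values or names from
include/linux/file_ref.h, and the slow path is simplified.

/*
 * Minimal userspace model of the patched fast path -- not the kernel code.
 * MODEL_ONEREF/MODEL_DEAD are illustrative stand-ins for the FILE_REF_*
 * encoding in include/linux/file_ref.h.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MODEL_ONEREF 0L		/* counter stores "references - 1" */
#define MODEL_DEAD  -1L		/* illustrative "no references left" marker */

struct model_ref {
	atomic_long refcnt;
};

/*
 * Slow path: plain decrement.  Returns true if this was the last reference.
 * (The real file_ref_put() also handles the saturated/dead zones, omitted
 * here.)
 */
static bool model_ref_put(struct model_ref *ref)
{
	return atomic_fetch_sub_explicit(&ref->refcnt, 1,
					 memory_order_acq_rel) == MODEL_ONEREF;
}

/*
 * Close path: expect to be the only holder, so try one compare-exchange
 * from ONEREF to DEAD.  If other references exist, or the cmpxchg races,
 * fall back to the ordinary put instead of spinning in a cmpxchg loop.
 */
static bool model_ref_put_close(struct model_ref *ref)
{
	long old = atomic_load_explicit(&ref->refcnt, memory_order_relaxed);

	if (old == MODEL_ONEREF &&
	    atomic_compare_exchange_strong_explicit(&ref->refcnt, &old,
						    MODEL_DEAD,
						    memory_order_acq_rel,
						    memory_order_relaxed))
		return true;

	return model_ref_put(ref);
}

int main(void)
{
	struct model_ref r;

	atomic_init(&r.refcnt, 2);	/* three references held */

	printf("%d\n", model_ref_put_close(&r));	/* 0: others remain */
	printf("%d\n", model_ref_put_close(&r));	/* 0 */
	printf("%d\n", model_ref_put_close(&r));	/* 1: last reference */
	return 0;
}

The shape of the patch shows up in model_ref_put_close(): a single
compare-exchange attempt covers the common close-of-last-reference case,
while shared or contended references immediately take the plain decrement
path rather than looping on cmpxchg.
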
--
Jan Kara <jack@...e.com>
SUSE Labs, CR