Message-ID: <20200827123040.GE14765@casper.infradead.org>
Date: Thu, 27 Aug 2020 13:30:40 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Shaokun Zhang <zhangshaokun@...ilicon.com>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Yuqi Jin <jinyuqi@...wei.com>, Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH] fs: Optimized fget to improve performance
On Thu, Aug 27, 2020 at 06:19:44PM +0800, Shaokun Zhang wrote:
> From: Yuqi Jin <jinyuqi@...wei.com>
>
> It is well known that the performance of atomic_add is better than that of
> atomic_cmpxchg.
I don't think that's well-known at all.
> +static inline bool get_file_unless_negative(atomic_long_t *v, long a)
> +{
> + long c = atomic_long_read(v);
> +
> + if (c <= 0)
> + return 0;
> +
> + return atomic_long_add_return(a, v) - 1;
> +}
> +
> #define get_file_rcu_many(x, cnt) \
> - atomic_long_add_unless(&(x)->f_count, (cnt), 0)
> + get_file_unless_negative(&(x)->f_count, (cnt))
> #define get_file_rcu(x) get_file_rcu_many((x), 1)
> #define file_count(x) atomic_long_read(&(x)->f_count)
I think you should be proposing a patch to fix atomic_long_add_unless()
on arm64 instead.