Message-ID: <ZDEUFlwlDOTfLshS@snowbird>
Date: Sat, 8 Apr 2023 00:13:26 -0700
From: Dennis Zhou <dennis@...nel.org>
To: Ye Bin <yebin@...weicloud.com>
Cc: tj@...nel.org, cl@...ux.com, linux-mm@...ck.org,
yury.norov@...il.com, andriy.shevchenko@...ux.intel.com,
linux@...musvillemoes.dk, linux-kernel@...r.kernel.org,
yebin10@...wei.com, dchinner@...hat.com
Subject: Re: [PATCH v2 2/2] lib/percpu_counter: fix dying cpu compare race
Hello,

On Thu, Apr 06, 2023 at 09:56:29AM +0800, Ye Bin wrote:
> From: Ye Bin <yebin10@...wei.com>
>
> In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> condition between a cpu dying and percpu_counter_sum() iterating online CPUs
> was identified.
> Actually, the same race condition exists between a cpu dying and
> __percpu_counter_compare(), which uses 'num_online_cpus()' for its
> quick judgment. But 'num_online_cpus()' is decremented before
> 'percpu_counter_cpu_dead()' is called, so the quick judgment may
> return an incorrect result.
> To solve this, also count dying CPUs when making the quick judgment
> in __percpu_counter_compare().
>

I've thought a lot about this since you sent v1. For the general
problem, you haven't addressed Dave's concerns from [1].
I agree you've found a valid race condition, but as Dave mentioned,
there's no synchronization in __percpu_counter_compare() and
consequently no guarantees about the accuracy of the value.

However, I might be missing something, but I do think the use case in
5825bea05265 ("xfs: __percpu_counter_compare() inode count debug too expensive")
is potentially valid. If rhs is an expected lower or upper bound
(depending on whether you're counting up or down, but not both) and the
count you're maintaining has the same expectations as percpu_refcount
(you can only subtract what you've already added), then should
percpu_counter_sum() ever land on the wrong side of rhs, that would be
an error and visible via percpu_counter_compare().

I need to think a little longer, but my initial thought is that while
you close a race condition, the function itself is inherently
vulnerable.

[1] https://lore.kernel.org/lkml/ZCu9LtdA+NMrfG9x@rh/

Thanks,
Dennis
> Signed-off-by: Ye Bin <yebin10@...wei.com>
> ---
> lib/percpu_counter.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 5004463c4f9f..399840cb0012 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
> return 0;
> }
>
> +static __always_inline unsigned int num_count_cpus(void)
> +{
> +#ifdef CONFIG_HOTPLUG_CPU
> + return (num_online_cpus() + num_dying_cpus());
> +#else
> + return num_online_cpus();
> +#endif
> +}
> +
> /*
> * Compare counter against given value.
> * Return 1 if greater, 0 if equal and -1 if less
> @@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
>
> count = percpu_counter_read(fbc);
> /* Check to see if rough count will be sufficient for comparison */
> - if (abs(count - rhs) > (batch * num_online_cpus())) {
> + if (abs(count - rhs) > (batch * num_count_cpus())) {
> if (count > rhs)
> return 1;
> else
> --
> 2.31.1
>
>