Message-ID: <5715380C.5050608@hpe.com>
Date: Mon, 18 Apr 2016 15:39:56 -0400
From: Waiman Long <waiman.long@....com>
To: Davidlohr Bueso <dave@...olabs.net>
CC: <mingo@...nel.org>, <peterz@...radead.org>,
<linux-kernel@...r.kernel.org>, Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH -tip 2/3] locking/pvqspinlock: Avoid double resetting
of stats
On 04/18/2016 02:31 AM, Davidlohr Bueso wrote:
> ... remove the redundant second iteration; this is most
> likely a copy/paste buglet.
>
> Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
> ---
> kernel/locking/qspinlock_stat.h | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock_stat.h b/kernel/locking/qspinlock_stat.h
> index d734b7502001..72722334237a 100644
> --- a/kernel/locking/qspinlock_stat.h
> +++ b/kernel/locking/qspinlock_stat.h
> @@ -191,8 +191,6 @@ static ssize_t qstat_write(struct file *file, const char __user *user_buf,
>
> for (i = 0 ; i < qstat_num; i++)
> WRITE_ONCE(ptr[i], 0);
> - for (i = 0 ; i < qstat_num; i++)
> - WRITE_ONCE(ptr[i], 0);
> }
> return count;
> }
The double write is done on purpose. As the statistics count update
isn't atomic, there is a very small chance (p) that clearing the count
happens in the middle of a read-modify-write bus transaction, in which
case the write-back of the stale value would undo the reset. Doing a
double write reduces that chance further to roughly p^2. This isn't
failsafe, but I think it is good enough.
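
To make the race concrete, here is a minimal user-space sketch
(illustrative only, not the kernel code; the variable names are made up
for the example) showing how a single zero-write can be lost and why a
second pass helps:

/*
 * Sketch of the lost-reset race described above.  The real code lives
 * in kernel/locking/qspinlock_stat.h; everything here is a stand-in.
 */
#include <stdio.h>

static unsigned long counter = 42;	/* pretend per-cpu stat count */

int main(void)
{
	/*
	 * A stat update is a plain read-modify-write:
	 *	tmp = counter; counter = tmp + 1;
	 * Interleave a reset between the read and the write-back:
	 */
	unsigned long tmp = counter;	/* updater reads 42            */
	counter = 0;			/* resetter: first write of 0  */
	counter = tmp + 1;		/* write-back of 43 undoes the */
					/* reset                       */
	printf("after one zero-write:   %lu\n", counter);	/* 43 */

	/*
	 * The stale value survives a second pass of zero-writes only if
	 * another in-flight update straddles that pass as well, so the
	 * chance of a lost reset drops from p to about p^2.
	 */
	counter = 0;			/* resetter: second write of 0 */
	printf("after the second write: %lu\n", counter);	/* 0  */
	return 0;
}

Compile and run it to see the first zero-write being overwritten and
the second one taking effect.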
However, I don't mind eliminating the double write either, as we can
always view the statistics counts after a reset to make sure that they
were properly cleared.
Cheers,
Longman