Message-ID: <20170620203049.GH21326@htj.duckdns.org>
Date: Tue, 20 Jun 2017 16:30:49 -0400
From: Tejun Heo <tj@...nel.org>
To: Nikolay Borisov <nborisov@...e.com>
Cc: jbacik@...com, linux-kernel@...r.kernel.org,
mgorman@...hsingularity.net
Subject: Re: [PATCH v2 2/2] writeback: Rework wb_[dec|inc]_stat family of functions
Hello,
On Tue, Jun 20, 2017 at 11:28:30PM +0300, Nikolay Borisov wrote:
> > Heh, looks like I was confused. __percpu_counter_add() is not
> > irq-safe. It disables preemption and uses __this_cpu_read(), so
> > there's no protection against irqs. If the writeback statistics want
> > irq-safe operations, and they do, they would need these separate
> > operations. Am I missing something?
>
> So looking at the history of the commit: initially there was a
> preempt_disable + this_cpu_ptr pair, which was later changed in:
>
> 819a72af8d66 ("percpucounter: Optimize __percpu_counter_add a bit
> through the use of this_cpu() options.")
>
> I believe __this_cpu_read ensures that we get an atomic snapshot of
> the variable, and when we do the actual write, i.e. the else {}
> branch, we use this_cpu_add, which ought to be preempt- and irq-safe,
> meaning we won't get a torn write. In essence we have atomic reads by
> merit of __this_cpu_read, and atomic writes by merit of
> raw_spin_lock_irqsave in the if () branch and this_cpu_add in the
> else {} branch.
Ah, you're right. The initial read is speculative. The slow path is
protected by an irq-safe spinlock, and the fast path is this_cpu_add(),
which is itself irq-safe. We really need to document these functions.
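For reference, the function we're talking about reads roughly like this
(quoting lib/percpu_counter.c from memory, so the exact lines may
differ):

void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;

	preempt_disable();
	/*
	 * Speculative read: an irq can still modify the percpu counter
	 * under us, but the worst case is picking the slow path when
	 * the fast path would have sufficed, or vice versa.
	 */
	count = __this_cpu_read(*fbc->counters) + amount;
	if (count >= batch || count <= -batch) {
		unsigned long flags;
		/* slow path: fold into fbc->count under an irq-safe lock */
		raw_spin_lock_irqsave(&fbc->lock, flags);
		fbc->count += count;
		__this_cpu_sub(*fbc->counters, count - amount);
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
	} else {
		/* fast path: this_cpu_add() is itself irq-safe */
		this_cpu_add(*fbc->counters, amount);
	}
	preempt_enable();
}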
Can I bother you with adding documentation to them while you're at it?
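Something along these lines would already help (just a sketch of the
kind of comment I mean, wording entirely up to you):

/*
 * __percpu_counter_add() is irq-safe: the initial __this_cpu_read()
 * is only a speculative snapshot used to choose a path; the fast
 * path uses this_cpu_add(), which is atomic w.r.t. interrupts, and
 * the slow path folds into fbc->count under raw_spin_lock_irqsave().
 * Callers don't need to disable irqs themselves.
 */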
Thanks.
--
tejun