Date:   Tue, 20 Jun 2017 23:32:49 +0300
From:   Nikolay Borisov <nborisov@...e.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     jbacik@...com, linux-kernel@...r.kernel.org,
        mgorman@...hsingularity.net
Subject: Re: [PATCH v2 2/2] writeback: Rework wb_[dec|inc]_stat family of
 functions



On 20.06.2017 23:30, Tejun Heo wrote:
> Hello,
> 
> On Tue, Jun 20, 2017 at 11:28:30PM +0300, Nikolay Borisov wrote:
>>> Heh, looks like I was confused.  __percpu_counter_add() is not
>>> irq-safe.  It disables preemption and uses __this_cpu_read(), so
>>> there's no protection against irqs.  If writeback statistics want
>>> irq-safe operations, and they do, they would need these separate
>>> operations.  Am I missing something?
>>
>> So, looking at the history of this code: initially there was
>> preempt_disable + this_cpu_ptr, which was later changed in:
>>
>> 819a72af8d66 ("percpucounter: Optimize __percpu_counter_add a bit
>> through the use of this_cpu() options.")
>>
>> I believe that __this_cpu_read ensures we get an atomic snapshot of
>> the variable, and when we do the actual write, i.e. the else {}
>> branch, we use this_cpu_add, which ought to be preempt- and irq-safe,
>> meaning we won't get a torn write. In essence we have atomic reads by
>> virtue of __this_cpu_read, and atomic writes by virtue of
>> raw_spin_lock_irqsave in the if () branch and this_cpu_add in the
>> else {} branch.
> 
> Ah, you're right.  The initial read is speculative.  The slow path is
> protected with irq spinlock.  The fast path is this_cpu_add() which is
> irq-safe.  We really need to document these functions.
> 
> Can I bother you with adding documentation to them while you're at it?
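For readers following along, the fast/slow-path shape being discussed can
be sketched in userspace C. This is a hypothetical, simplified model of
the __percpu_counter_add() logic, not the kernel's code: a single
simulated CPU, a pthread mutex standing in for raw_spin_lock_irqsave, and
illustrative names (pc_counter, pc_add, PC_BATCH).

```c
/* Sketch of the percpu_counter add logic: speculative read of the
 * per-cpu delta, locked slow path when the delta would exceed the
 * batch, plain add on the fast path. One CPU simulated; all names
 * are illustrative, not the kernel's. */
#include <pthread.h>

struct pc_counter {
	long long count;        /* global count, protected by lock      */
	long count_cpu;         /* per-cpu delta (one CPU simulated)    */
	pthread_mutex_t lock;   /* stands in for raw_spin_lock_irqsave  */
};

#define PC_BATCH 32

static void pc_add(struct pc_counter *c, long amount)
{
	/* speculative read -- __this_cpu_read() in the kernel */
	long count = c->count_cpu + amount;

	if (count >= PC_BATCH || count <= -PC_BATCH) {
		/* slow path: fold the delta into the global count
		 * under the lock (irq-safe in the kernel) */
		pthread_mutex_lock(&c->lock);
		c->count += count;
		c->count_cpu = 0;   /* __this_cpu_write() */
		pthread_mutex_unlock(&c->lock);
	} else {
		/* fast path: this_cpu_add(), preempt- and irq-safe */
		c->count_cpu += amount;
	}
}

static long long pc_sum(struct pc_counter *c)
{
	pthread_mutex_lock(&c->lock);
	long long sum = c->count + c->count_cpu;
	pthread_mutex_unlock(&c->lock);
	return sum;
}
```

The point of the thread, mapped onto the sketch: the initial read of
count_cpu may be stale (it is only speculative), but both places that
actually write it are safe on their own, so the worst case is taking the
slow path a bit early or late, never a torn update.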

Sure, I will likely resend with a fresh head on my shoulders.

> 
> Thanks.
> 
