Message-ID: <274063e4-57d0-5a87-1f43-28f5232af52b@suse.com>
Date:   Tue, 20 Jun 2017 23:28:30 +0300
From:   Nikolay Borisov <nborisov@...e.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     jbacik@...com, linux-kernel@...r.kernel.org,
        mgorman@...hsingularity.net
Subject: Re: [PATCH v2 2/2] writeback: Rework wb_[dec|inc]_stat family of
 functions



On 20.06.2017 22:37, Tejun Heo wrote:
> Hello, Nikolay.
> 
> On Tue, Jun 20, 2017 at 09:02:00PM +0300, Nikolay Borisov wrote:
>> Currently the writeback statistics code uses percpu counters to hold
>> various statistics. Furthermore, we have two families of functions: those
>> which disable local irqs and those which don't and whose names begin with
>> a double underscore. However, both end up calling __add_wb_stats, which in
>> turn calls percpu_counter_add_batch, which is already irq-safe.
> 
> Heh, looks like I was confused.  __percpu_counter_add() is not
> irq-safe.  It disables preemption and uses __this_cpu_read(), so
> there's no protection against irq.  If writeback statistics want
> irq-safe operations and it does, it would need these separate
> operations.  Am I missing something?

Looking at the history of the commit, there was initially a
preempt_disable + this_cpu_ptr pair, which was later changed in:

819a72af8d66 ("percpucounter: Optimize __percpu_counter_add a bit
through the use of this_cpu() options.")


I believe that __this_cpu_read ensures we get an atomic snapshot of the
variable, and that when we do the actual write, e.g. in the else {}
branch, we use this_cpu_add, which ought to be preempt- and irq-safe,
meaning we won't get a torn write. In essence we have atomic reads by
merit of __this_cpu_read, and atomic writes by merit of
raw_spin_lock_irqsave in the if () branch and this_cpu_add in the
else {} branch.

> 
> Thanks.
> 
