Date:   Tue, 17 Jan 2017 08:37:45 -0800
From:   Tejun Heo <tj@...nel.org>
To:     Joonsoo Kim <iamjoonsoo.kim@....com>
Cc:     Vladimir Davydov <vdavydov@...antool.org>, cl@...ux.com,
        penberg@...nel.org, rientjes@...gle.com, akpm@...ux-foundation.org,
        jsvana@...com, hannes@...xchg.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, cgroups@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH 2/9] slab: remove synchronous rcu_barrier() call in memcg
 cache release path

Hello, Joonsoo.

On Tue, Jan 17, 2017 at 09:07:54AM +0900, Joonsoo Kim wrote:
> Long time no see! :)

Yeah, happy new year!

> IIUC, rcu_barrier() here prevents the kmem_cache from being destroyed until
> all of its slab pages are freed.  These slab pages are freed through call_rcu().

Hmm... why do we need that though?  SLAB_DESTROY_BY_RCU only needs to
protect the slab pages, not the kmem_cache struct.  I thought that this
was because kmem cache destruction is allowed to release the pages without
RCU delaying it.
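
For illustration, here's a rough sketch of what SLAB_DESTROY_BY_RCU does
and doesn't cover; foo_hash_find() is a made-up lookup and the cache is
assumed to be created with the flag:

/*
 * Objects may be freed and recycled within a grace period, but the
 * backing pages stay type-stable, so a lockless reader only has to
 * revalidate the object it found.
 */
struct foo {
        int key;
};

static struct foo *foo_lookup(int key)
{
        struct foo *f;

        rcu_read_lock();
        f = foo_hash_find(key);                 /* made-up lookup */
        if (f && READ_ONCE(f->key) != key)
                f = NULL;                       /* object was recycled */
        rcu_read_unlock();
        return f;
}

/*
 * None of the above says anything about the lifetime of the
 * struct kmem_cache itself.
 */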

> Your patch changes it to another call_rcu() and, I think, if the sequence
> of executing RCU callbacks is the same as the sequence of adding them, it
> would work.  However, I'm not sure that this is guaranteed by the RCU API.
> Am I missing something?

The call sequence doesn't matter.  Whether you're using call_rcu() or
rcu_barrier(), you're just waiting for a grace period to pass before
continuing.  It doesn't give any other ordering guarantees, so the new
code should be equivalent to the old one except for being asynchronous.
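
FWIW, the asynchronous version would look roughly like the below.  The
memcg_params.rcu_head / .work fields and the function names here are
placeholders, not necessarily what the patch ends up using:

static void kmemcg_release_workfn(struct work_struct *work)
{
        struct kmem_cache *s = container_of(work, struct kmem_cache,
                                            memcg_params.work);

        /* process context: ok to grab slab_mutex and free the cache */
        shutdown_cache(s);
}

static void kmemcg_release_rcufn(struct rcu_head *head)
{
        struct kmem_cache *s = container_of(head, struct kmem_cache,
                                            memcg_params.rcu_head);

        /* call_rcu() callbacks run in softirq, so punt to a workqueue */
        INIT_WORK(&s->memcg_params.work, kmemcg_release_workfn);
        schedule_work(&s->memcg_params.work);
}

/* instead of: rcu_barrier(); shutdown_cache(s); */
call_rcu(&s->memcg_params.rcu_head, kmemcg_release_rcufn);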

Thanks.

-- 
tejun
