Message-ID: <20200504190138.GU2869@paulmck-ThinkPad-P72>
Date:   Mon, 4 May 2020 12:01:38 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Uladzislau Rezki <urezki@...il.com>,
        LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Theodore Y . Ts'o" <tytso@....edu>,
        Matthew Wilcox <willy@...radead.org>,
        RCU <rcu@...r.kernel.org>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 09/24] rcu/tree: cache specified number of objects

On Mon, May 04, 2020 at 02:08:05PM -0400, Joel Fernandes wrote:
> On Mon, May 04, 2020 at 07:48:22PM +0200, Uladzislau Rezki wrote:
> > On Mon, May 04, 2020 at 08:24:37AM -0700, Paul E. McKenney wrote:
> [..] 
> > > > > Presumably the list can also be accessed without holding this lock,
> > > > > because otherwise we shouldn't need llist...
> > > > > 
> > > > Hm... We increase the number of elements in the cache, therefore it
> > > > is not lockless. On the other hand, I used llist_head to maintain the
> > > > cache because it is a singly linked list: we do not need a "*prev"
> > > > link, and we do not need to init the list.
> > > > 
> > > > But I can change it to list_head. Please let me know if I should :)
> > > 
> > > Hmmm...  Maybe it is time for a non-atomic singly linked list?  In the RCU
> > > callback processing, the operations were open-coded, but they have been
> > > pushed into include/linux/rcu_segcblist.h and kernel/rcu/rcu_segcblist.*.
> > > 
> > > Maybe some non-atomic/protected/whatever macros in the llist.h file?
> > > Or maybe just open-code the singly linked list?  (Probably not the
> > > best choice, though.)  Add comments stating that the atomic properties
> > > of the llist functions aren't needed?  Something else?
> > >
> > In order to keep it simple, I can replace llist_head with list_head?
> 
> Just to clarify for me, what is the disadvantage of using llist here?

Are there some llist APIs that are not set up for concurrency?  I am
not seeing any.

The overhead isn't that much of a concern, given that these are not on the
hotpath, but people reading the code and seeing the cmpxchg operations
might be forgiven for believing that there is some concurrency involved
somewhere.

Or am I confused and there are now single-threaded add/delete operations
for llist?

> Since we don't care about traversing backwards, isn't it better to use llist
> for this usecase?
> 
> I think Vlad is using locking because we are also tracking the size of the
> llist to know when to free pages. Without any locking, that size tracking
> could suffer from the lost-update problem, e.g. if 2 lockless llist_add
> calls happened simultaneously.
> 
> Also if list_head is used, it will take more space and still use locking.

Indeed, it would be best to use a non-concurrent singly linked list.

							Thanx, Paul

> Thoughts?
> 
> thanks,
> 
>  - Joel
> 
> > > 
> > > The comments would be a good start.  Just to take pity on people seeing
> > > the potential for concurrency and wondering how the concurrent accesses
> > > actually happen.  ;-)
> > > 
> > Sounds like you are kidding me :) 
> > 
> > --
> > Vlad Rezki
