Message-ID: <YsSD4k0o6PPYpMBe@slm.duckdns.org>
Date: Tue, 5 Jul 2022 08:33:06 -1000
From: Tejun Heo <tj@...nel.org>
To: Imran Khan <imran.f.khan@...cle.com>
Cc: gregkh@...uxfoundation.org, viro@...iv.linux.org.uk,
m.szyprowski@...sung.com, nathan@...nel.org, michael@...le.cc,
robh@...nel.org, linux-serial@...r.kernel.org,
linux-kernel@...r.kernel.org, guillaume.tucker@...labora.com,
pmladek@...e.com
Subject: Re: [RESEND PATCH] kernfs: Avoid re-adding kernfs_node into
kernfs_notify_list.
Hello,
On Sun, Jul 03, 2022 at 09:09:05PM +1000, Imran Khan wrote:
> Can we use kernfs_notify_lock, as in the snippet below, to serialize producers
> (kernfs_notify):
>
> spin_lock_irqsave(&kernfs_notify_lock, flags);
> if (kn->attr.notify_next.next != NULL) {
>         kernfs_get(kn);
>         llist_add(&kn->attr.notify_next, &kernfs_notify_list);
>         schedule_work(&kernfs_notify_work);
> }
> spin_unlock_irqrestore(&kernfs_notify_lock, flags);
But then what's the point of using llist?
> As per following comments at the beginning of llist.h
>
> * Cases where locking is not needed:
> * If there are multiple producers and multiple consumers, llist_add can be
> * used in producers and llist_del_all can be used in consumers simultaneously
> * without locking. Also a single consumer can use llist_del_first while
> * multiple producers simultaneously use llist_add, without any locking.
>
> Multiple producers and a single consumer can work in parallel, but since in our
> case the addition depends on kn->attr.notify_next.next != NULL, we could keep
> the check and the list addition under kernfs_notify_lock, and on the consumer
> side just do free->next = NULL under kernfs_notify_lock.
It supports multiple producers in the sense that multiple producers can
concurrently add their own llist_nodes. It doesn't support multiple producers
trying to add the same llist_node, whether that addition is gated on a NULL
check or not.
> Having said this, I am okay with reverting the llist change as well, because
> in any case it is not helping with the contention we are chasing here. But I
> thought of sharing the above idea to see whether it is reliable and better
> than the revert option.
The original conversion is broken and the right thing to do, for now, is to
revert it.
Thanks.
--
tejun