Message-ID: <deaaabe5-0a0c-b6f9-c85b-5080874f6437@oracle.com>
Date: Sun, 3 Jul 2022 21:09:05 +1000
From: Imran Khan <imran.f.khan@...cle.com>
To: Tejun Heo <tj@...nel.org>
Cc: gregkh@...uxfoundation.org, viro@...iv.linux.org.uk,
m.szyprowski@...sung.com, nathan@...nel.org, michael@...le.cc,
robh@...nel.org, linux-serial@...r.kernel.org,
linux-kernel@...r.kernel.org, guillaume.tucker@...labora.com,
pmladek@...e.com
Subject: Re: [RESEND PATCH] kernfs: Avoid re-adding kernfs_node into
kernfs_notify_list.
Hello Tejun,
Thanks for your feedback.
On 2/7/22 6:11 am, Tejun Heo wrote:
> Hello,
>
> On Sat, Jul 02, 2022 at 01:46:04AM +1000, Imran Khan wrote:
>> @@ -992,9 +993,11 @@ void kernfs_notify(struct kernfs_node *kn)
>> rcu_read_unlock();
>>
>> /* schedule work to kick fsnotify */
>> - kernfs_get(kn);
>> - llist_add(&kn->attr.notify_next, &kernfs_notify_list);
>> - schedule_work(&kernfs_notify_work);
>> + if (kn->attr.notify_next.next != NULL) {
>> + kernfs_get(kn);
>> + llist_add(&kn->attr.notify_next, &kernfs_notify_list);
>> + schedule_work(&kernfs_notify_work);
>> + }
>
> Aren't you just narrowing the race window here? What prevents two
> threads simultaneously testing for non NULL and then entering the
> addition block together?
>
Indeed that is possible.
> Looked at the llist code and it doesn't support multiple producers
> trying to add the same node, unfortunately, so I'm not sure llist is
> gonna work here. For now, the right thing to do prolly is reverting
> it.
>
Can we use kernfs_notify_lock, as in the snippet below, to serialize the
producers (kernfs_notify)?

	spin_lock_irqsave(&kernfs_notify_lock, flags);
	if (kn->attr.notify_next.next != NULL) {
		kernfs_get(kn);
		llist_add(&kn->attr.notify_next, &kernfs_notify_list);
		schedule_work(&kernfs_notify_work);
	}
	spin_unlock_irqrestore(&kernfs_notify_lock, flags);
As per the following comment at the beginning of llist.h:
* Cases where locking is not needed:
* If there are multiple producers and multiple consumers, llist_add can be
* used in producers and llist_del_all can be used in consumers simultaneously
* without locking. Also a single consumer can use llist_del_first while
* multiple producers simultaneously use llist_add, without any locking.
Multiple producers and a single consumer can work in parallel, but since in our
case the list addition depends on the check kn->attr.notify_next.next != NULL,
we could keep both the check and the addition under kernfs_notify_lock, and on
the consumer side take kernfs_notify_lock just around free->next = NULL.
Having said that, I am okay with reverting the llist change as well, since it
is not helping with the contention we are chasing here, but I wanted to share
the above idea to see whether it is reliable and better than the revert.
Thanks
-- Imran