Message-ID: <20180327142848.GA19341@redhat.com>
Date: Tue, 27 Mar 2018 16:28:48 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, jannh@...gle.com,
paulmck@...ux.vnet.ibm.com, bcrl@...ck.org,
viro@...iv.linux.org.uk, kent.overstreet@...il.com,
security@...nel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 8/8] fs/aio: Use rcu_work instead of explicit rcu and
work item
Hi Tejun,
On 03/26, Tejun Heo wrote:
>
> On Thu, Mar 22, 2018 at 12:24:12PM +0100, Oleg Nesterov wrote:
> >
> > But why can't flush_rcu_work() simply do flush_work()?
> >
> > If WORK_STRUCT_PENDING_BIT was set by us (rcu_work_rcufn() succeeded), we do not
> > need rcu_barrier(). Why should we care about other RCU callbacks?
> >
> > If rcu_work_rcufn() fails and someone else sets PENDING, how can this rcu_barrier()
> > help? We didn't even do call_rcu() in this case.
> >
> > In short: once flush_work() returns, we know that the RCU callback which queued this
> > work has finished. It doesn't matter whether it was fired by us or not. And if it was
> > not fired by us, then rcu_barrier() doesn't imply a grace period anyway.
>
> flush_*work() guarantees to wait for the completion of the latest
> instance of the work item which was visible to the caller. We can't
> guarantee that w/o rcu_barrier().
And this is what I can't understand.
So let me repeat: could you please describe a use-case which needs flush_rcu_work()
with rcu_barrier()?
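For reference, here is a simplified sketch of the flush path we are discussing,
as I read the proposed flush_rcu_work(); details may differ from your actual patch:

bool flush_rcu_work(struct rcu_work *rwork)
{
	if (test_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(&rwork->work))) {
		/* PENDING observed: wait for all pending RCU callbacks, then flush */
		rcu_barrier();
		flush_work(&rwork->work);
		return true;
	}

	/*
	 * PENDING not observed: only a plain flush. My question is whether
	 * this plain flush wouldn't be enough in the PENDING case as well.
	 */
	return flush_work(&rwork->work);
}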
> > And again, at least for fs/aio.c it doesn't offer much, and it is sub-optimal
> > compared to call_rcu() + schedule_work() by hand.
>
> Sure, this isn't about performance. It's about making the code easier
> on the eyes. If performance matters, we can certainly hand-craft things,
> but that doesn't seem to be the case, right?
OK, I won't insist.
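For completeness, the hand-crafted call_rcu() + schedule_work() variant I meant
looks roughly like the sketch below; the struct and function names are made up
for illustration, this is not the actual fs/aio.c code:

struct my_obj {
	struct rcu_head rcu_head;
	struct work_struct free_work;
	/* ... payload ... */
};

static void my_obj_free_workfn(struct work_struct *work)
{
	struct my_obj *obj = container_of(work, struct my_obj, free_work);

	/* heavy teardown that must run in process context */
	kfree(obj);
}

static void my_obj_free_rcu(struct rcu_head *head)
{
	struct my_obj *obj = container_of(head, struct my_obj, rcu_head);

	/* runs after a grace period; punt the real work to a workqueue */
	INIT_WORK(&obj->free_work, my_obj_free_workfn);
	schedule_work(&obj->free_work);
}

/* caller side, instead of queue_rcu_work(): */
static void my_obj_release(struct my_obj *obj)
{
	call_rcu(&obj->rcu_head, my_obj_free_rcu);
}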
Oleg.