Date:   Thu, 22 Mar 2018 12:24:12 +0100
From:   Oleg Nesterov <oleg@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     torvalds@...ux-foundation.org, jannh@...gle.com,
        paulmck@...ux.vnet.ibm.com, bcrl@...ck.org,
        viro@...iv.linux.org.uk, kent.overstreet@...il.com,
        security@...nel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH 8/8] fs/aio: Use rcu_work instead of explicit rcu and
 work item

On 03/21, Tejun Heo wrote:
>
> Hello,
>
> On Wed, Mar 21, 2018 at 06:17:43PM +0100, Oleg Nesterov wrote:
> > Mostly I am asking because I do not really understand
> > "[PATCH 6/8] RCU, workqueue: Implement rcu_work".
> >
> > I mean, the code looks simple and correct but why does it play with
> > WORK_STRUCT_PENDING_BIT? IOW, I do not see a "good" use-case where 2 or more
> > queue_rcu_work()'s can use the same rwork and hit work_pending() == T. And
> > what should the caller do if queue_rcu_work() returns false?
>
> It's just following standard workqueue conventions.

OK. And I agree that the usage of WORK_STRUCT_PENDING_BIT in queue_rcu_work() is
fine. If nothing else, the kernel won't crash if you call queue_rcu_work() twice.
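
To recall, queue_rcu_work() in patch 6/8 is roughly the following (a sketch from
my reading of the series, not a verbatim quote):

	bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
	{
		struct work_struct *work = &rwork->work;

		/* the test_and_set of PENDING is why a 2nd queue_rcu_work()
		 * on the same rwork just returns false instead of corrupting
		 * anything */
		if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
			rwork->wq = wq;
			/* rcu_work_rcufn() queues the work after a grace period */
			call_rcu(&rwork->rcu, rcu_work_rcufn);
			return true;
		}

		return false;
	}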

But why can't flush_rcu_work() simply do flush_work()?
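
To recall, flush_rcu_work() in the patch does (again, from my reading):

	bool flush_rcu_work(struct rcu_work *rwork)
	{
		if (test_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(&rwork->work))) {
			rcu_barrier();
			flush_work(&rwork->work);
			return true;
		}

		return flush_work(&rwork->work);
	}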

If WORK_STRUCT_PENDING_BIT was set by us (our queue_rcu_work() succeeded and did
call_rcu()), we do not need rcu_barrier(). Why should we care about other rcu
callbacks?

If our queue_rcu_work() fails because someone else already set PENDING, how can
this rcu_barrier() help? We didn't even do call_rcu() in this case.

In short: once flush_work() returns, we know that the rcu callback which queued
this work has finished. It doesn't matter whether it was fired by us or not. And
if it was not fired by us, then rcu_barrier() doesn't imply a grace period anyway.
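
IOW, I'd expect something like this to be enough (a sketch of what I am
suggesting; quite possibly I am missing something):

	bool flush_rcu_work(struct rcu_work *rwork)
	{
		/*
		 * If our queue_rcu_work() set PENDING, the work can only be
		 * queued by our rcu callback, so flush_work() already waits
		 * for it; if someone else set PENDING, rcu_barrier() is not
		 * tied to their call_rcu() anyway.
		 */
		return flush_work(&rwork->work);
	}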


> We can try to
> make it more specialized but then flush_rcu_work()'s behavior would
> have to be different too and it gets confusing quickly.  Unless there are
> overriding reasons to deviate, I'd like to keep it consistent.

Perhaps all of this is consistent, but I fail to understand this API :/

And again, at least for fs/aio.c it doesn't buy much, and it is sub-optimal
compared to doing call_rcu() + schedule_work() by hand.
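
That is, roughly what fs/aio.c does today (a sketch; I am quoting the aio.c
names from memory, so treat them as illustrative):

	static void free_ioctx_rcufn(struct rcu_head *head)
	{
		struct kioctx *ctx = container_of(head, struct kioctx, free_rcu);

		/* punt the actual freeing to process context */
		INIT_WORK(&ctx->free_work, free_ioctx);
		schedule_work(&ctx->free_work);
	}

	/* and the caller simply does */
	call_rcu(&ctx->free_rcu, free_ioctx_rcufn);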

Oleg.
