Date:   Fri, 19 Feb 2021 11:27:13 +0100
From:   Petr Mladek <pmladek@...e.com>
To:     Yiwei Zhang <zzyiwei@...roid.com>
Cc:     Christoph Hellwig <hch@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Felix Kuehling <Felix.Kuehling@....com>,
        Jens Axboe <axboe@...nel.dk>,
        "J. Bruce Fields" <bfields@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        Ilias Stamatis <stamatis.iliass@...il.com>,
        Rob Clark <robdclark@...omium.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Liang Chen <cl@...k-chips.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        kernel-team <kernel-team@...roid.com>
Subject: Re: [PATCH] kthread: add kthread_mod_pending_delayed_work api

On Thu 2021-02-18 22:29:24, Yiwei Zhang wrote:
> > 2. User triggered clean up races with the clean up triggered by
> >   timeout. You try to handle this scenario by this patch.
> Yes, exactly.
> 
> > 3. User does clean up after the clean up has already been done
> >   by the timeout.
> This case is well handled. So only (2) has a potential race.

Just to be sure: does the user side work correctly when the clean up
work is done by the timeout before the user wanted to do the clean up?

> Let me clarify a bit more here. The "clean up" is not the clean up
> done when a process tears down; it is actually a "post-work" that
> cancels out an earlier "pre-work". The "pre-work" enqueues the
> delayed "post-work" for the timeout purpose. That pair of operations
> can happen repeatedly.
> 
> The race is currently worked around by refcounting the delayed_work
> container, and the later "post-work" takes care of deallocating the
> work.
> 
> I mainly want to reach out to see if we agree that this is a valid
> API for kthread to support. Alternatively, we could extend the
> existing kthread_mod_delayed_work API to take another option that
> does not re-queue the work if the cancel failed.

OK, I could imagine a situation where you want to speed up the delayed
work and avoid this race.
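
To make sure we are talking about the same pattern, here is a minimal
sketch of how I read your description. The names (cleanup_ctx,
do_post_work, pre_work, user_cleanup, worker) and the structure are
made up for illustration; they are not taken from your driver:

#include <linux/kthread.h>
#include <linux/refcount.h>
#include <linux/slab.h>

/* Illustration only: a refcounted container for the delayed "post-work". */
struct cleanup_ctx {
	struct kthread_delayed_work post_work;
	refcount_t refs;	/* current workaround: refcount the container */
};

/* Assumed to be created elsewhere with kthread_create_worker(). */
static struct kthread_worker *worker;

static void do_post_work(struct kthread_work *work)
{
	struct cleanup_ctx *ctx = container_of(work, struct cleanup_ctx,
					       post_work.work);

	/* ... cancel out whatever the earlier "pre-work" set up ... */

	if (refcount_dec_and_test(&ctx->refs))
		kfree(ctx);
}

/* "Pre-work": arm the timeout so the post-work runs even if the user
 * never triggers it. */
static struct cleanup_ctx *pre_work(void)
{
	struct cleanup_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;

	refcount_set(&ctx->refs, 1);
	kthread_init_delayed_work(&ctx->post_work, do_post_work);
	kthread_queue_delayed_work(worker, &ctx->post_work, 5 * HZ);
	return ctx;
}

/* User-triggered path: expedite the post-work instead of waiting for
 * the timeout. */
static void user_cleanup(struct cleanup_ctx *ctx)
{
	/*
	 * This can race with the timer firing.  If the callback is
	 * already running, kthread_mod_delayed_work() re-queues the
	 * work, which is exactly the behavior you want to avoid
	 * (hence the new API or a "do not re-queue" option).
	 */
	kthread_mod_delayed_work(worker, &ctx->post_work, 0);
}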

The kthread_worker API has more or less the same semantics as the
workqueue API, which makes it easier to switch between them.

The workqueue API has flush_delayed_work(), which does basically the
same thing as your code. We should call the new function
kthread_worker_flush_delayed_work().
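
Just to illustrate the semantics I have in mind, here is a rough
sketch modeled on the workqueue counterpart. It is not the patch
itself; a real implementation would have to take worker->lock and
avoid queuing a work that was never pending:

/*
 * Sketch: make a pending delayed work run immediately and wait for
 * its callback to finish, like flush_delayed_work() does for
 * workqueues.
 */
void kthread_worker_flush_delayed_work(struct kthread_worker *worker,
				       struct kthread_delayed_work *dwork)
{
	/* Expedite: if the timer is still pending, queue the work now. */
	kthread_mod_delayed_work(worker, dwork, 0);

	/* Wait until the work callback has finished executing. */
	kthread_flush_work(&dwork->work);
}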

I am personally fine with adding this API, and I am going to comment
on the original code. That said, there might be push-back from other
people because there will be no in-tree user.

Best Regards,
Petr
