Message-ID: <YCz6nz4i136z1+H1@alley>
Date:   Wed, 17 Feb 2021 12:14:39 +0100
From:   Petr Mladek <pmladek@...e.com>
To:     Yiwei Zhang <zzyiwei@...roid.com>
Cc:     Christoph Hellwig <hch@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Felix Kuehling <Felix.Kuehling@....com>,
        Jens Axboe <axboe@...nel.dk>,
        "J. Bruce Fields" <bfields@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        Ilias Stamatis <stamatis.iliass@...il.com>,
        Rob Clark <robdclark@...omium.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Liang Chen <cl@...k-chips.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        kernel-team <kernel-team@...roid.com>
Subject: Re: [PATCH] kthread: add kthread_mod_pending_delayed_work api

On Tue 2021-02-16 10:58:36, Yiwei Zhang wrote:
> On Mon, Feb 15, 2021 at 5:28 AM Petr Mladek <pmladek@...e.com> wrote:
> >
> > On Sun 2021-02-14 00:06:11, Yiwei Zhang wrote:
> > > The existing kthread_mod_delayed_work() API will queue a new work
> > > item if it fails to cancel the current one because it is no longer
> > > pending. However, there is a case where the same work can be enqueued
> > > from both an async request and a delayed work, and a race can happen
> > > if the async request comes right after the timeout delayed work gets
> > > scheduled,
> >
> > In other words, you want to modify the delayed work only when it is
> > still waiting in the queue. You do not want to queue a new work item
> > when the original one is no longer queued. Do I get it right?
> >
> Yes, you are correct.
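
To be concrete, I imagine the semantics like the sketch below. The
signature is my assumption, mirroring kthread_mod_delayed_work(); the
variable names are made up:

        /*
         * Reschedule the delayed work only when it is still pending.
         * Unlike kthread_mod_delayed_work(), do not queue the work
         * again when it has already started running or was cancelled.
         */
        if (!kthread_mod_pending_delayed_work(worker, &cleanup_dwork,
                                              new_delay)) {
                /*
                 * The work is no longer pending: nothing was modified
                 * and no new work was queued.
                 */
        }
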
> 
> > Could you please provide a patch where the new API is used?
> >
> Currently it will only be used in a downstream GPU driver.
> 
> > > because the clean up work may not be safe to run twice.
> >
> > This looks like a design problem in the code. There is likely
> > another race that might break it. You should ask the following
> > questions:
> >
> > Why would anyone try to modify the clean up work when it has already
> > been queued? There should be only one location/caller that triggers
> > the clean up.
> >
> The clean up work was initially queued as a safety timeout in case
> the userspace never queues the clean up work (e.g. the process
> crashes), which would leave the driver in an incorrect global state
> (e.g. a power state set by some hinting). In addition, in the racing
> scenario the clean up work also has to free the cached work struct.
> 
> > Could anyone queue any work to the workqueue after the clean up
> > work was queued? The cleanup work should be the last queued one.
> > The workqueue user must inform all other users that the queue
> > is being destroyed and nobody is allowed to queue any work
> > any longer.
> >
> The user can queue the initial work (internally it queues a clean up
> work with a long timeout in case the user doesn't queue it later).
> Then, after the user has done its work within the scope, it queues
> the clean up work again to cancel out the effect, which is when it
> may race with the underlying timed-out clean up work.

And this is exactly the design problem. If the race is possible, then
there are three scenarios:

1. The user does the clean up before the timeout. This is the scenario
   where things work as expected.

2. The user-triggered clean up races with the clean up triggered by
   the timeout. This is the scenario this patch tries to handle.

3. The user does the clean up after it has already been done by the
   timeout. It means that the user used the driver after it had
   already been cleaned up. This should not happen. I guess that user
   commands will fail when the device has been cleaned up in the
   meantime.

In other words, you are focusing on a small race window. But there is
a much bigger problem if the user can still use the already cleaned-up
driver.
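
For example, scenario 3 could be made to fail safely with an explicit
flag checked under a lock. A rough sketch, where all the names are
made up:

        /* In every user-triggered driver entry point: */
        mutex_lock(&drv->lock);
        if (drv->cleaned_up) {
                /* The clean up has already run; refuse late commands. */
                mutex_unlock(&drv->lock);
                return -ENODEV;
        }
        /* ... handle the command ... */
        mutex_unlock(&drv->lock);

        /* And in the clean up work itself: */
        mutex_lock(&drv->lock);
        drv->cleaned_up = true;
        mutex_unlock(&drv->lock);
        /* ... release the driver state ... */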

There must be a better solution. You should avoid the timer because
it is not reliable. The following ideas come to mind:

1. The userspace application might do the clean up from a fatal-signal
   handler (note that SIGKILL itself cannot be caught). It would then
   do the clean up even when it crashes. But you would still rely on
   the userspace to do the correct thing.
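
   A rough userspace sketch; the clean up ioctl number is made up, and
   the handler is kept to bare system calls:

        #include <signal.h>
        #include <sys/ioctl.h>

        static int drv_fd;      /* the opened device, set elsewhere */

        static void cleanup_on_signal(int sig)
        {
                /* Keep the handler minimal: a single system call.
                 * The request number 0xC0DE is purely hypothetical. */
                ioctl(drv_fd, 0xC0DE);

                /* Restore the default action and re-raise so that the
                 * process still dies with the original signal. */
                signal(sig, SIG_DFL);
                raise(sig);
        }

        static void install_cleanup_handlers(void)
        {
                struct sigaction sa = { .sa_handler = cleanup_on_signal };

                /* SIGKILL cannot be caught; cover the catchable fatal
                 * signals instead. */
                sigaction(SIGTERM, &sa, NULL);
                sigaction(SIGSEGV, &sa, NULL);
                sigaction(SIGABRT, &sa, NULL);
        }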

2. I do not see a clean solution in the kernel.

   One possibility might be to register something called from
   __put_task_struct(). It seems that profile_handoff_task() calls
   notifiers that can be registered from anywhere.

   Another possibility might be to register a notifier called by
   profile_task_exit(tsk), which is called from do_exit().

   It is not a clean solution because the task profiling code has
   another purpose. It might make sense to introduce a new generic
   notifier that is called during task exit for this purpose. I am
   sure that it would find even more users.
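
   For illustration, a task-exit notifier might look like the untested
   sketch below; my_tracked_pid and schedule_cleanup_for() are made-up
   driver-side helpers:

        #include <linux/notifier.h>
        #include <linux/profile.h>
        #include <linux/sched.h>

        static pid_t my_tracked_pid;    /* set when the user connects */

        static int my_task_exit_notify(struct notifier_block *nb,
                                       unsigned long action, void *data)
        {
                struct task_struct *tsk = data;

                /* Trigger the clean up when the task that owns the
                 * driver state exits, instead of relying on a timer. */
                if (tsk->pid == my_tracked_pid)
                        schedule_cleanup_for(tsk);

                return NOTIFY_OK;
        }

        static struct notifier_block my_exit_nb = {
                .notifier_call = my_task_exit_notify,
        };

        /* In the driver init/exit paths: */
        profile_event_register(PROFILE_TASK_EXIT, &my_exit_nb);
        profile_event_unregister(PROFILE_TASK_EXIT, &my_exit_nb);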

   Anyway, look at put_task_struct(). It seems to be called by some
   drivers when destroying their state. Maybe you will find something
   there that you can use.

Best Regards,
Petr
