Date:   Mon, 26 Nov 2018 18:11:19 -0800
From:   Dan Williams <dan.j.williams@...el.com>
To:     alexander.h.duyck@...ux.intel.com
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Greg KH <gregkh@...uxfoundation.org>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        Tejun Heo <tj@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux-pm mailing list <linux-pm@...r.kernel.org>,
        jiangshanlai@...il.com, "Rafael J. Wysocki" <rafael@...nel.org>,
        "Brown, Len" <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
        zwisler@...nel.org, Dave Jiang <dave.jiang@...el.com>,
        bvanassche@....org
Subject: Re: [driver-core PATCH v6 4/9] driver core: Move async_synchronize_full
 call

On Thu, Nov 8, 2018 at 10:07 AM Alexander Duyck
<alexander.h.duyck@...ux.intel.com> wrote:
>
> Move the async_synchronize_full call out of __device_release_driver and
> into driver_detach.
>
> The idea behind this is that the async_synchronize_full call will only
> guarantee that already-scheduled async operations are flushed. It does
> nothing to guarantee that a hotplug event occurring while we are
> releasing the driver will not be asynchronously scheduled.
>
> By moving this into the driver_detach path we can avoid potential
> deadlocks: we aren't holding the device lock at that point, and the
> driver we want to flush should no longer be loaded, so the flush will
> take care of any asynchronous events the driver we are detaching might
> have scheduled.
>
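
[ For reference, a rough sketch of the relocation being described.
  Function names are from drivers/base/dd.c; bodies are heavily
  abbreviated and the exact diff may differ. ]

	static void __device_release_driver(struct device *dev,
					    struct device *parent)
	{
		struct device_driver *drv = dev->driver;

		if (!drv)
			return;

		/*
		 * Before the patch the flush lived here, i.e. it ran with
		 * the device lock already held by our caller:
		 *
		 *	if (driver_allows_async_probing(drv))
		 *		async_synchronize_full();
		 */

		/* ... unbind dev from drv ... */
	}

	void driver_detach(struct device_driver *drv)
	{
		/*
		 * After the patch the flush happens once here, before the
		 * driver's device list is walked and before any device
		 * lock is taken.
		 */
		if (driver_allows_async_probing(drv))
			async_synchronize_full();

		/* ... for each device still bound to drv: release it ... */
	}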

What problem is this patch solving in practice? If there were drivers
issuing async work from probe, they would need to be responsible for
flushing it themselves. That said, it seems broken that the async
probing infrastructure takes the device_lock inside async_schedule and
then holds the lock when calling async_synchronize_full. Is it just
luck that this hasn't caused deadlocks in practice?
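
To make the ordering concrete, this is my (simplified) reading of the
pre-patch paths in drivers/base/dd.c -- not verbatim, just the locking
shape:

	/* Async probe side: the worker takes the device lock itself. */
	static void __device_attach_async_helper(void *_dev,
						 async_cookie_t cookie)
	{
		struct device *dev = _dev;

		device_lock(dev);
		/* ... driver_probe_device() ... */
		device_unlock(dev);
	}

	/* Unbind side, pre-patch: the flush runs under a device lock. */
	static void device_release_driver_internal(struct device *dev,
						   struct device_driver *drv,
						   struct device *parent)
	{
		device_lock(dev);
		__device_release_driver(dev, parent);
		/*
		 * ... which calls async_synchronize_full(), waiting for
		 * workers like the one above that take device locks of
		 * their own, while we are still holding this one.
		 */
		device_unlock(dev);
	}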

Given that the device_lock is hidden from lockdep I think it would be
helpful to have a custom lock_map_acquire() setup, similar to the
workqueue core, to try to keep the locking rules enforced /
documented.
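
Something along these lines, say -- entirely illustrative, the map name
and the flush_async_probes() helper are made up here, and it just
mirrors the flush_workqueue()/process_one_work() annotations rather
than anything that exists in the driver core today:

	#include <linux/lockdep.h>

	static struct lockdep_map async_probe_map =
		STATIC_LOCKDEP_MAP_INIT("async_probe", &async_probe_map);

	/* Wrap the async probe work so lockdep sees what it runs under. */
	static void __device_attach_async_helper(void *_dev,
						 async_cookie_t cookie)
	{
		struct device *dev = _dev;

		lock_map_acquire(&async_probe_map);
		device_lock(dev);
		/* ... probe ... */
		device_unlock(dev);
		lock_map_release(&async_probe_map);
	}

	/*
	 * "Acquire" the same map wherever we flush, so flushing while
	 * holding anything the worker also takes (and lockdep can see)
	 * shows up as an inversion.
	 */
	static void flush_async_probes(void)
	{
		lock_map_acquire(&async_probe_map);
		lock_map_release(&async_probe_map);
		async_synchronize_full();
	}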

The only documentation I can find for async-probe deadlock avoidance
is the comment block in do_init_module() for async_probe_requested.

Stepping back a bit, does this patch have anything to do with the
performance improvement, or is it a separate "by the way I also found
this" kind of patch?
