Date:   Mon, 10 Aug 2020 18:09:36 +0200
From:   Michał Mirosław <mirq-linux@...e.qmqm.pl>
To:     Mark Brown <broonie@...nel.org>
Cc:     Dmitry Osipenko <digetx@...il.com>,
        Liam Girdwood <lgirdwood@...il.com>,
        linux-kernel@...r.kernel.org
Subject: Re: regulator: deadlock vs memory reclaim

On Mon, Aug 10, 2020 at 04:39:28PM +0100, Mark Brown wrote:
> On Mon, Aug 10, 2020 at 12:25:37AM +0200, Michał Mirosław wrote:
> 
> > regulator_lock_dependent() starts by taking regulator_list_mutex. The
> > same mutex covers e.g. regulator initialization, including memory allocations
> > that happen there. This will deadlock when you have a filesystem on e.g. eMMC
> > (which uses a regulator to control module voltages) and you register
> > a new regulator (hotplug a device?) while under memory pressure.
> 
> OK, that's very much a corner case, it only applies in the case of
> coupled regulators.  The most obvious thing here would be to move the
> allocations on registration out of the locked region, we really only
> need this in the regulator_find_coupler() call I think.  If the
> regulator isn't coupled we don't need to take the lock at all.

Currently, regulator_lock_dependent() is called by e.g. regulator_enable() and
regulator_get_voltage(), so in fact any regulator can deadlock this way.
I concur that the locking rules can (and need to) be relaxed.

> > Basically, we have a BKL for regulator_enable() and we're using ww_mutex
> > as a recursive mutex with no deadlock prevention whatsoever. The locks
> > also seem to cover way too much (e.g. initialization, even before making
> > the regulator visible to the system).
> 
> Could you be more specific about what you're looking at here?  There's
> nothing too obvious jumping out from the code here other than the bit
> around the coupling allocation, otherwise it looks like we're locking
> list walks.

When you look at the regulator API (regulator_enable() and friends),
their implementations all start with .._lock_dependent(), which takes
regulator_list_mutex around its work. This mutex is what makes the code
deadlock-prone vs memory allocations. I have a feeling that this lock is
a workaround for historical requirements (recursive locking of
regulator_dev) that may no longer be needed, or is simply overly
defensive programming. Hence my other patches and this inquiry.

Best Regards,
Michał Mirosław
