Message-ID: <CAD=FV=UGTZ0UpB2KKdhkuM6fPhMOJTRLzd+X77UcMCo247n6-A@mail.gmail.com>
Date:   Wed, 29 Mar 2023 12:48:45 -0700
From:   Doug Anderson <dianders@...omium.org>
To:     Mark Brown <broonie@...nel.org>
Cc:     Dmitry Osipenko <digetx@...il.com>,
        Lucas Stach <l.stach@...gutronix.de>,
        Stephen Boyd <swboyd@...omium.org>,
        Liam Girdwood <lgirdwood@...il.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] regulator: Avoid lockdep reports when resolving supplies

Hi,

On Wed, Mar 29, 2023 at 11:03 AM Douglas Anderson <dianders@...omium.org> wrote:
>
> An automated bot told me that there was a potential lockdep problem
> with regulators. This was on the chromeos-5.15 kernel, but I see
> nothing that would be different downstream compared to upstream. The
> bot said:
>   ============================================
>   WARNING: possible recursive locking detected
>   5.15.104-lockdep-17461-gc1e499ed6604 #1 Not tainted
>   --------------------------------------------
>   kworker/u16:4/115 is trying to acquire lock:
>   ffffff8083110170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: create_regulator+0x398/0x7ec
>
>   but task is already holding lock:
>   ffffff808378e170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: ww_mutex_trylock+0x3c/0x7b8
>
>   other info that might help us debug this:
>    Possible unsafe locking scenario:
>
>          CPU0
>          ----
>     lock(regulator_ww_class_mutex);
>     lock(regulator_ww_class_mutex);
>
>    *** DEADLOCK ***
>
>    May be due to missing lock nesting notation
>
>   4 locks held by kworker/u16:4/115:
>    #0: ffffff808006a948 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x520/0x1348
>    #1: ffffffc00e0a7cc0 ((work_completion)(&entry->work)){+.+.}-{0:0}, at: process_one_work+0x55c/0x1348
>    #2: ffffff80828a2260 (&dev->mutex){....}-{3:3}, at: __device_attach_async_helper+0xd0/0x2a4
>    #3: ffffff808378e170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: ww_mutex_trylock+0x3c/0x7b8
>
>   stack backtrace:
>   CPU: 2 PID: 115 Comm: kworker/u16:4 Not tainted 5.15.104-lockdep-17461-gc1e499ed6604 #1 9292e52fa83c0e23762b2b3aa1bacf5787a4d5da
>   Hardware name: Google Quackingstick (rev0+) (DT)
>   Workqueue: events_unbound async_run_entry_fn
>   Call trace:
>    dump_backtrace+0x0/0x4ec
>    show_stack+0x34/0x50
>    dump_stack_lvl+0xdc/0x11c
>    dump_stack+0x1c/0x48
>    __lock_acquire+0x16d4/0x6c74
>    lock_acquire+0x208/0x750
>    __mutex_lock_common+0x11c/0x11f8
>    ww_mutex_lock+0xc0/0x440
>    create_regulator+0x398/0x7ec
>    regulator_resolve_supply+0x654/0x7c4
>    regulator_register_resolve_supply+0x30/0x120
>    class_for_each_device+0x1b8/0x230
>    regulator_register+0x17a4/0x1f40
>    devm_regulator_register+0x60/0xd0
>    reg_fixed_voltage_probe+0x728/0xaec
>    platform_probe+0x150/0x1c8
>    really_probe+0x274/0xa20
>    __driver_probe_device+0x1dc/0x3f4
>    driver_probe_device+0x78/0x1c0
>    __device_attach_driver+0x1ac/0x2c8
>    bus_for_each_drv+0x11c/0x190
>    __device_attach_async_helper+0x1e4/0x2a4
>    async_run_entry_fn+0xa0/0x3ac
>    process_one_work+0x638/0x1348
>    worker_thread+0x4a8/0x9c4
>    kthread+0x2e4/0x3a0
>    ret_from_fork+0x10/0x20
>
> The problem was first reported soon after we made many of the
> regulators probe asynchronously, though nothing I've seen implies that
> the problem couldn't have happened without that change.
>
> I haven't personally been able to reproduce the lockdep issue, but the
> issue does look somewhat legitimate. Specifically, it looks like in
> regulator_resolve_supply() we are holding a "rdev" lock while calling
> set_supply() -> create_regulator() which grabs the lock of a
> _different_ "rdev" (the one for our supply). This is not necessarily
> safe from a lockdep perspective since there is no documented ordering
> between these two locks.
>
> In reality, we should always be locking a regulator before the
> supplying regulator, so I don't expect there to be any real deadlocks
> in practice. However, the regulator framework in general doesn't
> express this to lockdep.
>
> Let's fix the issue by simply grabbing the two locks involved in the
> same way we grab multiple locks elsewhere in the regulator framework:
> using the "wound/wait" mechanisms.
>
> Signed-off-by: Douglas Anderson <dianders@...omium.org>
> ---
> My knowledge of lockdep and of wait/wound locks is not as strong as
> it should be. That, combined with the fact that I can't actually
> reproduce the issue, has led me to label this as RFC.
>
> I can at least confirm that my system still boots with this patch
> applied, but I can't say 100% for sure that this addresses the issue
> that the bot reported to me. Hopefully others can review and make sure
> that this seems sensible to them.
>
> If this looks reasonable, I can land it and see if that prevents the
> bot from hitting this again.
>
>  drivers/regulator/core.c | 89 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 81 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
> index 1490eb40c973..822fec20d36a 100644
> --- a/drivers/regulator/core.c
> +++ b/drivers/regulator/core.c
> @@ -207,6 +207,76 @@ static void regulator_unlock(struct regulator_dev *rdev)
>         mutex_unlock(&regulator_nesting_mutex);
>  }
>
> +/**
> + * regulator_lock_two - lock two regulators
> + * @rdev1:             first regulator
> + * @rdev2:             second regulator
> + * @ww_ctx:            w/w mutex acquire context
> + *
> + * Locks both rdevs using the regulator_ww_class.
> + */
> +static void regulator_lock_two(struct regulator_dev *rdev1,
> +                              struct regulator_dev *rdev2,
> +                              struct ww_acquire_ctx *ww_ctx)
> +{
> +       struct regulator_dev *tmp;
> +       int ret;
> +
> +       ww_acquire_init(ww_ctx, &regulator_ww_class);
> +
> +       /* Try to just grab both of them */
> +       ret = regulator_lock_nested(rdev1, ww_ctx);
> +       WARN_ON(ret);
> +       ret = regulator_lock_nested(rdev2, ww_ctx);
> +       if (ret != -EDEADLOCK) {
> +               WARN_ON(ret);
> +               goto exit;
> +       }
> +
> +       while (true) {
> +               /*
> +                * Start of loop: rdev1 was locked and rdev2 was contended.
> +                * Need to unlock rdev1, slowly lock rdev2, then try rdev1
> +                * again.
> +                */
> +               regulator_unlock(rdev1);
> +
> +               ww_mutex_lock_slow(&rdev2->mutex, ww_ctx);
> +               ret = regulator_lock_nested(rdev1, ww_ctx);

Doh. I did some more stress testing right after I sent this out and it
caught a bug where "ref_cnt" was underflowing. I'm fairly certain I
need a:

  rdev2->ref_cnt++;

...right after the ww_mutex_lock_slow(), since that doesn't increment
the ref_cnt. I suspect I also need:

  rdev->mutex_owner = current;

...though I'm confused because I think some other places don't do
that. In any case, I'll confirm and do some more stress testing and
then send out another patch shortly. Sorry for jumping the gun on
sending the first version.
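
Concretely, I expect the slow path in regulator_lock_two() to end up
looking something like this (untested sketch; I'll confirm it with more
stress testing before sending the next version):

  ww_mutex_lock_slow(&rdev2->mutex, ww_ctx);
  /*
   * ww_mutex_lock_slow() takes the mutex directly, so do the
   * bookkeeping that regulator_lock_nested() would otherwise do.
   */
  rdev2->ref_cnt++;
  rdev2->mutex_owner = current;

  ret = regulator_lock_nested(rdev1, ww_ctx);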

-Doug
