Message-ID: <CAJZ5v0iLr3JZ49gX9XbkjPNr_wRDMyAtMZDZ6Aoxz1KgQZ_moA@mail.gmail.com>
Date: Mon, 12 May 2025 14:53:56 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, "Rafael J. Wysocki" <rjw@...ysocki.net>, 
	Linux PM <linux-pm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, 
	Lukasz Luba <lukasz.luba@....com>, Peter Zijlstra <peterz@...radead.org>, 
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>, 
	Dietmar Eggemann <dietmar.eggemann@....com>, Morten Rasmussen <morten.rasmussen@....com>, 
	Vincent Guittot <vincent.guittot@...aro.org>, 
	Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>, 
	Pierre Gondois <pierre.gondois@....com>, Christian Loehle <christian.loehle@....com>
Subject: Re: [PATCH v2 2/7] cpufreq/sched: Move cpufreq-specific EAS checks to cpufreq

On Mon, May 12, 2025 at 8:48 AM Marek Szyprowski
<m.szyprowski@...sung.com> wrote:
>
> On 10.05.2025 13:31, Rafael J. Wysocki wrote:
> > On Sat, May 10, 2025 at 1:49 AM Marek Szyprowski
> > <m.szyprowski@...sung.com> wrote:
> >> On 06.05.2025 22:37, Rafael J. Wysocki wrote:
> >>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> >>>
> >>> Doing cpufreq-specific EAS checks that require accessing policy
> >>> internals directly from sched_is_eas_possible() is a bit unfortunate,
> >>> so introduce cpufreq_ready_for_eas() in cpufreq, move those checks
> >>> into that new function and make sched_is_eas_possible() call it.
> >>>
> >>> While at it, address a possible race between the EAS governor check
> >>> and governor change by doing the former under the policy rwsem.
> >>>
> >>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> >>> Reviewed-by: Christian Loehle <christian.loehle@....com>
> >>> Tested-by: Christian Loehle <christian.loehle@....com>
> >>> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
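
(For anyone not following the whole series: the new helper is
essentially a per-CPU wrapper around the governor check that used to
live in sched_is_eas_possible(), along the lines of the sketch below;
see the actual patch for the exact code.)

	/* Sketch of the cpufreq-side helper, not the exact patch. */
	bool cpufreq_ready_for_eas(const struct cpumask *cpu_mask)
	{
		unsigned int cpu;

		/* EAS requires the schedutil governor on every CPU. */
		for_each_cpu(cpu, cpu_mask) {
			struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
			bool ok;

			if (!policy)
				return false;

			ok = policy->governor == &schedutil_gov;
			cpufreq_cpu_put(policy);

			if (!ok)
				return false;
		}

		return true;
	}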
> >> In my tests I've noticed that this patch, merged as commit 4854649b1fb4
> >> ("cpufreq/sched: Move cpufreq-specific EAS checks to cpufreq"), causes a
> >> regression on the ARM64 Amlogic Meson SoC based OdroidN2 board: the
> >> board eventually locks up. Reverting $subject on top of next-20250509
> >> fixes the issue. Here is the lockdep warning observed before the lockup:
> > Thanks for the report!
> >
> >> ======================================================
> >> WARNING: possible circular locking dependency detected
> >> 6.15.0-rc5-next-20250509-dirty #10335 Tainted: G         C
> >> cpufreq: cpufreq_policy_online: CPU2: Running at unlisted initial
> >> frequency: 999999 kHz, changing to: 1000000 kHz
> >> ------------------------------------------------------
> >> kworker/3:1/79 is trying to acquire lock:
> >> ffff00000494b380 (&policy->rwsem){++++}-{4:4}, at:
> >> cpufreq_ready_for_eas+0x60/0xbc
> >>
> >> but task is already holding lock:
> >> ffff8000832887a0 (sched_domains_mutex){+.+.}-{4:4}, at:
> >> partition_sched_domains+0x54/0x938
> >>
> >> which lock already depends on the new lock.
> >>
> >> the existing dependency chain (in reverse order) is:
> >>
> >> -> #2 (sched_domains_mutex){+.+.}-{4:4}:
> >>          __mutex_lock+0xa8/0x598
> >>          mutex_lock_nested+0x24/0x30
> >>          partition_sched_domains+0x54/0x938
> >>          rebuild_sched_domains_locked+0x2d4/0x900
> >>          rebuild_sched_domains+0x2c/0x48
> >>          rebuild_sched_domains_energy+0x3c/0x58
> >>          rebuild_sd_workfn+0x10/0x1c
> >>          process_one_work+0x208/0x604
> >>          worker_thread+0x244/0x388
> >>          kthread+0x150/0x228
> >>          ret_from_fork+0x10/0x20
> >>
> >> -> #1 (cpuset_mutex){+.+.}-{4:4}:
> >>          __mutex_lock+0xa8/0x598
> >>          mutex_lock_nested+0x24/0x30
> >>          cpuset_lock+0x1c/0x28
> >>          __sched_setscheduler+0x31c/0x830
> >>          sched_setattr_nocheck+0x18/0x24
> >>          sugov_init+0x1b4/0x388
> >>          cpufreq_init_governor.part.0+0x58/0xd4
> >>          cpufreq_set_policy+0x2c8/0x3ec
> >>          cpufreq_online+0x520/0xb20
> >>          cpufreq_add_dev+0x80/0x98
> >>          subsys_interface_register+0xfc/0x118
> >>          cpufreq_register_driver+0x150/0x238
> >>          dt_cpufreq_probe+0x148/0x488
> >>          platform_probe+0x68/0xdc
> >>          really_probe+0xbc/0x298
> >>          __driver_probe_device+0x78/0x12c
> >>          driver_probe_device+0xdc/0x164
> >>          __device_attach_driver+0xb8/0x138
> >>          bus_for_each_drv+0x80/0xdc
> >>          __device_attach+0xa8/0x1b0
> >>          device_initial_probe+0x14/0x20
> >>          bus_probe_device+0xb0/0xb4
> >>          deferred_probe_work_func+0x8c/0xc8
> >>          process_one_work+0x208/0x604
> >>          worker_thread+0x244/0x388
> >>          kthread+0x150/0x228
> >>          ret_from_fork+0x10/0x20
> >>
> >> -> #0 (&policy->rwsem){++++}-{4:4}:
> >>          __lock_acquire+0x1408/0x2254
> >>          lock_acquire+0x1c8/0x354
> >>          down_read+0x60/0x180
> >>          cpufreq_ready_for_eas+0x60/0xbc
> >>          sched_is_eas_possible+0x144/0x170
> >>          partition_sched_domains+0x504/0x938
> >>          rebuild_sched_domains_locked+0x2d4/0x900
> >>          rebuild_sched_domains+0x2c/0x48
> >>          rebuild_sched_domains_energy+0x3c/0x58
> >>          rebuild_sd_workfn+0x10/0x1c
> >>          process_one_work+0x208/0x604
> >>          worker_thread+0x244/0x388
> >>          kthread+0x150/0x228
> >>          ret_from_fork+0x10/0x20
> >>
> >> other info that might help us debug this:
> >>
> >> Chain exists of:
> >>     &policy->rwsem --> cpuset_mutex --> sched_domains_mutex
> >>
> >>    Possible unsafe locking scenario:
> >>
> >>          CPU0                    CPU1
> >>          ----                    ----
> >>     lock(sched_domains_mutex);
> >>                                  lock(cpuset_mutex);
> >>                                  lock(sched_domains_mutex);
> >>     rlock(&policy->rwsem);
> >>
> >>    *** DEADLOCK ***
> > Well, it turns out that trying to acquire policy->rwsem under
> > sched_domains_mutex is a bad idea.  The rwsem locking was added to
> > cpufreq_policy_is_good_for_eas() to address a theoretical race, so it
> > can be dropped safely.  A theoretical race is better than a real
> > deadlock.
> >
> > Please test the attached patch.
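
To spell out the inversion for the record, the two paths boil down to
the following lock orderings (simplified pseudo-C, not the actual call
chains):

	/* Path 1: sched domain rebuild (the kworker in the trace above).
	 *
	 * cpuset_mutex -> sched_domains_mutex -> policy->rwsem
	 */
	mutex_lock(&cpuset_mutex);
	mutex_lock(&sched_domains_mutex);
	down_read(&policy->rwsem);	/* new: cpufreq_ready_for_eas() */

	/* Path 2: governor init during cpufreq driver probe.
	 *
	 * policy->rwsem -> cpuset_mutex
	 * (sugov_init() ends up in __sched_setscheduler(), which takes
	 * cpuset_mutex while policy->rwsem is held.)
	 */
	down_write(&policy->rwsem);
	mutex_lock(&cpuset_mutex);

With both orderings in place, the cycle policy->rwsem -> cpuset_mutex ->
sched_domains_mutex -> policy->rwsem that lockdep reports becomes
possible.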
>
> This fixed the observed issue. Thanks!
>
> Reported-by: Marek Szyprowski <m.szyprowski@...sung.com>
> Tested-by: Marek Szyprowski <m.szyprowski@...sung.com>

Thanks for the confirmation!
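
For reference, the idea of the fix is to read the governor without
taking policy->rwsem at all (a rough sketch of the direction, not the
attached patch itself):

	/*
	 * Sketch only: check the governor without policy->rwsem.  The
	 * read may race with a concurrent governor change, but the worst
	 * case is a stale answer followed by another sched domain
	 * rebuild, which is far better than the deadlock above.
	 */
	policy = cpufreq_cpu_get(cpu);
	if (!policy)
		return false;

	gov = policy->governor;		/* racy read, tolerated */
	cpufreq_cpu_put(policy);

	return gov == &schedutil_gov;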
