Message-ID: <d8be2a553690ebcf915cd1ad395c3394158abd58.camel@mediatek.com>
Date: Tue, 2 Sep 2025 12:39:12 +0000
From: Peter Wang (王信友) <peter.wang@...iatek.com>
To: "martin.petersen@...cle.com" <martin.petersen@...cle.com>,
	"James.Bottomley@...senPartnership.com"
	<James.Bottomley@...senPartnership.com>, "alim.akhtar@...sung.com"
	<alim.akhtar@...sung.com>, "avri.altman@....com" <avri.altman@....com>,
	"zhongqiu.han@....qualcomm.com" <zhongqiu.han@....qualcomm.com>,
	"bvanassche@....org" <bvanassche@....org>
CC: "tanghuan@...o.com" <tanghuan@...o.com>, AngeloGioacchino Del Regno
	<angelogioacchino.delregno@...labora.com>, "linux-scsi@...r.kernel.org"
	<linux-scsi@...r.kernel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "adrian.hunter@...el.com"
	<adrian.hunter@...el.com>, "ebiggers@...nel.org" <ebiggers@...nel.org>,
	"quic_mnaresh@...cinc.com" <quic_mnaresh@...cinc.com>,
	"ziqi.chen@....qualcomm.com" <ziqi.chen@....qualcomm.com>,
	"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
	"quic_narepall@...cinc.com" <quic_narepall@...cinc.com>,
	"nitin.rawat@....qualcomm.com" <nitin.rawat@....qualcomm.com>,
	"quic_nguyenb@...cinc.com" <quic_nguyenb@...cinc.com>, "huobean@...il.com"
	<huobean@...il.com>, "neil.armstrong@...aro.org" <neil.armstrong@...aro.org>,
	"liu.song13@....com.cn" <liu.song13@....com.cn>, "can.guo@....qualcomm.com"
	<can.guo@....qualcomm.com>
Subject: Re: [PATCH v2] scsi: ufs: core: Fix data race in CPU latency PM QoS
 request handling

On Tue, 2025-09-02 at 15:48 +0800, Zhongqiu Han wrote:
> 
> The cpu_latency_qos_add/remove/update_request interfaces lack internal
> synchronization by design, requiring the caller to ensure thread safety.
> The current implementation relies on the `pm_qos_enabled` flag, which is
> insufficient to prevent concurrent access and cannot serve as a proper
> synchronization mechanism. This has led to data races and list
> corruption issues.
> 
> A typical race condition call trace is:
> 
> [Thread A]
> ufshcd_pm_qos_exit()
>   --> cpu_latency_qos_remove_request()
>     --> cpu_latency_qos_apply();
>       --> pm_qos_update_target()
>         --> plist_del              <--(1) delete plist node
>     --> memset(req, 0, sizeof(*req));
>   --> hba->pm_qos_enabled = false;
> 
> [Thread B]
> ufshcd_devfreq_target
>   --> ufshcd_devfreq_scale
>     --> ufshcd_scale_clks
>       --> ufshcd_pm_qos_update     <--(2) pm_qos_enabled is true
>         --> cpu_latency_qos_update_request
>           --> pm_qos_update_target
>             --> plist_del          <--(3) plist node use-after-free
> 
> This patch introduces a dedicated mutex to serialize PM QoS operations,
> preventing data races and ensuring safe access to PM QoS resources.
> Additionally, READ_ONCE() is used in the sysfs interface to ensure
> atomic read access to the pm_qos_enabled flag.
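
In code, the serialization described above amounts to roughly the
following sketch; the mutex field name, the struct subset, and the
helper bodies are assumptions drawn from the commit message, not the
actual diff:

	#include <linux/mutex.h>
	#include <linux/pm_qos.h>

	/* Relevant subset of the hba state (hypothetical layout). */
	struct hba_qos_state {
		struct pm_qos_request	pm_qos_req;
		bool			pm_qos_enabled;
		struct mutex		pm_qos_mutex;	/* serializes PM QoS ops */
	};

	static void hba_pm_qos_update(struct hba_qos_state *hba, s32 value)
	{
		mutex_lock(&hba->pm_qos_mutex);
		if (hba->pm_qos_enabled)
			cpu_latency_qos_update_request(&hba->pm_qos_req, value);
		mutex_unlock(&hba->pm_qos_mutex);
	}

	static void hba_pm_qos_exit(struct hba_qos_state *hba)
	{
		mutex_lock(&hba->pm_qos_mutex);
		if (hba->pm_qos_enabled) {
			cpu_latency_qos_remove_request(&hba->pm_qos_req);
			hba->pm_qos_enabled = false;
		}
		mutex_unlock(&hba->pm_qos_mutex);
	}

With both paths holding pm_qos_mutex, step (3) in the trace above can no
longer observe the request after step (1) has deleted and freed it.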


Hi Zhongqiu,

Introducing an additional mutex would add locking overhead on the
devfreq scaling path.
Wouldn’t it be better to simply adjust the teardown sequence to avoid
the race?
For instance,

	ufshcd_pm_qos_exit(hba);
	ufshcd_exit_clk_scaling(hba);

could be changed to

	ufshcd_exit_clk_scaling(hba);
	ufshcd_pm_qos_exit(hba);

This ensures that clock scaling is stopped before pm_qos is removed.
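
In context, the swap would look roughly like this; the enclosing
function is illustrative only, not the verbatim ufshcd teardown code:

	static void example_host_teardown(struct ufs_hba *hba)
	{
		/*
		 * Stop clock scaling first: once devfreq is torn down, no
		 * further ufshcd_pm_qos_update() calls can be issued, so
		 * step (2) of the trace above can no longer race with the
		 * removal below.
		 */
		ufshcd_exit_clk_scaling(hba);

		/* Now the CPU latency QoS request can be removed safely. */
		ufshcd_pm_qos_exit(hba);
	}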

Thanks.
Peter

