Message-ID: <730f8cdd-e863-4b33-96b3-dcfb9cea7e1e@oss.qualcomm.com>
Date: Thu, 11 Sep 2025 14:56:29 +0800
From: Zhongqiu Han <zhongqiu.han@....qualcomm.com>
To: alim.akhtar@...sung.com, avri.altman@....com, bvanassche@....org,
        James.Bottomley@...senPartnership.com, martin.petersen@...cle.com
Cc: peter.wang@...iatek.com, tanghuan@...o.com, liu.song13@....com.cn,
        quic_nguyenb@...cinc.com, viro@...iv.linux.org.uk, huobean@...il.com,
        adrian.hunter@...el.com, can.guo@....qualcomm.com, ebiggers@...nel.org,
        neil.armstrong@...aro.org, angelogioacchino.delregno@...labora.com,
        quic_narepall@...cinc.com, quic_mnaresh@...cinc.com,
        linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
        nitin.rawat@....qualcomm.com, ziqi.chen@....qualcomm.com,
        zhongqiu.han@....qualcomm.com
Subject: Re: [PATCH v2] scsi: ufs: core: Fix data race in CPU latency PM QoS
 request handling

On 9/2/2025 3:48 PM, Zhongqiu Han wrote:
> The cpu_latency_qos_add/remove/update_request interfaces lack internal
> synchronization by design, requiring the caller to ensure thread safety.
> The current implementation relies on the pm_qos_enabled flag, which
> cannot prevent concurrent access and does not provide proper
> synchronization. This has led to data races and list corruption.
> 
> A typical race condition call trace is:
> 
> [Thread A]
> ufshcd_pm_qos_exit()
>    --> cpu_latency_qos_remove_request()
>      --> cpu_latency_qos_apply();
>        --> pm_qos_update_target()
>          --> plist_del              <--(1) delete plist node
>      --> memset(req, 0, sizeof(*req));
>    --> hba->pm_qos_enabled = false;
> 
> [Thread B]
> ufshcd_devfreq_target()
>    --> ufshcd_devfreq_scale()
>      --> ufshcd_scale_clks()
>        --> ufshcd_pm_qos_update     <--(2) pm_qos_enabled is true
>          --> cpu_latency_qos_update_request
>            --> pm_qos_update_target
>              --> plist_del          <--(3) plist node use-after-free
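
To make that window concrete, here is the same check-then-use pattern
reduced to a minimal, self-contained sketch outside the UFS code; the
my_dev/my_qos_* names are illustrative, not the driver's symbols:

#include <linux/pm_qos.h>
#include <linux/types.h>

struct my_dev {
        struct pm_qos_request qos_req;
        bool qos_enabled;
};

/* "Thread A" in the trace above */
static void my_qos_disable(struct my_dev *d)
{
        if (!d->qos_enabled)
                return;
        /* (1) removes the plist node and clears the request */
        cpu_latency_qos_remove_request(&d->qos_req);
        d->qos_enabled = false;         /* flag is cleared only afterwards */
}

/* "Thread B" in the trace above */
static void my_qos_update(struct my_dev *d, bool on)
{
        if (!d->qos_enabled)            /* (2) may still observe true here */
                return;
        /* (3) then touches the request that (1) already tore down */
        cpu_latency_qos_update_request(&d->qos_req,
                                       on ? 0 : PM_QOS_DEFAULT_VALUE);
}

Nothing orders the flag check in my_qos_update() against the teardown
in my_qos_disable(), which is exactly the window hit at (2)/(3).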
> 
> This patch introduces a dedicated mutex to serialize PM QoS operations,
> preventing data races and ensuring safe access to PM QoS resources.
> Additionally, READ_ONCE() is used in the sysfs interface to ensure an
> atomic read of the pm_qos_enabled flag.
> 
> Fixes: 2777e73fc154 ("scsi: ufs: core: Add CPU latency QoS support for UFS driver")
> Signed-off-by: Zhongqiu Han <zhongqiu.han@....qualcomm.com>
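
For reference, the serialization described above, reduced to a
self-contained sketch with the same illustrative my_dev/my_qos_* names
(the real change is in the diff below):

#include <linux/compiler.h>
#include <linux/mutex.h>
#include <linux/pm_qos.h>

struct my_dev {
        struct pm_qos_request qos_req;
        bool qos_enabled;
        struct mutex qos_mutex;         /* serializes qos_req and qos_enabled */
};

static void my_qos_update(struct my_dev *d, bool on)
{
        mutex_lock(&d->qos_mutex);
        if (d->qos_enabled)
                cpu_latency_qos_update_request(&d->qos_req,
                                               on ? 0 : PM_QOS_DEFAULT_VALUE);
        mutex_unlock(&d->qos_mutex);
}

/*
 * Lockless reader, e.g. a sysfs show path: READ_ONCE() avoids a torn or
 * compiler-cached read of the flag, whose writers all hold qos_mutex.
 */
static bool my_qos_is_enabled(struct my_dev *d)
{
        return READ_ONCE(d->qos_enabled);
}

Both the flag and the request are only ever written under the same
mutex, which closes the check-then-use window shown in the trace, while
the sysfs read side stays lockless.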

Hi Martin K. Petersen,

Just a gentle ping on this patch. I would appreciate any feedback when
you have time. Thanks!

> ---
> v1 -> v2:
> - Fix misleading indentation by adding braces to if statements in pm_qos logic.
> - Resolve checkpatch strict mode warning by adding an inline comment for pm_qos_mutex.
> - Link to v1: https://lore.kernel.org/all/20250901085117.86160-1-zhongqiu.han@oss.qualcomm.com/
> 
>   drivers/ufs/core/ufs-sysfs.c |  2 +-
>   drivers/ufs/core/ufshcd.c    | 25 ++++++++++++++++++++++---
>   include/ufs/ufshcd.h         |  3 +++
>   3 files changed, 26 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
> index 4bd7d491e3c5..8f7975010513 100644
> --- a/drivers/ufs/core/ufs-sysfs.c
> +++ b/drivers/ufs/core/ufs-sysfs.c
> @@ -512,7 +512,7 @@ static ssize_t pm_qos_enable_show(struct device *dev,
>   {
>   	struct ufs_hba *hba = dev_get_drvdata(dev);
>   
> -	return sysfs_emit(buf, "%d\n", hba->pm_qos_enabled);
> +	return sysfs_emit(buf, "%d\n", READ_ONCE(hba->pm_qos_enabled));
>   }
>   
>   /**
> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> index 926650412eaa..98b9ce583386 100644
> --- a/drivers/ufs/core/ufshcd.c
> +++ b/drivers/ufs/core/ufshcd.c
> @@ -1047,14 +1047,19 @@ EXPORT_SYMBOL_GPL(ufshcd_is_hba_active);
>    */
>   void ufshcd_pm_qos_init(struct ufs_hba *hba)
>   {
> +	mutex_lock(&hba->pm_qos_mutex);
>   
> -	if (hba->pm_qos_enabled)
> +	if (hba->pm_qos_enabled) {
> +		mutex_unlock(&hba->pm_qos_mutex);
>   		return;
> +	}
>   
>   	cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
>   
>   	if (cpu_latency_qos_request_active(&hba->pm_qos_req))
>   		hba->pm_qos_enabled = true;
> +
> +	mutex_unlock(&hba->pm_qos_mutex);
>   }
>   
>   /**
> @@ -1063,11 +1068,16 @@ void ufshcd_pm_qos_init(struct ufs_hba *hba)
>    */
>   void ufshcd_pm_qos_exit(struct ufs_hba *hba)
>   {
> -	if (!hba->pm_qos_enabled)
> +	mutex_lock(&hba->pm_qos_mutex);
> +
> +	if (!hba->pm_qos_enabled) {
> +		mutex_unlock(&hba->pm_qos_mutex);
>   		return;
> +	}
>   
>   	cpu_latency_qos_remove_request(&hba->pm_qos_req);
>   	hba->pm_qos_enabled = false;
> +	mutex_unlock(&hba->pm_qos_mutex);
>   }
>   
>   /**
> @@ -1077,10 +1087,15 @@ void ufshcd_pm_qos_exit(struct ufs_hba *hba)
>    */
>   static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
>   {
> -	if (!hba->pm_qos_enabled)
> +	mutex_lock(&hba->pm_qos_mutex);
> +
> +	if (!hba->pm_qos_enabled) {
> +		mutex_unlock(&hba->pm_qos_mutex);
>   		return;
> +	}
>   
>   	cpu_latency_qos_update_request(&hba->pm_qos_req, on ? 0 : PM_QOS_DEFAULT_VALUE);
> +	mutex_unlock(&hba->pm_qos_mutex);
>   }
>   
>   /**
> @@ -10764,6 +10779,10 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
>   	mutex_init(&hba->ee_ctrl_mutex);
>   
>   	mutex_init(&hba->wb_mutex);
> +
> +	/* Initialize mutex for PM QoS request synchronization */
> +	mutex_init(&hba->pm_qos_mutex);
> +
>   	init_rwsem(&hba->clk_scaling_lock);
>   
>   	ufshcd_init_clk_gating(hba);
> diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
> index 30ff169878dc..a16f857a052f 100644
> --- a/include/ufs/ufshcd.h
> +++ b/include/ufs/ufshcd.h
> @@ -962,6 +962,7 @@ enum ufshcd_mcq_opr {
>    * @ufs_rtc_update_work: A work for UFS RTC periodic update
>    * @pm_qos_req: PM QoS request handle
>    * @pm_qos_enabled: flag to check if pm qos is enabled
> + * @pm_qos_mutex: synchronizes PM QoS request and status updates
>    * @critical_health_count: count of critical health exceptions
>    * @dev_lvl_exception_count: count of device level exceptions since last reset
>    * @dev_lvl_exception_id: vendor specific information about the
> @@ -1135,6 +1136,8 @@ struct ufs_hba {
>   	struct delayed_work ufs_rtc_update_work;
>   	struct pm_qos_request pm_qos_req;
>   	bool pm_qos_enabled;
> +	/* synchronizes PM QoS request and status updates */
> +	struct mutex pm_qos_mutex;
>   
>   	int critical_health_count;
>   	atomic_t dev_lvl_exception_count;


-- 
Thx and BRs,
Zhongqiu Han
