Message-ID: <242b1d10-2c11-4bb3-8f77-c939ecb5f1a0@acm.org>
Date: Mon, 28 Oct 2024 13:04:47 -0700
From: Bart Van Assche <bvanassche@....org>
To: Avri Altman <avri.altman@....com>,
 "Martin K . Petersen" <martin.petersen@...cle.com>
Cc: linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] scsi: ufs: core: Introduce a new clock_gating lock

On 10/27/24 1:25 AM, Avri Altman wrote:
> Introduce a new clock gating lock to seriliaze access to the clock
                                        ^^^^^^^^^
                                        serialize

> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> index 099373a25017..b7c7a7dd327f 100644
> --- a/drivers/ufs/core/ufshcd.c
> +++ b/drivers/ufs/core/ufshcd.c
> @@ -1817,13 +1817,13 @@ static void ufshcd_ungate_work(struct work_struct *work)
>   
>   	cancel_delayed_work_sync(&hba->clk_gating.gate_work);
>   
> -	spin_lock_irqsave(hba->host->host_lock, flags);
> +	spin_lock_irqsave(&hba->clk_gating.lock, flags);
>   	if (hba->clk_gating.state == CLKS_ON) {
> -		spin_unlock_irqrestore(hba->host->host_lock, flags);
> +		spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
>   		return;
>   	}
>   
> -	spin_unlock_irqrestore(hba->host->host_lock, flags);
> +	spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
>   	ufshcd_hba_vreg_set_hpm(hba);
>   	ufshcd_setup_clocks(hba, true);

This would be a great opportunity to replace the spinlock calls with
scoped_guard(), wouldn't it?
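
Something along these lines perhaps (untested, based only on the
quoted hunk; scoped_guard() from <linux/cleanup.h> releases the lock
on every exit path, including the early return):

	cancel_delayed_work_sync(&hba->clk_gating.gate_work);

	scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
		if (hba->clk_gating.state == CLKS_ON)
			return;
	}

	ufshcd_hba_vreg_set_hpm(hba);
	ufshcd_setup_clocks(hba, true);

With this pattern the 'flags' local variable is no longer needed.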

> @@ -1928,7 +1928,7 @@ static void ufshcd_gate_work(struct work_struct *work)
>   	unsigned long flags;
>   	int ret;
>   
> -	spin_lock_irqsave(hba->host->host_lock, flags);
> +	spin_lock_irqsave(&hba->clk_gating.lock, flags);
>   	/*
>   	 * In case you are here to cancel this work the gating state
>   	 * would be marked as REQ_CLKS_ON. In this case save time by
> @@ -1946,7 +1946,7 @@ static void ufshcd_gate_work(struct work_struct *work)
>   	if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
>   		goto rel_lock;
>   
> -	spin_unlock_irqrestore(hba->host->host_lock, flags);
> +	spin_unlock_irqrestore(&hba->clk_gating.lock, flags);

Same comment here: please consider using scoped_guard().
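
E.g. something like this (a rough, untested sketch that only covers
the code visible in the quoted hunk; a return from inside the scope
releases the lock, so the goto to rel_lock becomes unnecessary for
this path):

	scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
		/* earlier checks elided in the quoted hunk */
		if (ufshcd_is_ufs_dev_busy(hba) ||
		    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
			return;
	}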

>   	/* put the link into hibern8 mode before turning off clocks */
>   	if (ufshcd_can_hibern8_during_gating(hba)) {
> @@ -1977,14 +1977,14 @@ static void ufshcd_gate_work(struct work_struct *work)
>   	 * prevent from doing cancel work multiple times when there are
>   	 * new requests arriving before the current cancel work is done.
>   	 */
> -	spin_lock_irqsave(hba->host->host_lock, flags);
> +	spin_lock_irqsave(&hba->clk_gating.lock, flags);
>   	if (hba->clk_gating.state == REQ_CLKS_OFF) {
>   		hba->clk_gating.state = CLKS_OFF;
>   		trace_ufshcd_clk_gating(dev_name(hba->dev),
>   					hba->clk_gating.state);
>   	}
>   rel_lock:
> -	spin_unlock_irqrestore(hba->host->host_lock, flags);
> +	spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
>   out:
>   	return;
>   }

ufshcd_gate_work() can be simplified by using guard() and
scoped_guard().
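
E.g. the tail of the function could become something like this
(untested; the rel_lock and out labels and the 'flags' variable would
then go away):

	scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
		if (hba->clk_gating.state == REQ_CLKS_OFF) {
			hba->clk_gating.state = CLKS_OFF;
			trace_ufshcd_clk_gating(dev_name(hba->dev),
						hba->clk_gating.state);
		}
	}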

> @@ -2015,9 +2015,9 @@ void ufshcd_release(struct ufs_hba *hba)
>   {
>   	unsigned long flags;
>   
> -	spin_lock_irqsave(hba->host->host_lock, flags);
> +	spin_lock_irqsave(&hba->clk_gating.lock, flags);
>   	__ufshcd_release(hba);
> -	spin_unlock_irqrestore(hba->host->host_lock, flags);
> +	spin_unlock_irqrestore(&hba->clk_gating.lock, flags);

For this function and also for later changes, please use guard().
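
E.g. (untested sketch):

	void ufshcd_release(struct ufs_hba *hba)
	{
		guard(spinlock_irqsave)(&hba->clk_gating.lock);
		__ufshcd_release(hba);
	}

guard() drops the lock when the function returns, so the 'flags'
variable can be removed as well.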

> diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
> index 9ea2a7411bb5..52c822fe2944 100644
> --- a/include/ufs/ufshcd.h
> +++ b/include/ufs/ufshcd.h
> @@ -413,6 +413,7 @@ enum clk_gating_state {
>    * @active_reqs: number of requests that are pending and should be waited for
>    * completion before gating clocks.
>    * @clk_gating_workq: workqueue for clock gating work.
> + * @lock: serielize access to the clk_gating members
              ^^^^^^^^^
              serialize

I don't think the added comment is correct: 'lock' serializes access
to some struct ufs_clk_gating members, but not to all of them.
Accesses to e.g. gate_work, ungate_work and clk_gating_workq are not
serialized. Please reorder the struct ufs_clk_gating members as
follows:
- Members that are not serialized first.
- Next, 'lock'.
- Finally, the members serialized by 'lock'.

I think it is common in Linux kernel code to organize structure
members this way.
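
A rough sketch of the intended layout, limited to the members
mentioned above (member types approximated from the quoted context):

	struct ufs_clk_gating {
		/* Not serialized by 'lock'. */
		struct delayed_work gate_work;
		struct work_struct ungate_work;
		struct workqueue_struct *clk_gating_workq;

		/* Serializes access to the members below. */
		spinlock_t lock;

		enum clk_gating_state state;
		int active_reqs;
		/* remaining serialized members ... */
	};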

Thanks,

Bart.
