Message-ID: <Z2A6VEMN5jkXg2lr@uudg.org>
Date: Mon, 16 Dec 2024 11:33:56 -0300
From: "Luis Claudio R. Goncalves" <lgoncalv@...hat.com>
To: Ryo Takakura <ryotkkr98@...il.com>
Cc: nirmal.patel@...ux.intel.com, jonathan.derrick@...ux.dev,
	lpieralisi@...nel.org, kw@...ux.com,
	manivannan.sadhasivam@...aro.org, robh@...nel.org,
	bhelgaas@...gle.com, linux-pci@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH] PCI: vmd: Fix spinlock usage on config access for RT
 kernel

On Sun, Dec 15, 2024 at 11:13:21PM +0900, Ryo Takakura wrote:
> PCI config access is protected by pci_lock, which is a raw_spinlock.
> Convert cfg_lock to a raw_spinlock as well so that the lock usage is
> consistent on the RT kernel.
> 
> Reported as follows:
> [   18.756807] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
> [   18.756810] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1617, name: nodedev-init
> [   18.756810] preempt_count: 1, expected: 0
> [   18.756811] RCU nest depth: 0, expected: 0
> [   18.756811] INFO: lockdep is turned off.
> [   18.756812] irq event stamp: 0
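
For context, the splat above happens because on PREEMPT_RT a spinlock_t is a
sleeping lock, while the caller already holds the raw pci_lock with interrupts
disabled by the time it reaches the VMD config ops. Roughly (a simplified
sketch of the call path, not the exact upstream code):

	/* drivers/pci/access.c (simplified) */
	DEFINE_RAW_SPINLOCK(pci_lock);

	int pci_user_read_config_byte(struct pci_dev *dev, int pos, u8 *val)
	{
		u32 data = -1;
		int ret;

		raw_spin_lock_irq(&pci_lock);	/* atomic context from here on */
		ret = dev->bus->ops->read(dev->bus, dev->devfn, pos, 1, &data);
		raw_spin_unlock_irq(&pci_lock);
		*val = (u8)data;
		return ret;
	}

	/* drivers/pci/controller/vmd.c, before this patch */
	static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
				int len, u32 *value)
	{
		...
		/* sleeping lock on PREEMPT_RT, taken in atomic context -> BUG */
		spin_lock_irqsave(&vmd->cfg_lock, flags);
		...
	}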

This problem has been discussed in the past at:

    [PATCH v6.4-rc5-rt4] rt: vmd: make cfg_lock a raw spinlock
    https://lore.kernel.org/linux-rt-users/20230608210232.056e731f@gandalf.local.home/T/#t

And there was a suggestion to remove the lock altogether as (at the time) the
only two call sites of vmd_pci_read() were already protected by pci_lock.

I wrote the patch removing the lock and it worked as expected, but I never
sent it to the list. If you think that still makes sense for the current
code, please have fun removing the lock. :)
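
Roughly, removing the lock means dropping the cfg_lock member and the
lock/unlock calls and relying on the callers' pci_lock for serialization.
As a sketch only (untested against the current tree, using the driver's
vmd_from_bus()/vmd_cfg_addr() helpers), vmd_pci_read() would become
something like:

	static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
				int len, u32 *value)
	{
		struct vmd_dev *vmd = vmd_from_bus(bus);
		void __iomem *addr = vmd_cfg_addr(vmd, bus, devfn, reg, len);

		if (!addr)
			return -EFAULT;

		/*
		 * Callers (pci_user_read_config_*() etc.) already hold the raw
		 * pci_lock, so the access itself needs no extra locking here.
		 */
		switch (len) {
		case 1:
			*value = readb(addr);
			return 0;
		case 2:
			*value = readw(addr);
			return 0;
		case 4:
			*value = readl(addr);
			return 0;
		default:
			return -EINVAL;
		}
	}

vmd_pci_write() would lose its locking the same way, and the
spin_lock_init() call in vmd_probe() would go away with the member.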

Best regards,
Luis

> [   18.756812] hardirqs last  enabled at (0): [<0000000000000000>] 0x0
> [   18.756815] hardirqs last disabled at (0): [<ffffffff864f1fe2>] copy_process+0xa62/0x2d90
> [   18.756819] softirqs last  enabled at (0): [<ffffffff864f1fe2>] copy_process+0xa62/0x2d90
> [   18.756820] softirqs last disabled at (0): [<0000000000000000>] 0x0
> [   18.756822] CPU: 3 UID: 0 PID: 1617 Comm: nodedev-init Tainted: G        W          6.12.1 #11
> [   18.756823] Tainted: [W]=WARN
> [   18.756824] Hardware name: Dell Inc. Vostro 3710/0K1D6X, BIOS 1.14.0 06/09/2023
> [   18.756825] Call Trace:
> [   18.756826]  <TASK>
> [   18.756827]  dump_stack_lvl+0x9b/0xf0
> [   18.756830]  dump_stack+0x10/0x20
> [   18.756831]  __might_resched+0x158/0x230
> [   18.756833]  rt_spin_lock+0x4e/0x130
> [   18.756837]  ? vmd_pci_read+0x8d/0x100 [vmd]
> [   18.756839]  vmd_pci_read+0x8d/0x100 [vmd]
> [   18.756840]  pci_user_read_config_byte+0x6f/0xe0
> [   18.756843]  pci_read_config+0xfe/0x290
> [   18.756845]  sysfs_kf_bin_read+0x68/0x90
> [   18.756847]  kernfs_fop_read_iter+0xd7/0x200
> [   18.756850]  vfs_read+0x26d/0x360
> [   18.756853]  ksys_read+0x70/0xf0
> [   18.756855]  __x64_sys_read+0x1a/0x20
> [   18.756857]  x64_sys_call+0x1715/0x20d0
> [   18.756859]  do_syscall_64+0x8f/0x170
> [   18.756861]  ? syscall_exit_to_user_mode+0xcd/0x2c0
> [   18.756863]  ? do_syscall_64+0x9b/0x170
> [   18.756865]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> 
> Signed-off-by: Ryo Takakura <ryotkkr98@...il.com>
> ---
>  drivers/pci/controller/vmd.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 9d9596947..94ceec50a 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -125,7 +125,7 @@ struct vmd_irq_list {
>  struct vmd_dev {
>  	struct pci_dev		*dev;
>  
> -	spinlock_t		cfg_lock;
> +	raw_spinlock_t		cfg_lock;
>  	void __iomem		*cfgbar;
>  
>  	int msix_count;
> @@ -391,7 +391,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
>  	if (!addr)
>  		return -EFAULT;
>  
> -	spin_lock_irqsave(&vmd->cfg_lock, flags);
> +	raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
>  	switch (len) {
>  	case 1:
>  		*value = readb(addr);
> @@ -406,7 +406,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
>  		ret = -EINVAL;
>  		break;
>  	}
> -	spin_unlock_irqrestore(&vmd->cfg_lock, flags);
> +	raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
>  	return ret;
>  }
>  
> @@ -426,7 +426,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
>  	if (!addr)
>  		return -EFAULT;
>  
> -	spin_lock_irqsave(&vmd->cfg_lock, flags);
> +	raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
>  	switch (len) {
>  	case 1:
>  		writeb(value, addr);
> @@ -444,7 +444,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
>  		ret = -EINVAL;
>  		break;
>  	}
> -	spin_unlock_irqrestore(&vmd->cfg_lock, flags);
> +	raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
>  	return ret;
>  }
>  
> @@ -1009,7 +1009,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  	if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
>  		vmd->first_vec = 1;
>  
> -	spin_lock_init(&vmd->cfg_lock);
> +	raw_spin_lock_init(&vmd->cfg_lock);
>  	pci_set_drvdata(dev, vmd);
>  	err = vmd_enable_domain(vmd, features);
>  	if (err)
> -- 
> 2.34.1
> 
> 
---end quoted text---

