Date:	Wed, 31 Dec 2008 23:54:56 +1030
From:	Rusty Russell <rusty@...tcorp.com.au>
To:	Jesse Barnes <jbarnes@...tuousgeek.org>
Cc:	linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
	Mike Travis <travis@....com>
Subject: Re: [PATCH 1/1] work_on_cpu: use in drivers/pci/pci-driver.c

On Wednesday 10 December 2008 09:22:48 Jesse Barnes wrote:
> On Monday, December 01, 2008 12:20 am Rusty Russell wrote:
> > This uses work_on_cpu() rather than altering the cpumask of the
> > thread we happen to be running in.
> 
> Nice, I applied this to my linux-next branch to get some build/test coverage.  
> Please cc linux-pci in the future though.

Hi Jesse,

   I don't see this in linux-next, nor in Linus' tree.  Here it is again...

Subject: work_on_cpu: use in drivers/pci/pci-driver.c

This uses work_on_cpu() rather than altering the cpumask of the
thread we happen to be running in.
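
For readers not familiar with the interface, here is a minimal sketch of
the calling pattern (illustrative names only, not part of this patch,
assuming work_on_cpu() has the signature
long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)):

#include <linux/workqueue.h>	/* work_on_cpu() */
#include <linux/cpu.h>		/* get_online_cpus()/put_online_cpus() */
#include <linux/cpumask.h>	/* cpu_online() */

struct probe_args {		/* bundle everything the callback needs */
	void *dev;
	int id;
};

/* Runs in workqueue context on the chosen CPU; its return value is
 * handed back to the work_on_cpu() caller. */
static long do_probe(void *_args)
{
	struct probe_args *args = _args;

	/* node-local allocations, device setup, ... */
	return 0;
}

static long probe_on(unsigned int cpu, struct probe_args *args)
{
	long ret;

	get_online_cpus();	/* keep 'cpu' from being unplugged under us */
	if (cpu_online(cpu))
		ret = work_on_cpu(cpu, do_probe, args);
	else
		ret = do_probe(args);	/* no such CPU online: run locally */
	put_online_cpus();
	return ret;
}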

Note the cleanups:

1) I've removed the CONFIG_NUMA test, since dev_to_node() returns -1
   for !CONFIG_NUMA anyway and the compiler will eliminate the dead
   branch (see the sketch after this list).

2) There's no need to reset the mempolicy to the default (a bad idea
   anyway), since the probe is now run from a workqueue thread via
   work_on_cpu().
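
For reference on (1), the !CONFIG_NUMA stub in <linux/device.h> is
roughly the following, so after inlining the "if (node >= 0)" test in
the new pci_call_probe() is statically false and the compiler drops the
whole branch, with no #ifdef needed:

/* Roughly what <linux/device.h> provides when CONFIG_NUMA is not set: */
static inline int dev_to_node(struct device *dev)
{
	return -1;		/* no node information without NUMA */
}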

Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
Cc: Jesse Barnes <jbarnes@...tuousgeek.org>
---
 drivers/pci/pci-driver.c |   52 ++++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -16,6 +16,7 @@
 #include <linux/string.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
+#include <linux/cpu.h>
 #include "pci.h"
 
 /*
@@ -183,32 +184,43 @@ static const struct pci_device_id *pci_m
 	return pci_match_id(drv->id_table, dev);
 }
 
+struct drv_dev_and_id {
+	struct pci_driver *drv;
+	struct pci_dev *dev;
+	const struct pci_device_id *id;
+};
+
+static long local_pci_probe(void *_ddi)
+{
+	struct drv_dev_and_id *ddi = _ddi;
+
+	return ddi->drv->probe(ddi->dev, ddi->id);
+}
+
 static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 			  const struct pci_device_id *id)
 {
-	int error;
-#ifdef CONFIG_NUMA
-	/* Execute driver initialization on node where the
-	   device's bus is attached to.  This way the driver likely
-	   allocates its local memory on the right node without
-	   any need to change it. */
-	struct mempolicy *oldpol;
-	cpumask_t oldmask = current->cpus_allowed;
-	int node = dev_to_node(&dev->dev);
+	int error, node;
+	struct drv_dev_and_id ddi = { drv, dev, id };
 
+	/* Execute driver initialization on node where the device's
+	   bus is attached to.  This way the driver likely allocates
+	   its local memory on the right node without any need to
+	   change it. */
+	node = dev_to_node(&dev->dev);
 	if (node >= 0) {
+		int cpu;
 		node_to_cpumask_ptr(nodecpumask, node);
-		set_cpus_allowed_ptr(current, nodecpumask);
-	}
-	/* And set default memory allocation policy */
-	oldpol = current->mempolicy;
-	current->mempolicy = NULL;	/* fall back to system default policy */
-#endif
-	error = drv->probe(dev, id);
-#ifdef CONFIG_NUMA
-	set_cpus_allowed_ptr(current, &oldmask);
-	current->mempolicy = oldpol;
-#endif
+
+		get_online_cpus();
+		cpu = cpumask_any_and(nodecpumask, cpu_online_mask);
+		if (cpu < nr_cpu_ids)
+			error = work_on_cpu(cpu, local_pci_probe, &ddi);
+		else
+			error = local_pci_probe(&ddi);
+		put_online_cpus();
+	} else
+		error = local_pci_probe(&ddi);
 	return error;
 }
 
