lists.openwall.net — Open Source and information security mailing list archives
Date: Tue, 23 Oct 2012 19:42:03 +0200
From: Bart Van Assche <bvanassche@....org>
To: Jeff Moyer <jmoyer@...hat.com>
CC: axboe@...nel.dk, linux-kernel@...r.kernel.org,
	SCSI Mailing List <linux-scsi@...r.kernel.org>
Subject: Re: [patch/rfc/rft] sd: allocate request_queue on device's local numa node

On 10/23/12 18:52, Jeff Moyer wrote:
> Bart Van Assche <bvanassche@....org> writes:
>> Please keep in mind that a single PCIe bus may have a minimal distance
>> to more than one NUMA node. See e.g. the diagram at the top of page 8 in
>> http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03261871/c03261871.pdf
>> for a system diagram of a NUMA system where each PCIe bus has a minimal
>> distance to two different NUMA nodes.
>
> That's an interesting configuration. I wonder what the numa_node sysfs
> file contains for such systems--do you know? I'm not sure how we could
> allow this to be user-controlled at probe time. Did you have a specific
> mechanism in mind? Module parameters? Something else?

As far as I can see in drivers/pci/pci-sysfs.c, the numa_node sysfs
attribute contains a single number, even for a topology like the one
described above.

With regard to user control of the NUMA node: I'm not sure how to solve
this in general. But for the ib_srp driver this should be easy to do:
SCSI host creation is triggered by writing a login string to a sysfs
attribute ("add_target"). It wouldn't take much time to add a parameter
to that login string that specifies the NUMA node.

Bart.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/