Date: Tue, 28 Dec 2010 12:35:45 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Tejun Heo <tj@...nel.org>
cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
	tglx@...utronix.de, "H. Peter Anvin" <hpa@...or.com>,
	x86@...nel.org, eric.dumazet@...il.com, yinghai@...nel.org,
	brgerst@...il.com, gorcunov@...il.com,
	Pekka Enberg <penberg@...nel.org>, shaohui.zheng@...el.com
Subject: Re: [PATCH 13/16] x86: Unify cpu/apicid <-> NUMA node mapping
 between 32 and 64bit

On Tue, 28 Dec 2010, Tejun Heo wrote:

> diff --git a/arch/x86/mm/srat_64.c b/arch/x86/mm/srat_64.c
> index a35cb9d..1af9c6e 100644
> --- a/arch/x86/mm/srat_64.c
> +++ b/arch/x86/mm/srat_64.c
> @@ -79,7 +79,7 @@ static __init void bad_srat(void)
>  	printk(KERN_ERR "SRAT: SRAT not used.\n");
>  	acpi_numa = -1;
>  	for (i = 0; i < MAX_LOCAL_APIC; i++)
> -		apicid_to_node[i] = NUMA_NO_NODE;
> +		set_apicid_to_node(i, NUMA_NO_NODE);
>  	for (i = 0; i < MAX_NUMNODES; i++) {
>  		nodes[i].start = nodes[i].end = 0;
>  		nodes_add[i].start = nodes_add[i].end = 0;
> @@ -134,7 +134,7 @@ acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa)
>  	}
> 
>  	apic_id = pa->apic_id;
> -	apicid_to_node[apic_id] = node;
> +	set_apicid_to_node(apic_id, node);
>  	node_set(node, cpu_nodes_parsed);
>  	acpi_numa = 1;
>  	printk(KERN_INFO "SRAT: PXM %u -> APIC 0x%04x -> Node %u\n",
> @@ -168,7 +168,7 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
>  		apic_id = (pa->apic_id << 8) | pa->local_sapic_eid;
>  	else
>  		apic_id = pa->apic_id;
> -	apicid_to_node[apic_id] = node;
> +	set_apicid_to_node(apic_id, node);
>  	node_set(node, cpu_nodes_parsed);
>  	acpi_numa = 1;
>  	printk(KERN_INFO "SRAT: PXM %u -> APIC 0x%02x -> Node %u\n",
> @@ -512,13 +512,13 @@ void __init acpi_fake_nodes(const struct bootnode *fake_nodes, int num_nodes)
>  		 * node, it must now point to the fake node ID.
>  		 */
>  		for (j = 0; j < MAX_LOCAL_APIC; j++)
> -			if (apicid_to_node[j] == nid &&
> +			if (__apicid_to_node[j] == nid &&
>  			    fake_apicid_to_node[j] == NUMA_NO_NODE)
>  				fake_apicid_to_node[j] = i;
>  	}
>  	for (i = 0; i < num_nodes; i++)
>  		__acpi_map_pxm_to_node(fake_node_to_pxm_map[i], i);
> -	memcpy(apicid_to_node, fake_apicid_to_node, sizeof(apicid_to_node));
> +	memcpy(__apicid_to_node, fake_apicid_to_node, sizeof(__apicid_to_node));
> 
>  	nodes_clear(nodes_parsed);
>  	for (i = 0; i < num_nodes; i++)

This is going to conflict with a387e95a ("") in x86/numa, so you'll need
the following hunk for acpi_fake_nodes().  I'm not sure why this patchset
is being based on x86/apic-cleanup rather than x86/numa?

diff --git a/arch/x86/mm/srat_64.c b/arch/x86/mm/srat_64.c
--- a/arch/x86/mm/srat_64.c
+++ b/arch/x86/mm/srat_64.c
@@ -511,7 +511,7 @@ void __init acpi_fake_nodes(const struct bootnode *fake_nodes, int num_nodes)
 		 * node, it must now point to the fake node ID.
 		 */
 		for (j = 0; j < MAX_LOCAL_APIC; j++)
-			if (apicid_to_node[j] == nid &&
+			if (__apicid_to_node[j] == nid &&
 			    fake_apicid_to_node[j] == NUMA_NO_NODE)
 				fake_apicid_to_node[j] = i;
 	}
@@ -522,13 +522,13 @@ void __init acpi_fake_nodes(const struct bootnode *fake_nodes, int num_nodes)
 	 * value.
 	 */
 	for (i = 0; i < MAX_LOCAL_APIC; i++)
-		if (apicid_to_node[i] != NUMA_NO_NODE &&
+		if (__apicid_to_node[i] != NUMA_NO_NODE &&
 		    fake_apicid_to_node[i] == NUMA_NO_NODE)
 			fake_apicid_to_node[i] = 0;
 	for (i = 0; i < num_nodes; i++)
 		__acpi_map_pxm_to_node(fake_node_to_pxm_map[i], i);
-	memcpy(apicid_to_node, fake_apicid_to_node, sizeof(apicid_to_node));
+	memcpy(__apicid_to_node, fake_apicid_to_node, sizeof(__apicid_to_node));

 	nodes_clear(nodes_parsed);
 	for (i = 0; i < num_nodes; i++)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/