With the preempt checking logic for __this_cpu ops we will get false
positives from locations in the code that use numa_node_id().

Before the __this_cpu ops were introduced there were no preemption
checks here either: raw_smp_processor_id() was used. See
http://www.spinics.net/lists/linux-numa/msg00641.html

Therefore we need to use raw_cpu_read() here to avoid the false
positives. If the process changes nodes after retrieving the current
numa node, that is acceptable: most uses of numa_node_id() etc. are for
optimization, not for correctness. Note that this issue has been
discussed in prior years.

There were suggestions to implement a raw_numa_node_id() so that
numa_node_id() itself could keep the preemption check. But I think we
better defer that to another patch, since it would mean investigating
how numa_node_id() is used throughout the kernel, which would increase
the scope of this patchset significantly. After all, preemption was
never checked before when numa_node_id() was used.

Some sample traces:

__this_cpu_read operation in preemptible [00000000] code: login/1456
caller is __this_cpu_preempt_check+0x2b/0x2d
CPU: 0 PID: 1456 Comm: login Not tainted 3.12.0-rc4-cl-00062-g2fe80d3-dirty #185
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 000000000000013c ffff88001f31ba58 ffffffff8147cf5e ffff88001f31bfd8
 ffff88001f31ba88 ffffffff8127eea9 0000000000000000 ffff88001f3975c0
 00000000f7707000 ffff88001f3975c0 ffff88001f31bac0 ffffffff8127eeef
Call Trace:
 [] dump_stack+0x4e/0x82
 [] check_preemption_disabled+0xc5/0xe0
 [] __this_cpu_preempt_check+0x2b/0x2d
 [] ? show_stack+0x3b/0x3d
 [] get_task_policy+0x1d/0x49
 [] get_vma_policy+0x14/0x76
 [] alloc_pages_vma+0x35/0xff
 [] handle_mm_fault+0x290/0x73b
 [] __do_page_fault+0x3fe/0x44d
 [] ? trace_hardirqs_on_caller+0x142/0x19e
 [] ? trace_hardirqs_on+0xd/0xf
 [] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [] ? find_get_pages_contig+0x18e/0x18e
 [] ? find_get_pages_contig+0x18e/0x18e
 [] do_page_fault+0x9/0xc
 [] page_fault+0x22/0x30
 [] ? find_get_pages_contig+0x18e/0x18e
 [] ? find_get_pages_contig+0x18e/0x18e
 [] ? file_read_actor+0x3a/0x15a
 [] ? find_get_pages_contig+0x18e/0x18e
 [] generic_file_aio_read+0x38e/0x624
 [] do_sync_read+0x54/0x73
 [] vfs_read+0x9d/0x12a
 [] SyS_read+0x47/0x7e
 [] cstar_dispatch+0x7/0x23

caller is __this_cpu_preempt_check+0x2b/0x2d
CPU: 0 PID: 1456 Comm: login Not tainted 3.12.0-rc4-cl-00062-g2fe80d3-dirty #185
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 00000000000000e8 ffff88001f31bbf8 ffffffff8147cf5e ffff88001f31bfd8
 ffff88001f31bc28 ffffffff8127eea9 ffffffff823c5c40 00000000000213da
 0000000000000000 0000000000000000 ffff88001f31bc60 ffffffff8127eeef
Call Trace:
 [] dump_stack+0x4e/0x82
 [] check_preemption_disabled+0xc5/0xe0
 [] __this_cpu_preempt_check+0x2b/0x2d
 [] ? install_special_mapping+0x11/0xe4
 [] alloc_pages_current+0x8f/0xbc
 [] __page_cache_alloc+0xb/0xd
 [] __do_page_cache_readahead+0xf4/0x219
 [] ? __do_page_cache_readahead+0x72/0x219
 [] ra_submit+0x1c/0x20
 [] ondemand_readahead+0x28c/0x2b4
 [] page_cache_sync_readahead+0x38/0x3a
 [] generic_file_aio_read+0x261/0x624
 [] do_sync_read+0x54/0x73
 [] vfs_read+0x9d/0x12a
 [] SyS_read+0x47/0x7e
 [] cstar_dispatch+0x7/0x23

Cc: linux-mm@kvack.org
Cc: Alex Shi
Signed-off-by: Christoph Lameter

Index: linux/include/linux/topology.h
===================================================================
--- linux.orig/include/linux/topology.h	2013-12-02 16:07:51.304591590 -0600
+++ linux/include/linux/topology.h	2013-12-02 16:07:51.304591590 -0600
@@ -188,7 +188,7 @@ DECLARE_PER_CPU(int, numa_node);
 /* Returns the number of the current Node. */
 static inline int numa_node_id(void)
 {
-	return __this_cpu_read(numa_node);
+	return raw_cpu_read(numa_node);
 }
 #endif
@@ -245,7 +245,7 @@ static inline void set_numa_mem(int node
 /* Returns the number of the nearest Node with memory */
 static inline int numa_mem_id(void)
 {
-	return __this_cpu_read(_numa_mem_);
+	return raw_cpu_read(_numa_mem_);
 }
 #endif