Message-ID: <4C1A5CD4.3020802@kernel.org>
Date: Thu, 17 Jun 2010 19:35:16 +0200
From: Tejun Heo <tj@...nel.org>
To: Cliff Wickman <cpw@....com>
CC: linux-kernel@...r.kernel.org
Subject: Re: per_cpu_ptr_to_phys() failure on UV x86_64
On 06/17/2010 07:08 PM, Tejun Heo wrote:
> (scratching head...) So, that means it's given an address for which
> !pcpu_addr_in_first_chunk() but outside of vmalloc area. Strange.
> I'll find out what's going on.

Does the following patch work? The original patch assumed that @addr
would be an address in the base cpu's unit, which isn't true. I only
compile-tested the patch, so it might be broken (sorry, I gotta go
somewhere now), but this should be the right direction.
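
(Illustration only, not part of the patch: a minimal user-space sketch of
the first-chunk membership test, with made-up stand-ins for pcpu_base_addr,
pcpu_unit_size and the unit layout -- in this toy model the per-cpu units
are simply adjacent -- showing why a check against only the base cpu's unit
misses addresses that belong to other cpus' units.)

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS_SIM	4			/* stand-in for the possible cpus */
#define UNIT_SIZE	0x1000UL		/* stand-in for pcpu_unit_size */

static char first_chunk[NR_CPUS_SIM * UNIT_SIZE];	/* stand-in for the first chunk */

/* start of a given cpu's unit; adjacent units in this toy model */
static char *unit_start(unsigned int cpu)
{
	return first_chunk + cpu * UNIT_SIZE;
}

/* old behaviour: only the base cpu's unit counts as "first chunk" */
static bool in_first_chunk_buggy(void *addr)
{
	char *start = unit_start(0);

	return (char *)addr >= start && (char *)addr < start + UNIT_SIZE;
}

/* patched behaviour: walk every possible cpu's unit, as the patch below
 * does with for_each_possible_cpu() and per_cpu_ptr() */
static bool in_first_chunk_fixed(void *addr)
{
	unsigned int cpu;

	for (cpu = 0; cpu < NR_CPUS_SIM; cpu++) {
		char *start = unit_start(cpu);

		if ((char *)addr >= start && (char *)addr < start + UNIT_SIZE)
			return true;
	}
	return false;
}

int main(void)
{
	void *addr = unit_start(2);	/* a static percpu address for cpu 2 */

	/* prints "buggy: 0, fixed: 1" */
	printf("buggy: %d, fixed: %d\n",
	       in_first_chunk_buggy(addr), in_first_chunk_fixed(addr));
	return 0;
}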
Thanks.
diff --git a/mm/percpu.c b/mm/percpu.c
index 46485e1..8956155 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -978,14 +978,23 @@ bool is_kernel_percpu_address(unsigned long addr)
  */
 phys_addr_t per_cpu_ptr_to_phys(void *addr)
 {
-	if (pcpu_addr_in_first_chunk(addr)) {
-		if ((unsigned long)addr < VMALLOC_START ||
-		    (unsigned long)addr >= VMALLOC_END)
-			return __pa(addr);
-		else
-			return page_to_phys(vmalloc_to_page(addr));
-	} else
-		return page_to_phys(pcpu_addr_to_page(addr));
+	void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr);
+	unsigned int cpu;
+
+	for_each_possible_cpu(cpu) {
+		void *start = per_cpu_ptr(base, cpu);
+
+		if (addr >= start && addr < start + pcpu_unit_size) {
+			/* in the first chunk */
+			if ((unsigned long)addr < VMALLOC_START ||
+			    (unsigned long)addr >= VMALLOC_END)
+				return __pa(addr);
+			else
+				return page_to_phys(vmalloc_to_page(addr));
+		}
+	}
+	/* in one of the other chunks */
+	return page_to_phys(pcpu_addr_to_page(addr));
 }
 
 static inline size_t pcpu_calc_fc_sizes(size_t static_size,
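
(Also just for illustration, not from the patch: a toy model of the two
translation paths the function takes for first-chunk addresses. In the
kernel the linear-map case is __pa(), a constant-offset translation, while
an address in the vmalloc area is virtually contiguous but physically
scattered and needs a page-table walk via vmalloc_to_page(); the constants
and the lookup table below are made up.)

#include <stdio.h>

#define PAGE_SHIFT	  12
#define PAGE_OFFSET_SIM	  0xffff880000000000UL	/* made-up linear-map base */
#define VMALLOC_START_SIM 0xffffc90000000000UL	/* made-up vmalloc range */
#define VMALLOC_END_SIM	  0xffffe90000000000UL

/* stand-in for a page-table walk: vmalloc page index -> physical frame number */
static const unsigned long vmalloc_pfn[] = { 0x1234, 0x8765, 0x4242 };

static unsigned long to_phys(unsigned long addr)
{
	unsigned long idx, off;

	/* linear map: physical address differs from virtual by a constant */
	if (addr < VMALLOC_START_SIM || addr >= VMALLOC_END_SIM)
		return addr - PAGE_OFFSET_SIM;

	/* vmalloc: look up which physical page backs this virtual page */
	idx = (addr - VMALLOC_START_SIM) >> PAGE_SHIFT;
	off = addr & ((1UL << PAGE_SHIFT) - 1);
	return (vmalloc_pfn[idx] << PAGE_SHIFT) | off;
}

int main(void)
{
	printf("%#lx\n", to_phys(PAGE_OFFSET_SIM + 0x2000));	/* -> 0x2000 */
	printf("%#lx\n", to_phys(VMALLOC_START_SIM + 0x10));	/* -> 0x1234010 */
	return 0;
}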
--
tejun