Message-ID: <alpine.DEB.2.00.1103251041240.27814@router.home>
Date: Fri, 25 Mar 2011 10:45:24 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Tejun Heo <tj@...nel.org>
cc: Eric Dumazet <eric.dumazet@...il.com>,
Pekka Enberg <penberg@...nel.org>, Ingo Molnar <mingo@...e.hu>,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
npiggin@...nel.dk, David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [GIT PULL] SLAB changes for v2.6.39-rc1
On Fri, 25 Mar 2011, Tejun Heo wrote:
> I've looked through the code but can't figure out what the difference
> is. The memset code is in mm/percpu-vm.c::pcpu_populate_chunk().
>
> for_each_possible_cpu(cpu)
> memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
>
> (pcpu_chunk_addr(chunk, cpu, 0) + off) is the same vaddr as will be
> obtained by per_cpu_ptr(ptr, cpu), so all allocated memory regions are
> accessed before being returned. Dazed and confused (seems like the
> theme of today for me).
>
> Could it be that the vmalloc page is taking more than one fault?
The vmalloc page only contains per-cpu data from a single cpu, right?
Could anyone have set write-access restrictions that would require a fault
to get rid of?
Or does an access from a different cpu require a "page table sync"?
There is some rather strange-looking code in arch/x86/mm/fault.c:vmalloc_fault.