Message-Id: <20070311.135946.28788670.davem@davemloft.net>
Date: Sun, 11 Mar 2007 13:59:46 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: clameter@....com
Cc: linux-kernel@...r.kernel.org, ak@...e.com, holt@....com,
linux-ia64@...r.kernel.org, mpm@...enic.com
Subject: Re: [QUICKLIST 0/6] Arch independent quicklists V1
From: Christoph Lameter <clameter@....com>
Date: Sat, 10 Mar 2007 18:09:23 -0800 (PST)
> Page table pages have the characteristic that they are typically zero
> or in a known state when they are freed. This is usually exactly the
> same state as is needed after allocation. So it makes sense to build
> a list of freed page table pages and then consume those recently used
> pages first. Such pages have already been initialized correctly (thus
> no need to zero them) and are likely already cached in such a way
> that the MMU can use them most effectively.
I'm going to make the radical declaration that it may often be
better to simply initialize page table pages to all zeros at
allocation time.
The reason is that every time I've monitored the allocation patterns
of these things on SMP, the page table chunks always get released on a
different cpu than where they were initialized.
It's particularly suboptimal for workloads that fork off a lot of
very short-lived jobs; watch what happens during runs of lmbench's
lat_proc, for example.
The allocating side then does nothing but emit L2 cache line
ownership transactions as the pte page is touched. Especially on
chips like PowerPC, where zero initialization is absurdly cheap, we
can avoid all of those cache line transfers if we just zero the page
at allocation time.
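
Here is a minimal sketch of what I mean (function name made up for
illustration, userspace approximation): zeroing at allocation time
pulls every cache line of the page into the allocating CPU's cache in
exclusive state exactly once, instead of forcing it to steal each
line from whichever CPU last wrote the zeros at free time:

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

static void *pte_alloc_zeroed(void)
{
	void *page;

	if (posix_memalign(&page, PAGE_SIZE, PAGE_SIZE))
		return NULL;
	/*
	 * The memset runs on the CPU that is about to populate the
	 * PTEs, so the lines are already owned locally when they are
	 * written again.  This is cheap on chips with block-zeroing
	 * stores (e.g. dcbz on PowerPC).
	 */
	memset(page, 0, PAGE_SIZE);
	return page;
}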
Look, I like this trick too; sparc and sparc64 were the first Linux
platforms to implement page table caching, about 8 years ago. But
I'm wondering whether it really makes sense any more.