Message-ID: <20090510142322.690186a4@infradead.org>
Date: Sun, 10 May 2009 14:23:22 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Rik van Riel <riel@...hat.com>
Cc: Alan Cox <alan@...rguk.ukuu.org.uk>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Wu Fengguang <fengguang.wu@...el.com>, hannes@...xchg.org,
linux-kernel@...r.kernel.org, tytso@....edu, linux-mm@...ck.org,
elladan@...imo.com, npiggin@...e.de, cl@...ux-foundation.org,
minchan.kim@...il.com
Subject: Re: [PATCH -mm] vmscan: make mapped executable pages the first
class citizen
On Sun, 10 May 2009 16:37:33 -0400
Rik van Riel <riel@...hat.com> wrote:
> Alan Cox wrote:
>
> > Make your swap decisions depend upon I/O load on storage devices.
> > Make your paging decisions based upon writing and reading large
> > contiguous chunks (512K costs the same as 8K pretty much) - but you
> > already know that .
>
> Even a 2MB chunk only takes 3x as much time to write to
> or read from disk as a 4kB page.
... if your disk rotates.
If instead it's a voltage level in a transistor... the opposite is
true... it starts to approach linear-with-size then ;-)
At least inside the kernel we know which of the two types a given block
device is (ok, there are a few false positives towards rotating, but
those we could/should quirk away).
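Something like this, as a user-space toy only (the device name and the
cluster sizes are made up for illustration, not anything vmscan actually
does), just to show the rotational hint the block layer already exports:

/*
 * User-space toy only: the block layer exports its rotating/non-rotating
 * guess per queue via sysfs.  The device name and the cluster sizes below
 * are made-up examples, not actual vmscan policy.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/block/sda/queue/rotational", "r");
	int rot = 1;	/* assume rotating if we cannot tell */

	if (f) {
		if (fscanf(f, "%d", &rot) != 1)
			rot = 1;
		fclose(f);
	}

	/*
	 * rotating: seek/rotation dominates, so a big contiguous chunk is
	 * nearly free; flash: cost grows roughly linearly with size, so
	 * keep the chunks small.
	 */
	printf("sda: %s -> swap out clusters of %d pages\n",
	       rot ? "rotating" : "non-rotating",
	       rot ? 128 : 8);
	return 0;
}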
>
> > Historically BSD tackled some of this by actually swapping
> > processes out once pressure got very high
>
> Our big problem today usually isn't throughput though,
> but latency - the time it takes to bring a previously
> inactive application back to life.
Could we do a chain? E.g. store, as part of the first pageout, which page
of the vma we paged out next, and then on a re-fault page them right back
in in that order? Or even keep a (bitmap?) of pages that have been in
memory for the vma, and on a re-fault look for other pages "nearby" that
used to be in but are now out?
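Roughly what I mean with the bitmap variant, as a user-space toy; the
names (was_resident, in_core) and the window size are all invented for
illustration:

/*
 * Toy sketch of the bitmap idea: a per-vma bitmap of pages that have
 * been resident, consulted on a re-fault to find "nearby" pages worth
 * bringing back in the same I/O.  Names and sizes are invented.
 */
#include <stdio.h>

#define NR_PAGES	64
#define WINDOW		8	/* how far around the fault we look */

static unsigned char was_resident[NR_PAGES];	/* 1 = has been in memory */
static unsigned char in_core[NR_PAGES];		/* 1 = currently in memory */

static void refault(unsigned int idx)
{
	unsigned int lo = idx > WINDOW ? idx - WINDOW : 0;
	unsigned int hi = idx + WINDOW < NR_PAGES ? idx + WINDOW : NR_PAGES - 1;
	unsigned int i;

	for (i = lo; i <= hi; i++) {
		if (was_resident[i] && !in_core[i]) {
			printf("  also reading back page %u\n", i);
			in_core[i] = 1;
		}
	}
}

int main(void)
{
	unsigned int i;

	/* pretend pages 10..20 were resident once and got paged out together */
	for (i = 10; i <= 20; i++)
		was_resident[i] = 1;

	printf("re-fault on page 14:\n");
	refault(14);
	return 0;
}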
>
> If we have any throughput related memory problems,
> they often seem to be due to TLB miss penalties.
TLB misses are cheap on x86. For most non-HPC workloads they
tend to be hidden by the out-of-order execution...
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org