Message-Id: <20070422233936.97d78677.akpm@linux-foundation.org>
Date: Sun, 22 Apr 2007 23:39:36 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Miklos Szeredi <miklos@...redi.hu>
Cc: a.p.zijlstra@...llo.nl, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, neilb@...e.de, dgc@....com,
tomoki.sekiyama.qu@...achi.com, nikita@...sterfs.com,
trond.myklebust@....uio.no, yingchao.zhou@...il.com
Subject: Re: [PATCH 10/10] mm: per device dirty threshold
On Mon, 23 Apr 2007 08:29:59 +0200 Miklos Szeredi <miklos@...redi.hu> wrote:
> > > What about swapout? That can increase the number of writeback pages,
> > > without decreasing the number of dirty pages, no?
> >
> > Could we not solve that by enabling cap_account_writeback on
> > swapper_space, and thereby account swap writeback pages? Then the VM
> > knows it has outstanding IO and need not panic.
>
> Hmm, I'm not sure that would be right, because then those writeback
> pages would be accounted twice: once for swapper_space, and once for
> the real device.
>
> So there's a condition where lots of anonymous pages are turned into
> swap-cache writeback pages, and we should somehow throttle this, because
>
> >>> This means that all memory is pinned and unreclaimable and the VM gets
> >>> upset and goes oom.
>
> although it's not quite clear in my mind how the VM gets upset about
> this.
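
To put the double-accounting worry in concrete terms, here's a toy
user-space model of the arithmetic.  It is not kernel code; the struct,
the counter names and the numbers are all invented for illustration.  The
point is only that if one page under swap writeback is charged both to
swapper_space's backing device and to the real disk, whatever sums those
counters for throttling sees two pages under writeback where only one
page of real I/O exists:

/*
 * Toy model of the double-accounting concern; not kernel code.
 * All names here (bdi_model, the nr_writeback fields) are made up.
 */
#include <stdio.h>

struct bdi_model {
        const char *name;
        unsigned long nr_writeback;     /* pages this device believes are under writeback */
};

int main(void)
{
        struct bdi_model swap_bdi = { "swapper_space", 0 };
        struct bdi_model disk_bdi = { "real block device", 0 };
        unsigned long pages_really_under_io = 0;

        /* One anonymous page is written out to swap ... */
        pages_really_under_io++;

        /* ... and gets charged to the swap address space *and* to the
           underlying disk: the double count being worried about. */
        swap_bdi.nr_writeback++;
        disk_bdi.nr_writeback++;

        unsigned long seen = swap_bdi.nr_writeback + disk_bdi.nr_writeback;

        printf("pages really under IO: %lu, pages the throttle sees: %lu\n",
               pages_really_under_io, seen);
        return 0;
}

That's only the bookkeeping, of course; it says nothing about which of the
two counters the throttle should actually trust.
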
I've been scratching my head on and off for a couple of days over this.

We've traditionally had reclaim problems when there's a huge amount of
dirty MAP_SHARED data, which the VM didn't know was dirty.  It's the old
"map a file which is the same size as physical memory and write to it all"
stresstest.

But we do not have such problems with anonymous memory, and I'm darned if I
can remember why :(
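
For anyone who has forgotten it, that stresstest is roughly the following
(an illustrative sketch, not the exact historical program; the file path
and the minimal error handling are arbitrary choices):

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long page  = sysconf(_SC_PAGESIZE);
        long pages = sysconf(_SC_PHYS_PAGES);   /* roughly the size of RAM */
        off_t size = (off_t)page * pages;

        int fd = open("/tmp/bigfile", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0 || ftruncate(fd, size) < 0) {
                perror("setup");
                return 1;
        }

        char *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Dirty every page through the mapping; only the ptes know these
           pages are dirty, which is what used to make reclaim struggle. */
        for (off_t off = 0; off < size; off += page)
                map[off] = 1;

        munmap(map, size);
        close(fd);
        return 0;
}

(It wants a disk-backed filesystem under /tmp; on tmpfs there is no
writeback to speak of.)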