Message-Id: <1187641056.5337.32.camel@lappy>
Date: Mon, 20 Aug 2007 22:17:36 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Christoph Lameter <clameter@....com>
Cc: Pavel Machek <pavel@....cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
dkegel@...gle.com, David Miller <davem@...emloft.net>,
Nick Piggin <npiggin@...e.de>
Subject: Re: [RFC 2/9] Use NOMEMALLOC reclaim to allow reclaim if PF_MEMALLOC is set
On Mon, 2007-08-20 at 12:00 -0700, Christoph Lameter wrote:
> On Sat, 18 Aug 2007, Pavel Machek wrote:
>
> > > This reclaim is of particular importance to stacked filesystems that
> > > may do a lot of allocations in the write path. Reclaim will keep
> > > working as long as there are clean file-backed pages to reclaim.
> >
> > I don't get it. Let's say that we have a stacked filesystem that needs
> > it. That filesystem is broken today.
> >
> > Now you give it a second chance by reclaiming clean pages, but there is
> > no guarantee that we have any... so that filesystem is still broken
> > with your patch...?
>
> There is a guarantee that we have some, because a user space program is
> executing, meaning its executable pages can be reclaimed. The amount of
> dirty memory in the system is limited by dirty_ratio, so the VM can only
> get into trouble if there is a sufficient amount of anonymous pages and
> all executable pages have already been reclaimed. That is pretty rare.
>
> Plus, the same issue can happen today: writes are usually not completed
> during reclaim, so if the writes are sufficiently deferred then you have
> the same problem now.
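[The scheme under discussion, in rough outline: with PF_MEMALLOC set,
reclaim is restricted to pages that can be freed without any further
allocation, i.e. clean file-backed pages. A minimal sketch of that idea;
page_is_clean_file() and reclaim_page() are hypothetical helpers, not
the patch's actual code:

#include <linux/list.h>
#include <linux/mm.h>

/*
 * Rough sketch of NOMEMALLOC-style reclaim: free only clean,
 * file-backed pages, since reclaiming those requires no writeout
 * and hence no further memory allocation.
 */
static unsigned long reclaim_clean_pages(struct list_head *lru,
					 unsigned long nr_wanted)
{
	struct page *page, *next;
	unsigned long nr_freed = 0;

	list_for_each_entry_safe(page, next, lru, lru) {
		if (nr_freed >= nr_wanted)
			break;
		/* Dirty or anonymous pages would need writeout or swap. */
		if (!page_is_clean_file(page))
			continue;
		nr_freed += reclaim_page(page);
	}
	return nr_freed;
}
]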
Once we have initiated (disk) writeout we do not need more memory to
complete it; all we need to do is wait for the completion interrupt.
Networking is different: an unbounded amount of network traffic may have
to be received and processed before the completion event is found.
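[For context, the usual PF_MEMALLOC pattern around direct reclaim looks
roughly like this, simplified from the mm/page_alloc.c of that era;
allocations made while the flag is set may dip into the emergency
reserves instead of recursing into reclaim:

	/*
	 * Simplified PF_MEMALLOC pattern around direct reclaim: the
	 * flag tells the allocator that this task is already
	 * reclaiming, so nested allocations may use the reserves
	 * rather than recurse into reclaim themselves.
	 */
	p->flags |= PF_MEMALLOC;
	did_some_progress = try_to_free_pages(zonelist->zones, order,
					      gfp_mask);
	p->flags &= ~PF_MEMALLOC;

The disk case above is safe with a small reserve because completion
needs no allocation at all, whereas in the networking case every
incoming packet must be buffered before the stack can tell whether it
carries the awaited completion.]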