Message-ID: <1282811361.1975.273.camel@laptop>
Date: Thu, 26 Aug 2010 10:29:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Ted Ts'o <tytso@....edu>, Jens Axboe <jaxboe@...ionio.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Neil Brown <neilb@...e.de>, Alasdair G Kergon <agk@...hat.com>,
Chris Mason <chris.mason@...cle.com>,
Steven Whitehouse <swhiteho@...hat.com>,
Jan Kara <jack@...e.cz>,
Frederic Weisbecker <fweisbec@...il.com>,
"linux-raid@...r.kernel.org" <linux-raid@...r.kernel.org>,
"linux-btrfs@...r.kernel.org" <linux-btrfs@...r.kernel.org>,
"cluster-devel@...hat.com" <cluster-devel@...hat.com>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"reiserfs-devel@...r.kernel.org" <reiserfs-devel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [patch 1/5] mm: add nofail variants of kmalloc kcalloc and
 kzalloc

On Wed, 2010-08-25 at 20:09 -0700, David Rientjes wrote:
> > Oh, we can determine an upper bound. You might just not like it.
> > Actually ext3/ext4 shouldn't be as bad as XFS, which Dave estimated to
> > be around 400k for a transaction. My guess is that the worst case for
> > ext3/ext4 is probably around 256k or so; like XFS, most of the time,
> > it would be a lot less. (At least, if data != journalled; if we are
> > doing data journalling and every single data block begins with
> > 0xc03b3998U, we'll need to allocate a 4k page for every single data
> > block written.) We could dynamically calculate an upper bound if we
> > had to. Of course, if ext3/ext4 is attached to a network block
> > device, then it could get a lot worse than 256k.
> >
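
(Aside: 0xc03b3998U is JBD2_MAGIC_NUMBER, and a journalled block that
happens to start with it has to be escaped so log replay can't mistake
it for a journal descriptor block. A sketch of what fs/jbd2/journal.c
does when copying such a buffer into the log, paraphrased rather than
verbatim:

	if (*((__be32 *)mapped_data) == cpu_to_be32(JBD2_MAGIC_NUMBER)) {
		/*
		 * Escape: copy the block, zero the magic in the copy and
		 * set JBD2_FLAG_ESCAPE in the descriptor tag.  The copy
		 * is the extra 4k allocation per data block Ted mentions.
		 */
		tmp = jbd2_alloc(bh_in->b_size, GFP_NOFS);
		do_escape = 1;
	}

So the adversarial worst case really is one extra page per journalled
data block.)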
> On my 8GB machine, /proc/zoneinfo says the min watermark for ZONE_NORMAL
> is 5086 pages, or ~20MB. GFP_ATOMIC would allow access to ~12MB of that,
> so perhaps we should consider this an acceptable abuse of GFP_ATOMIC as
> a fallback behavior when GFP_NOFS or GFP_NOIO fails?
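
(Checking the arithmetic: 5086 pages * 4k is ~19.9MB. GFP_ATOMIC
implies ALLOC_HIGH and ALLOC_HARDER in the allocator, and the watermark
check in mm/page_alloc.c then knocks the effective min down to 3/8 of
the watermark; roughly:

	/* __zone_watermark_ok(), approximately: */
	if (alloc_flags & ALLOC_HIGH)		/* __GFP_HIGH, e.g. GFP_ATOMIC */
		min -= min / 2;			/* ~20MB -> ~10MB */
	if (alloc_flags & ALLOC_HARDER)		/* atomic allocations */
		min -= min / 4;			/* ~10MB -> ~7.5MB */

which leaves ~12.5MB of the reserve reachable, matching the ~12MB
figure above.)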

Agreed that 400k isn't much to worry about.

Not agreed with the GFP_ATOMIC statement.

Direct reclaim already has PF_MEMALLOC, but then we also need a
concurrency limit on that, otherwise you can still easily blow through
your reserves by having 100 concurrent users of it, resulting in an
upper bound of 40000k (~40MB) instead, which would be too much.
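
(For reference, the flag is set around the reclaim call itself, along
the lines of __alloc_pages_direct_reclaim() in mm/page_alloc.c:

	current->flags |= PF_MEMALLOC;	/* may now dip below the watermarks */
	progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
	current->flags &= ~PF_MEMALLOC;

and nothing there bounds how many tasks can be inside that window at
once.)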

There were patches to limit the number of direct reclaim contexts; not
sure they ever got anywhere...
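
Such a limit wouldn't have to be fancy; a purely hypothetical sketch,
with the names and the bound made up:

	static atomic_t nr_reclaimers = ATOMIC_INIT(0);
	#define MAX_RECLAIMERS	8			/* arbitrary */

	while (atomic_add_return(1, &nr_reclaimers) > MAX_RECLAIMERS) {
		atomic_dec(&nr_reclaimers);
		congestion_wait(BLK_RW_ASYNC, HZ/50);	/* back off, retry */
	}
	current->flags |= PF_MEMALLOC;
	progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
	current->flags &= ~PF_MEMALLOC;
	atomic_dec(&nr_reclaimers);

That caps the PF_MEMALLOC exposure at MAX_RECLAIMERS times the
worst-case transaction size instead of scaling with the number of
allocating tasks.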

It is something to consider in the redesign of the whole
direct-reclaim/writeback path, though...