Message-Id: <20090330142610.4b94935d.akpm@linux-foundation.org>
Date: Mon, 30 Mar 2009 14:26:10 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Eric Paris <eparis@...hat.com>
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, peterz@...radead.org,
balbir@...ux.vnet.ibm.com, eparis@...hat.com,
linux-fsdevel@...r.kernel.org, aviro@...hat.com
Subject: Re: [PATCH] make inotify event handles use GFP_NOFS
On Wed, 18 Mar 2009 14:27:32 -0400
Eric Paris <eparis@...hat.com> wrote:
> I think this is a band-aid to shut up lockdep. I could either figure
> out lockdep classes and figure out how to reclassify inotify locks since
> I believe Nick is correct when he says inotify watches pin the inode in
> core so memory pressure can't evict it.
It's pretty sad to degrade the strength of the memory allocation just
to squish a lockdep report.
> I don't want to do that as I
> think the real fix is my next generation fsnotify which does zero
> allocations under locks and so everything can be GFP_KERNEL.
I assume that's the 13-patch series further down in my todo pile.
Perhaps this workaround is suitable for 2.6.29.x, or for 2.6.30 if the
13-patch series arrives too late. But do we care enough?
> I'm
> posting this as it is clearly safe and should fix the issue.
>
> http://marc.info/?l=linux-kernel&m=123617147432377&w=2
>
> includes a lockdep warning showing that, while we are reclaiming FS
> memory, an inode may get evicted, which generates an IN_IGNORED message. Half
> of that code path already used GFP_NOFS but a second allocation to store
> the filename was using GFP_KERNEL. As a precaution I also moved the
> audit handle_event code path to use GFP_NOFS.
>
> This is much the same as the precaution in f04b30de3c82528 which did
> something similar.
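
[Editor's note: the change being described can be sketched roughly as
below. This is illustrative only -- the function and structure names are
placeholders, not the actual fs/notify/inotify code. The point is the
allocation flag: an allocation made on a path that may be entered during
filesystem reclaim must use GFP_NOFS, so that direct reclaim cannot
recurse back into the filesystem and deadlock.]

    struct kevent {
            struct inotify_event event;
            char *name;
    };

    static struct kevent *kevent_alloc(const char *name)
    {
            struct kevent *kev;

            /* this half of the path already used GFP_NOFS ... */
            kev = kmalloc(sizeof(*kev), GFP_NOFS);
            if (!kev)
                    return NULL;

            /* ... but the filename copy was GFP_KERNEL; switch it too */
            kev->name = kmalloc(strlen(name) + 1, GFP_NOFS);
            if (!kev->name) {
                    kfree(kev);
                    return NULL;
            }
            strcpy(kev->name, name);
            return kev;
    }

[GFP_NOFS tells the allocator it may sleep and reclaim clean pages, but
must not call back into filesystem code to write out or evict inodes --
which is exactly the recursion the lockdep report flagged.]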
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/