Message-ID: <1284581305.2703.127.camel@localhost.localdomain>
Date: Wed, 15 Sep 2010 16:08:25 -0400
From: Eric Paris <eparis@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jiri Slaby <jirislaby@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: audit_tree: sleep inside atomic
On Mon, 2010-09-13 at 17:05 -0700, Andrew Morton wrote:
> On Fri, 03 Sep 2010 15:52:30 +0200
> Jiri Slaby <jirislaby@...il.com> wrote:
>
> > Ideas, comments?
>
> Apparently not.
Sorry, I've been slacking off on vacation the last couple weeks.
> The question is: why is nobody reporting this bug? Obviously nobody's
> running that code path. Why not?
The only people who run this code path, that I know of, are govt orgs
who run in certified environments. I don't know of any upstream kernel
users who really would hit it.
In any case, I don't think it would be particularly painful to just
always allocate a chunk between the two locks. This is not a hot path
by any stretch of the imagination. I'll see if I can't code something
up today/tomorrow.
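
Roughly what I have in mind, as a sketch only (names follow
kernel/audit_tree.c, but this is untested and the failure handling is
hand-waved):

	/*
	 * Sketch: drop hash_lock, do the GFP_KERNEL allocation with no
	 * spinlock held, then take entry->lock.  If the allocation fails,
	 * fall through to the existing Fallback: error path.
	 */
	struct audit_chunk *new = NULL;
	int size = chunk->count - 1;

	fsnotify_get_mark(entry);
	spin_unlock(&hash_lock);

	if (size)
		new = alloc_chunk(size);	/* may sleep; no locks held */

	spin_lock(&entry->lock);
	...
	if (size && !new)
		goto Fallback;			/* allocation failed */
	...
	spin_unlock(&entry->lock);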
-Eric
> > On 06/21/2010 05:15 PM, Jiri Slaby wrote:
> > > Hi,
> > >
> > > stanse found a sleep inside atomic added by the following commit:
> > > commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
> > > Author: Eric Paris <eparis@...hat.com>
> > > Date: Thu Dec 17 20:12:05 2009 -0500
> > >
> > > audit: reimplement audit_trees using fsnotify rather than inotify
> > >
> > > Simply switch audit_trees from using inotify to using fsnotify for its
> > > inode pinning and disappearing act information.
> > >
> > > Signed-off-by: Eric Paris <eparis@...hat.com>
> > >
> > >
> > > In untag_chunk, there is
> > > 	spin_lock(&entry->lock);
> > > 	...
> > > 	new = alloc_chunk(size);
> > > 	...
> > > 	spin_unlock(&entry->lock);
> > >
> > > with
> > > static struct audit_chunk *alloc_chunk(int count)
> > > {
> > > 	struct audit_chunk *chunk;
> > > 	...
> > > 	chunk = kzalloc(size, GFP_KERNEL);
> > >
> > > But this can sleep. How big are the allocations? Could the
> > > allocation use GFP_ATOMIC, or be moved outside the spinlock?
>
> Yes, we could make it GFP_ATOMIC - the code tries to handle allocation
> failures.
>
> But if we did that we'd be adding a rarely-executed codepath to an
> apparently-never-executed code path. We'd end up shipping stuff which
> nobody had tested, ever.
>
> Plus GFP_ATOMIC is unreliable and using it because we screwed up the
> locking is lame.
>
> untag_chunk() could be converted to use GFP_KERNEL outside hash_lock
> and ->entry_lock. The usual way is to take a peek, see if it looks
> like we'll probably need to do an allocation and if so, do it outside
> the locks then free it again if it turned out that we didn't need it
> after all. Or to maintain a one-deep static-local cache and preload
> that cache if it's empty. Neither are particularly pretty.
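
For reference, the one-deep-cache variant Andrew describes would look
something like this (an illustrative sketch with a made-up helper, not
real audit_tree code; it also glosses over the fact that the chunk size
depends on the owner count):

	static struct audit_chunk *chunk_cache;	/* protected by hash_lock here */

	/* Refill the cache with GFP_KERNEL while no locks are held. */
	static int prefill_chunk_cache(int count)
	{
		struct audit_chunk *new = alloc_chunk(count);	/* may sleep */

		if (!new)
			return -ENOMEM;
		spin_lock(&hash_lock);
		if (!chunk_cache) {
			chunk_cache = new;
			new = NULL;
		}
		spin_unlock(&hash_lock);
		if (new)
			free_chunk(new);	/* lost the race, drop the spare */
		return 0;
	}

The consumer would then take chunk_cache under hash_lock instead of
calling alloc_chunk() with the spinlock held.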
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/