Message-Id: <20100913170522.f0b8b1e8.akpm@linux-foundation.org>
Date: Mon, 13 Sep 2010 17:05:22 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jiri Slaby <jirislaby@...il.com>
Cc: Eric Paris <eparis@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: audit_tree: sleep inside atomic
On Fri, 03 Sep 2010 15:52:30 +0200
Jiri Slaby <jirislaby@...il.com> wrote:
> Ideas, comments?
Apparently not.
The question is: why is nobody reporting this bug? Obviously nobody's
running that code path. Why not?
> On 06/21/2010 05:15 PM, Jiri Slaby wrote:
> > Hi,
> >
> > stanse found a sleep inside atomic added by the following commit:
> > commit fb36de479642bc9bdd3af251ae48b882d8a1ad5d
> > Author: Eric Paris <eparis@...hat.com>
> > Date: Thu Dec 17 20:12:05 2009 -0500
> >
> > audit: reimplement audit_trees using fsnotify rather than inotify
> >
> > Simply switch audit_trees from using inotify to using fsnotify for its
> > inode pinning and disappearing act information.
> >
> > Signed-off-by: Eric Paris <eparis@...hat.com>
> >
> >
> > In untag_chunk, there is
> > spin_lock(&entry->lock);
> > ...
> > new = alloc_chunk(size);
> > ...
> > spin_unlock(&entry->lock);
> >
> > with
> > static struct audit_chunk *alloc_chunk(int count)
> > {
> > struct audit_chunk *chunk;
> > ...
> > chunk = kzalloc(size, GFP_KERNEL);
> >
> > But this can sleep. How big are the allocations? Could they use
> > GFP_ATOMIC, or be moved outside the spinlock?
Yes, we could make it GFP_ATOMIC - the code tries to handle allocation
failures.
But if we did that we'd be adding a rarely-executed code path to an
apparently-never-executed code path. We'd end up shipping stuff which
nobody had tested, ever.
Plus GFP_ATOMIC is unreliable and using it because we screwed up the
locking is lame.
untag_chunk() could be converted to use GFP_KERNEL outside hash_lock
and entry->lock. The usual way is to take a peek, see if it looks
like we'll probably need to do an allocation and, if so, do it outside
the locks, then free it again if it turned out that we didn't need it
after all. Or to maintain a one-deep static-local cache and preload
that cache if it's empty. Neither is particularly pretty.