Message-Id: <1251140949.25096.16.camel@dhcp231-106.rdu.redhat.com>
Date: Mon, 24 Aug 2009 15:09:09 -0400
From: Eric Paris <eparis@...hat.com>
To: Frans Pop <elendil@...net.nl>
Cc: linux-kernel@...r.kernel.org, linux-fs-devel@...r.kernel.org,
zdenek.kabelac@...il.com, torvalds@...ux-foundation.org,
christoph.thielecke@....de, akpm@...ux-foundation.org,
viro@...iv.linux.org.uk, grant.wilson@....co.uk,
mikko.cal@...il.com
Subject: Re: [PATCH 2/3] inotify: do not BUG on idr entries at inotify destruction

On Mon, 2009-08-24 at 20:34 +0200, Frans Pop wrote:
> Eric Paris wrote:
>
> > If an inotify watch is left in the idr when an fsnotify group is destroyed,
> > this will lead to a BUG. This is not a dangerous situation; it really
> > indicates a programming bug and a memory leak. This patch changes the
> > code to use a WARN and a printk rather than killing people's boxes.
> >
> > Signed-off-by: Eric Paris <eparis@...hat.com>
> > ---
> >
> > --- a/fs/notify/inotify/inotify_fsnotify.c
> > +++ b/fs/notify/inotify/inotify_fsnotify.c
> > @@ -107,6 +107,16 @@ static bool inotify_should_send_event(struct fsnotify_group *group, struct inode
> >
> > static int idr_callback(int id, void *p, void *data)
> > {
> > + struct fsnotify_mark_entry *entry;
> > + struct inotify_inode_mark_entry *ientry;
> > +
> > + entry = p;
> > + ientry = container_of(entry, struct inotify_inode_mark_entry, fsn_entry);
> > +
> > + WARN(1, "inotify closing but id=%d still in idr. Probably leaking memory\n", id);
> > +
> > + printk(KERN_WARNING "group=%p entry->group=%p inode=%p wd=%d\n",
> > + data, entry->group, entry->inode, ientry->wd);
> > BUG();
> > return 0;
> > }
>
> I suspect you intended to remove the BUG?
You suspect correctly. :( I'll fix it in my tree before I request a
pull...
-Eric
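
For reference, here is a sketch of what the corrected callback would
presumably look like: the same hunk as quoted above, with the stray BUG()
dropped. This is an illustration against the 2.6.31-era fsnotify
structures, not the committed fix.

static int idr_callback(int id, void *p, void *data)
{
	struct fsnotify_mark_entry *entry;
	struct inotify_inode_mark_entry *ientry;

	entry = p;
	ientry = container_of(entry, struct inotify_inode_mark_entry,
			      fsn_entry);

	/*
	 * A watch left in the idr at group destruction is a leak, not
	 * corruption: warn loudly and dump the identifying fields
	 * instead of killing the box.
	 */
	WARN(1, "inotify closing but id=%d still in idr. Probably leaking memory\n", id);
	printk(KERN_WARNING "group=%p entry->group=%p inode=%p wd=%d\n",
	       data, entry->group, entry->inode, ientry->wd);

	return 0;
}

If, as in this patch series, the group teardown path walks the idr with
idr_for_each(&group->inotify_data.idr, idr_callback, group), then
returning 0 lets the walk continue so that every leaked watch gets
reported rather than only the first one.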