Message-ID: <6bffcb0e0705220745v67456895hed030364d22617f7@mail.gmail.com>
Date:	Tue, 22 May 2007 16:45:25 +0200
From:	"Michal Piotrowski" <michal.k.k.piotrowski@...il.com>
To:	"David Chinner" <dgc@....com>
Cc:	"Christoph Hellwig" <hch@....de>, xfs-masters@....sgi.com,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [xfs-masters] Re: 2.6.22-rc1-mm1

Hi David,

On 21/05/07, David Chinner <dgc@....com> wrote:
> On Fri, May 18, 2007 at 12:11:14PM +1000, David Chinner wrote:
> > On Thu, May 17, 2007 at 10:05:11PM +0200, Michal Piotrowski wrote:
> > > I applied your patch and I get another oops
> > >
> > > [  261.491499] XFS mounting filesystem loop0
> > > [  261.501641] Ending clean XFS mount for filesystem: loop0
> > > [  261.507698] SELinux: initialized (dev loop0, type xfs), uses xattr
> > > [  261.567441] XFS mounting filesystem loop0
> > > [  261.573931] allocation failed: out of vmalloc space - use vmalloc=<size> to increase size.
> > > [  261.582935] xfs_buf_get_noaddr: failed to map pages
> > > [  261.592478] Ending clean XFS mount for filesystem: loop0
> > > [  261.618543] SELinux: initialized (dev loop0, type xfs), uses xattr
> > > [  261.691563] XFS mounting filesystem loop0
> > > [  261.698927] allocation failed: out of vmalloc space - use vmalloc=<size> to increase size.
> > >                                   ^^^^^^^^^^^^^^^^^^^^
> > >                                   interesting
> >
> > Yeah, looks like a vmalloc leak is occurring. I haven't noticed
> > it before because:
> >
> > VmallocTotal: 137427898368 kB
> > VmallocUsed:   3128272 kB
> > VmallocChunk: 137424770048 kB
> >
> > It takes a long time to leak enough vmapped space to run out on ia64...
> >
> > That tends to imply we have a mapped buffer being leaked somewhere.
> > Interestingly, I don't see a memory leak so we must be freeing the
> > memory associated with the buffer, just not unmapping it first. Not
> > sure how that can happen yet.....
> .....
> >
> > Looks like we're leaking 272kB of vmalloc space on each mount/unmount
> > cycle. I'm trying to track this down now....
>
> I've found what is going on here - kmem_alloc() is decidedly more
> forgiving than manually built page arrays and vmap/vunmap. Prior
> to this change we wouldn't have even leaked memory....
>
> Christoph - this is an interaction with xfs_buf_associate_memory();
> I'm not sure what it is doing is at all safe now that it never gets
> passed kmem_alloc()d memory - it works for the log recovery case
> because we use it in pairs - once to shorten the buffer and then once
> to put it back the way it was.
>
> But that doesn't work for the log buffers (we never return them to their
> original state) and the log wrap case looks to work mostly by accident
> now (and could possibly lead to double freeing pages)....
>
> It seems that what we really need with the new code is a xfs_buf_clone()
> operation followed by trimming the range to what the secondary I/O needs
> to span. This would work for the log buffer case as well. Your thoughts?
>
> In the meantime, the following patch appears to fix the leak.

After a few minutes of mount/umount cycling everything seems to be OK;
the problem is fixed.
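A reproduction loop along these lines confirms the fix (the image path and mount point are assumed, and the mount loop itself needs root, so it is left commented out): watch VmallocUsed while cycling the mount; on the buggy kernel it climbs roughly 272kB per iteration, with the patch it stays flat.

```shell
# Hypothetical repro loop -- requires root and a prepared XFS image:
#
# for i in $(seq 1 50); do
#     mount -o loop /tmp/xfs.img /mnt/test
#     umount /mnt/test
#     grep VmallocUsed /proc/meminfo
# done

# The grep emits lines like the one in the report above; the kB figure
# can be pulled out for a before/after diff:
sample='VmallocUsed:   3128272 kB'
used_kb=$(echo "$sample" | awk '{print $2}')
echo "$used_kb"    # prints 3128272
```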

Thanks!

>
> Cheers,
>
> Dave.

Regards,
Michal

-- 
Michal K. K. Piotrowski
Kernel Monkeys
(http://kernel.wikidot.com/start)
