Message-ID: <20070521102321.GA2632@lst.de>
Date: Mon, 21 May 2007 12:23:21 +0200
From: Christoph Hellwig <hch@....de>
To: David Chinner <dgc@....com>
Cc: Michal Piotrowski <michal.k.k.piotrowski@...il.com>,
Christoph Hellwig <hch@....de>, xfs-masters@....sgi.com,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [xfs-masters] Re: 2.6.22-rc1-mm1
On Mon, May 21, 2007 at 08:11:42PM +1000, David Chinner wrote:
> Christoph - this is an interaction with xfs_buf_associate_memory();
> I'm not sure what it is doing is at all safe now that it never gets
> passed kmem_alloc()d memory - it works for the log recovery case
> because we use it in pairs - once to shorten the buffer and then once
> to put it back the way it was.
>
> But that doesn't work for the log buffers (we never return them to their
> original state) and the log wrap case looks to work mostly by accident
> now (and could possibly lead to double-freeing pages)....
>
> It seems that what we really need with the new code is a xfs_buf_clone()
> operation followed by trimming the range to what the secondary I/O needs
> to span. This would work for the log buffer case as well. Your thoughts?
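For anyone following along, the pairing in question is the
XFS_BUF_SET_PTR dance in xlog_do_recovery_pass (quoted from memory, so
the details may be slightly off):

	/*
	 * Log wrap case, roughly: temporarily repoint the buffer into
	 * the middle of its own memory to read the part of the record
	 * that wrapped past the end of the log, then restore the
	 * original mapping.  XFS_BUF_SET_PTR() is
	 * xfs_buf_associate_memory() underneath.
	 */
	bufaddr = XFS_BUF_PTR(hbp);
	XFS_BUF_SET_PTR(hbp, bufaddr + BBTOB(split_hblks),
			BBTOB(hblks - split_hblks));
	error = xlog_bread(log, 0, hblks - split_hblks, hbp);
	if (error)
		goto bread_err2;
	XFS_BUF_SET_PTR(hbp, bufaddr, BBTOB(hblks));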
xfs_buf_associate_memory is a mess. My original plan was to get rid of
it, but I left that out to keep the patchset small and easily
reviewable; it seems that was a mistake. My plan is the following:
- xlog_bread and thus the whole buffer I/O path grows an iooffset
  parameter that specifies at which offset into the buffer we start
  the actual I/O.  That gets rid of all the xfs_buf_associate_memory
  uses in the log recovery code (rough sketch below)
- add a buffer clone operation as suggested by you above, and use
  the offset in xlog_sync as well (second sketch below)
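Roughly what I have in mind for the first step - an uncompiled sketch
only, so take the exact signature and macro usage with a grain of salt:

int
xlog_bread(
	xlog_t		*log,
	xfs_daddr_t	blk_no,
	int		nbblks,
	xfs_buf_t	*bp,
	size_t		iooffset)	/* new: byte offset into bp */
{
	/* the I/O must still fit inside the buffer's memory */
	ASSERT(iooffset + BBTOB(nbblks) <= XFS_BUF_SIZE(bp));

	XFS_BUF_SET_ADDR(bp, log->l_logBBstart + blk_no);
	XFS_BUF_READ(bp);
	XFS_BUF_SET_COUNT(bp, BBTOB(nbblks));

	/*
	 * The buffer I/O path would add iooffset when mapping the
	 * buffer's pages into the bio, so recovery no longer has to
	 * rewrite the buffer's memory with xfs_buf_associate_memory()
	 * before and after a partial read.
	 */
	xfsbdstrat(log->l_mp, bp);
	return xfs_iowait(bp);
}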
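And the clone operation could look something like this - again only a
sketch, with field and helper names taken from the current xfs_buf
code, untested, and the buffer free path would need matching changes
to drop the extra page references:

xfs_buf_t *
xfs_buf_clone(
	xfs_buf_t	*sbp)
{
	xfs_buf_t	*bp;
	int		i;

	bp = xfs_buf_get_empty(XFS_BUF_SIZE(sbp), sbp->b_target);
	if (!bp)
		return NULL;
	if (_xfs_buf_get_pages(bp, sbp->b_page_count, 0)) {
		xfs_buf_free(bp);
		return NULL;
	}

	/*
	 * Share the source buffer's pages and take an extra reference
	 * on each, so that releasing either buffer cannot free the
	 * pages twice - which is exactly the double free the current
	 * log wrap code risks.
	 */
	for (i = 0; i < sbp->b_page_count; i++) {
		page_cache_get(sbp->b_pages[i]);
		bp->b_pages[i] = sbp->b_pages[i];
	}
	bp->b_addr = sbp->b_addr;
	bp->b_flags |= XBF_MAPPED;

	return bp;
}

The caller would then trim the clone to the range the secondary I/O
needs to span, instead of fiddling with the original buffer's memory.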
Until then your patch below looks fine.