Message-ID: <18195.5992.545271.871100@notabene.brown>
Date: Mon, 15 Oct 2007 17:31:52 +1000
From: Neil Brown <neilb@...e.de>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Milan Broz <mbroz@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Torsten Kaiser <just.for.lkml@...glemail.com>,
linux-kernel@...r.kernel.org, Alasdair G Kergon <agk@...hat.com>,
dm-devel@...hat.com
Subject: Re: 2.6.23-mm1
On Monday October 15, jens.axboe@...cle.com wrote:
> On Mon, Oct 15 2007, Milan Broz wrote:
> >
> > clone->bi_size is zero here now, so crypt_free_buffer_pages will not
> > work correctly (previously it was passed the count of processed bytes).
> >
> > But because it seems that a bio cannot be processed partially now, we can
> > simplify crypt_free_buffer_pages to always free all allocated pages.
>
> Neil, this doesn't look very good. dm-crypt needs to know the clone io
> size, so ->bi_size was definitely used properly in this context before.
> Now it's gone. Suggestions on how to fix that up?
How about the following - this approach gains even more code
simplification :-)
I originally had the patch that removes the 'size' argument sitting
after a patch (series) that left bi_size unchanged.  It seemed that
patch would face a harder path upstream, so I re-ordered them and
missed this dependency.  Mea culpa.
>
> I've been less than impressed with the bi_end_io() patchset so far, it's
> been full of typos and bad conversions. I'm tempted to revert the whole
> thing, clearly it wasn't ready for merge.
I must have missed something ....
I've seen:
  - A fix for a bi_end_io conversion in jfs that I missed.
  - A correction for that fix ("return 0" was removed instead of
    just the '0').
  - Some fixes for code that is only in -mm (which I didn't do
    because I thought you wanted it against a non-mm tree).
I think it was definitely ready for merging in -mm. Possibly not for
mainline just yet.
NeilBrown
Signed-off-by: Neil Brown <neilb@...e.de>
### Diffstat output
./drivers/md/dm-crypt.c | 30 +++++-------------------------
1 file changed, 5 insertions(+), 25 deletions(-)
diff .prev/drivers/md/dm-crypt.c ./drivers/md/dm-crypt.c
--- .prev/drivers/md/dm-crypt.c 2007-10-15 17:18:20.000000000 +1000
+++ ./drivers/md/dm-crypt.c 2007-10-15 17:21:43.000000000 +1000
@@ -444,32 +444,12 @@ static struct bio *crypt_alloc_buffer(st
}
static void crypt_free_buffer_pages(struct crypt_config *cc,
- struct bio *clone, unsigned int bytes)
+ struct bio *clone)
{
- unsigned int i, start, end;
+ unsigned int i;
struct bio_vec *bv;
- /*
- * This is ugly, but Jens Axboe thinks that using bi_idx in the
- * endio function is too dangerous at the moment, so I calculate the
- * correct position using bi_vcnt and bi_size.
- * The bv_offset and bv_len fields might already be modified but we
- * know that we always allocated whole pages.
- * A fix to the bi_idx issue in the kernel is in the works, so
- * we will hopefully be able to revert to the cleaner solution soon.
- */
- i = clone->bi_vcnt - 1;
- bv = bio_iovec_idx(clone, i);
- end = (i << PAGE_SHIFT) + (bv->bv_offset + bv->bv_len) - clone->bi_size;
- start = end - bytes;
-
- start >>= PAGE_SHIFT;
- if (!clone->bi_size)
- end = clone->bi_vcnt;
- else
- end >>= PAGE_SHIFT;
-
- for (i = start; i < end; i++) {
+ for (i = 0; i < clone->bi_vcnt; i++) {
bv = bio_iovec_idx(clone, i);
BUG_ON(!bv->bv_page);
mempool_free(bv->bv_page, cc->page_pool);
@@ -539,7 +519,7 @@ static void crypt_endio(struct bio *clon
* free the processed pages
*/
if (!read_io) {
- crypt_free_buffer_pages(cc, clone, clone->bi_size);
+ crypt_free_buffer_pages(cc, clone);
goto out;
}
@@ -628,7 +608,7 @@ static void process_write(struct dm_cryp
ctx.idx_out = 0;
if (unlikely(crypt_convert(cc, &ctx) < 0)) {
- crypt_free_buffer_pages(cc, clone, clone->bi_size);
+ crypt_free_buffer_pages(cc, clone);
bio_put(clone);
crypt_dec_pending(io, -EIO);
return;
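
For readers skimming the patch, the simplified crypt_free_buffer_pages()
that results from the first hunk reads roughly as below.  This is a
sketch assembled from the diff context; the bv->bv_page = NULL reset and
the closing brace are assumed from the surrounding code, which the hunk
does not show.

static void crypt_free_buffer_pages(struct crypt_config *cc,
				    struct bio *clone)
{
	unsigned int i;
	struct bio_vec *bv;

	/*
	 * Every page in the clone was allocated whole from cc->page_pool
	 * in crypt_alloc_buffer(), and a bio is no longer completed
	 * partially, so all bi_vcnt entries can be freed unconditionally.
	 */
	for (i = 0; i < clone->bi_vcnt; i++) {
		bv = bio_iovec_idx(clone, i);
		BUG_ON(!bv->bv_page);
		mempool_free(bv->bv_page, cc->page_pool);
		bv->bv_page = NULL;	/* assumed; not visible in the hunk */
	}
}

Compared with the removed code, there is no longer any need to translate
bi_size and bv_offset back into a page range, which is exactly the
simplification Milan suggested.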