Message-Id: <1238583884-13517-5-git-send-email-tj@kernel.org>
Date:	Wed,  1 Apr 2009 20:04:40 +0900
From:	Tejun Heo <tj@...nel.org>
To:	axboe@...nel.dk, bharrosh@...asas.com, linux-kernel@...r.kernel.org
Cc:	Tejun Heo <tj@...nel.org>
Subject: [PATCH 4/8] bio: actually inline inline bvecs into bio

Impact: cleanup

BIO_INLINE_VECS bvecs are inlined into the bio to avoid a separate
bvec allocation for small transfers.  This was achieved by declaring
a zero-sized bvec array at the end of the bio and allocating the bio
with extra bytes at the end.  As BIO_INLINE_VECS is a constant, there
is no reason for this allocation trick.  This patch simply declares a
BIO_INLINE_VECS-sized bvec array at the end of struct bio.  This will
help fix bio_kmalloc().
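
For illustration only, here is a minimal C sketch of the two layouts:
the old zero-sized trailing array that relies on over-allocating,
versus a fixed-size inline array whose storage is counted by sizeof()
itself.  The names (old_bio, new_bio, old_alloc, new_alloc,
INLINE_VECS) are hypothetical and not from the kernel sources.

	#include <stdlib.h>

	struct vec { void *page; unsigned int len, off; };

	/* Old layout: zero-sized tail array, storage supplied by
	 * over-allocating past sizeof(struct old_bio). */
	struct old_bio {
		unsigned int	cnt;
		struct vec	inline_vecs[0];	/* GNU zero-length array */
	};

	struct old_bio *old_alloc(unsigned int nvecs)
	{
		/* caller must add room for the tail vecs by hand */
		return malloc(sizeof(struct old_bio) +
			      nvecs * sizeof(struct vec));
	}

	/* New layout: the constant-sized array is part of the struct. */
	#define INLINE_VECS	4

	struct new_bio {
		unsigned int	cnt;
		struct vec	inline_vecs[INLINE_VECS];
	};

	struct new_bio *new_alloc(void)
	{
		return malloc(sizeof(struct new_bio));	/* no size arithmetic */
	}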

Signed-off-by: Tejun Heo <tj@...nel.org>
---
 fs/bio.c            |    9 +--------
 include/linux/bio.h |    9 +++++----
 2 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/fs/bio.c b/fs/bio.c
index a040cde..7574839 100644
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -31,12 +31,6 @@
 
 DEFINE_TRACE(block_split);
 
-/*
- * Test patch to inline a certain number of bi_io_vec's inside the bio
- * itself, to shrink a bio data allocation from two mempool calls to one
- */
-#define BIO_INLINE_VECS		4
-
 static mempool_t *bio_split_pool __read_mostly;
 
 /*
@@ -1550,7 +1544,6 @@ void bioset_free(struct bio_set *bs)
  */
 struct bio_set *bioset_create(unsigned int pool_size, unsigned int front_pad)
 {
-	unsigned int back_pad = BIO_INLINE_VECS * sizeof(struct bio_vec);
 	struct bio_set *bs;
 
 	bs = kzalloc(sizeof(*bs), GFP_KERNEL);
@@ -1559,7 +1552,7 @@ struct bio_set *bioset_create(unsigned int pool_size, unsigned int front_pad)
 
 	bs->front_pad = front_pad;
 
-	bs->bio_slab = bio_find_or_create_slab(front_pad + back_pad);
+	bs->bio_slab = bio_find_or_create_slab(front_pad);
 	if (!bs->bio_slab) {
 		kfree(bs);
 		return NULL;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index b05b1d4..14e5f42 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -40,6 +40,8 @@
 #define BIO_MAX_SIZE		(BIO_MAX_PAGES << PAGE_CACHE_SHIFT)
 #define BIO_MAX_SECTORS		(BIO_MAX_SIZE >> 9)
 
+#define BIO_INLINE_VECS		4
+
 /*
  * was unsigned short, but we might as well be ready for > 64kB I/O pages
  */
@@ -104,11 +106,10 @@ struct bio {
 	bio_destructor_t	*bi_destructor;	/* destructor */
 
 	/*
-	 * We can inline a number of vecs at the end of the bio, to avoid
-	 * double allocations for a small number of bio_vecs. This member
-	 * MUST obviously be kept at the very end of the bio.
+	 * Inline a small number of vecs in the bio to avoid double
+	 * allocations for a small number of bio_vecs.
 	 */
-	struct bio_vec		bi_inline_vecs[0];
+	struct bio_vec		bi_inline_vecs[BIO_INLINE_VECS];
 };
 
 /*
-- 
1.6.0.2
