Message-ID: <x49eikmph5t.fsf@segfault.boston.devel.redhat.com>
Date: Mon, 15 Feb 2010 12:01:18 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Richard Kennedy <richard@....demon.co.uk>
Cc: Vivek Goyal <vgoyal@...hat.com>,
Jens Axboe <jens.axboe@...cle.com>,
Corrado Zoccolo <czoccolo@...il.com>,
Gui Jianfeng <guijianfeng@...fujitsu.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] cfq: reorder cfq_queue removing padding on 64bit & allowing more objects/slab in it's kmem_cache
Richard Kennedy <richard@....demon.co.uk> writes:
> This removes 8 bytes of padding from struct cfq_queue on 64-bit builds,
> shrinking its size to 256 bytes, so it fits into one fewer cacheline and
> allows one more object/slab in its kmem_cache.
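For context, here is the kind of hole involved (a toy sketch only, not the
actual cfq_queue fields): on 64-bit, a 4-byte member followed by a member
needing 8-byte alignment leaves a 4-byte hole that reordering removes.

/* Toy example only -- not the real cfq_queue layout. */
struct padded {
	unsigned int	a;	/* 4 bytes, then a 4-byte hole ...        */
	unsigned long	b;	/* ... because this needs 8-byte alignment */
	unsigned int	c;	/* 4 bytes + 4 bytes of tail padding       */
};				/* sizeof == 24 on 64-bit */

struct reordered {
	unsigned long	b;	/* 8 bytes            */
	unsigned int	a;	/* 4 bytes            */
	unsigned int	c;	/* 4 bytes, no holes  */
};				/* sizeof == 16 on 64-bit */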
OK, I ran pahole to verify your findings:
$ pahole -C cfq_queue build/master/block/cfq-iosched.o
struct cfq_queue {
...
	unsigned int          allocated_slice;    /* 136     4 */

	/* XXX 4 bytes hole, try to pack */

	long unsigned int     slice_start;        /* 144     8 */
...
	pid_t                 pid;                /* 216     4 */

	/* XXX 4 bytes hole, try to pack */

	struct cfq_rb_root *  service_tree;       /* 224     8 */
...
	/* size: 264, cachelines: 5, members: 34 */
	/* sum members: 256, holes: 2, sum holes: 8 */
	/* last cacheline: 8 bytes */
};
After applying the patch, it does indeed save a cacheline.
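Back-of-the-envelope, assuming 64-byte cachelines and a 4k slab with the
default 8-byte object alignment and no debug overhead: 264 bytes spans 5
cachelines while 256 fits in 4, and 4096/264 = 15 objects per slab versus
4096/256 = 16, which matches the one extra object/slab claimed in the
changelog.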
Reviewed-by: Jeff Moyer <jmoyer@...hat.com>