Date:	Thu, 7 Dec 2006 11:22:18 -0800
From:	"Nate Diller" <nate.diller@...il.com>
To:	"Chen, Kenneth W" <kenneth.w.chen@...el.com>
Cc:	"Jens Axboe" <jens.axboe@...cle.com>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch] speed up single bio_vec allocation

On 12/6/06, Chen, Kenneth W <kenneth.w.chen@...el.com> wrote:
> Jens Axboe wrote on Wednesday, December 06, 2006 2:09 AM
> > > > I will try that too.  I'm a bit touchy about sharing a cache line
> > > > between different bios.  But given the 200,000 I/Os per second we are
> > > > currently pushing through the kernel, the chances of two CPUs working
> > > > on two bios that sit in the same cache line are pretty small.
> > >
> > > Yep I really think so. Besides, it's not like we are repeatedly writing
> > > to these objects in the first place.
> >
> > This is what I had in mind, in case it wasn't completely clear. Not
> > tested, other than it compiles. Basically it eliminates the small
> > bio_vec pool, grows the bio by 16 bytes on 64-bit archs (12 bytes on
> > 32-bit archs) instead, and uses the room at the end for the bio_vec
> > structure.
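
For illustration, a minimal user-space sketch of the idea described above
(not the actual patch; the struct and field names below are made up): the
bio carries one embedded bio_vec, so the common single-segment case needs
no second allocation and only larger requests fall back to a separate array.

#include <stdlib.h>

/* stand-in for struct bio_vec: 16 bytes on 64-bit, 12 bytes on 32-bit */
struct bvec_model {
	void		*bv_page;
	unsigned int	bv_len;
	unsigned int	bv_offset;
};

/* stand-in for struct bio, grown by one embedded bio_vec at the end */
struct bio_model {
	unsigned short		bi_vcnt;	/* segments in use */
	unsigned short		bi_max_vecs;	/* capacity of bi_io_vec */
	struct bvec_model	*bi_io_vec;	/* inline slot or heap array */
	struct bvec_model	bi_inline_vec;	/* room for the 1-segment case */
};

static int bio_model_init(struct bio_model *bio, unsigned short nr_vecs)
{
	bio->bi_vcnt = 0;
	if (nr_vecs <= 1) {
		/* common case: use the embedded slot, no extra allocation */
		bio->bi_io_vec = &bio->bi_inline_vec;
		bio->bi_max_vecs = 1;
		return 0;
	}
	/* multi-segment bios still take a separately allocated array */
	bio->bi_io_vec = calloc(nr_vecs, sizeof(*bio->bi_io_vec));
	if (!bio->bi_io_vec)
		return -1;
	bio->bi_max_vecs = nr_vecs;
	return 0;
}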
>
> Yeah, I had a very similar patch queued internally for the large benchmark
> measurement.  I will post the result as soon as I get it.
>
>
> > I still can't help but think we can do better than this, and that this
> > is nothing more than optimizing for a benchmark. For high-performance
> > I/O, you will be doing > 1 page bios anyway and this patch won't help
> > you at all. Perhaps we can just kill the bio_vec slabs completely and
> > create bio slabs of differing sizes instead. So instead of having 1
> > bio slab and 5 bio_vec slabs, change that to 5 bio slabs that leave room
> > for the bio_vec list at the end. That would always eliminate the extra
> > allocation, at the cost of blowing the 256-page case up into an order-1
> > allocation (256*16 + sizeof(*bio) > PAGE_SIZE) on 64-bit archs with 4KB
> > pages, which is something I've always tried to avoid.
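
To make the size arithmetic above concrete (the sizeof(struct bio) value and
the size classes below are assumptions for illustration, not the real slab
numbers):

#include <stdio.h>

#define PAGE_SZ	4096u	/* 4KB pages */
#define BVEC_SZ	16u	/* sizeof(struct bio_vec) on 64-bit */
#define BIO_SZ	128u	/* assumed sizeof(struct bio), illustrative only */

int main(void)
{
	/* hypothetical segment counts for the "bio + vec list" slabs */
	const unsigned int nr_vecs[] = { 1, 4, 16, 64, 256 };
	unsigned int i;

	for (i = 0; i < sizeof(nr_vecs) / sizeof(nr_vecs[0]); i++) {
		unsigned int obj = BIO_SZ + nr_vecs[i] * BVEC_SZ;

		/* 256 * 16 alone already fills a 4KB page, so adding the bio
		 * pushes the object past PAGE_SIZE and forces order-1 pages */
		printf("%3u vecs: %5u bytes%s\n", nr_vecs[i], obj,
		       obj > PAGE_SZ ? "  (> PAGE_SIZE, order-1)" : "");
	}
	return 0;
}

Only the largest class crosses the page boundary, which is the order-1 cost
being pointed at above.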
>
> I took a quick look at the biovec-* slab stats on various production
> machines: the majority of allocations are for 1 and 4 segments, and usage
> falls off quickly at 16 or more.  256-segment biovec allocations are really
> rare.  I think it makes sense to heavily bias towards smaller biovec
> allocations and have a separate allocation path for the really large ones.
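
The kind of quick check described above can be reproduced by looking at the
biovec-* lines in /proc/slabinfo ("grep biovec /proc/slabinfo" shows the
same thing); the small C version below is just for illustration and needs
enough privilege to read the file.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];
	FILE *f = fopen("/proc/slabinfo", "r");

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	/* print every biovec-* cache line: active objects, total objects,
	 * object size, and so on, one size class per line */
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "biovec-", 7) == 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}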

What file system?  Have you tested with more than one?  Have you
tested with file systems that build their own bios instead of using
get_block() calls?  Have you tested with large files or streaming
workloads?  How about direct I/O?

I think that a "heavy bias" toward small biovecs is FS- and
workload-dependent, and that it's irresponsible to make such unjustified
changes just to show improvement on your particular benchmark.

I do, however, agree with killing SLAB_HWCACHE_ALIGN for biovecs,
pending reasonable regression benchmarks.
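
For what the alignment question is worth in numbers, here is the worst-case
packing arithmetic, assuming the allocator really does round a single
bio_vec up to a full cache line (64-byte lines and 4KB pages assumed; real
slab behaviour and bookkeeping overhead may differ):

#include <stdio.h>

int main(void)
{
	const unsigned int page = 4096, cache_line = 64, bvec = 16;

	/* if each 16-byte biovec-1 object were padded out to a cache line */
	printf("biovec-1 per page, cacheline-aligned: %u\n", page / cache_line);
	/* versus packing them back to back with no forced alignment */
	printf("biovec-1 per page, unaligned:          %u\n", page / bvec);
	return 0;
}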

NATE
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
