Date:	Mon, 24 Nov 2008 19:13:39 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	"Vitaly V. Bursov" <vitalyb@...enet.dn.ua>,
	linux-kernel@...r.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases

On Mon, Nov 24 2008, Jeff Moyer wrote:
> Jens Axboe <jens.axboe@...cle.com> writes:
> 
> > nfsd aside (which does seem to have some different behaviour skewing the
> > results), the original patch came about because dump(8) has a really
> > stupid design that offloads IO to a number of processes. This basically
> > makes fairly sequential IO more random with CFQ, since each process gets
> > its own io context. My feeling is that we should fix dump instead of
> > introducing a fair bit of complexity (and slowdown) in CFQ. I'm not
> > aware of any other good programs out there that would do something
> > similar, so I don't think there's a lot of merit to spending cycles on
> > detecting cooperating processes.
> >
> > Jeff will take a look at fixing dump instead, and I may have promised
> him that Santa will bring him something nice this year if he does (since
> > I'm sure it'll be painful on the eyes).
> 
> Sorry to bring up this topic once again, but we've recently run into
> another instance where the close cooperator patch helps significantly.
> The case is KVM using the virtio disk driver.  The host side uses
> posix_aio calls to issue I/O on behalf of the guest.  It's worth noting
> that pthread_create does not pass CLONE_IO (at least, that was my reading
> of the code).  It is questionable whether it really should, as that would
> change the I/O scheduling dynamics.
> 
> So, Jens, what do you think?  Should we collect some performance numbers
> to make sure that the close cooperator patch doesn't hurt the common
> case?

No, posix aio is a piece of crap on Linux/glibc, so we want to be fixing
that instead. A quick fix is again to use CLONE_IO, though posix aio
needs more work than that. I told the qemu guys not to use posix aio a
long time ago, since it does stink and doesn't perform well under any
circumstances... So I don't consider that a valid use case; there's a
reason that basically nobody is using posix aio.
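
For reference, a rough sketch of the CLONE_IO quick fix mentioned above:
spawn the I/O worker with clone(2) and pass CLONE_IO (available since
2.6.25) so it shares the parent's io_context, which is essentially what
pthread_create does today minus that one flag. The worker body and the
path argument below are just placeholders.

/* Rough sketch: a fork-style I/O worker that shares the caller's io_context
 * by passing CLONE_IO to clone(2) (Linux 2.6.25+). With the flag set, CFQ
 * sees the worker's requests as coming from the same context as the parent
 * instead of treating it as an unrelated process. Worker body is a stub. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef CLONE_IO
#define CLONE_IO 0x80000000     /* share io context with parent */
#endif

#define STACK_SIZE (64 * 1024)

static int io_worker(void *arg)
{
        const char *path = arg; /* placeholder: issue reads/writes here */
        (void)path;
        return 0;
}

int main(int argc, char **argv)
{
        char *stack = malloc(STACK_SIZE);
        pid_t pid;

        if (!stack)
                return 1;

        /* SIGCHLD so waitpid() works; CLONE_IO so the worker shares our
         * io_context. Without CLONE_IO each worker gets its own context,
         * and CFQ treats the (really sequential) stream as random I/O. */
        pid = clone(io_worker, stack + STACK_SIZE, CLONE_IO | SIGCHLD,
                    argc > 1 ? argv[1] : NULL);
        if (pid < 0) {
                perror("clone");
                return 1;
        }

        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
}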

-- 
Jens Axboe
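
For context: glibc implements the POSIX AIO calls with user-space helper
threads (which is why pthread_create's clone flags matter here at all),
whereas the kernel-native AIO interface (io_setup/io_submit/io_getevents)
submits I/O from the caller's own io_context. A rough sketch using libaio,
assuming the file can be opened with O_DIRECT and a 4k-aligned buffer:

/* Rough sketch: one 4k read submitted through kernel-native AIO (libaio,
 * link with -laio). The request is issued from the calling process's own
 * io_context rather than being handed off to glibc helper threads the way
 * POSIX AIO does. O_DIRECT typically requires aligned buffers and offsets. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;
        void *buf;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (posix_memalign(&buf, 4096, 4096)) {
                perror("posix_memalign");
                return 1;
        }

        if (io_setup(1, &ctx) < 0) {            /* one in-flight request */
                fprintf(stderr, "io_setup failed\n");
                return 1;
        }

        io_prep_pread(&cb, fd, buf, 4096, 0);   /* 4k read at offset 0 */
        if (io_submit(ctx, 1, cbs) != 1) {
                fprintf(stderr, "io_submit failed\n");
                return 1;
        }

        if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
                printf("read completed, res=%ld\n", (long)ev.res);

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
}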
