Message-ID: <20081124185123.GL26308@kernel.dk>
Date: Mon, 24 Nov 2008 19:51:23 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: "Vitaly V. Bursov" <vitalyb@...enet.dn.ua>,
linux-kernel@...r.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases
On Mon, Nov 24 2008, Jeff Moyer wrote:
> Jens Axboe <jens.axboe@...cle.com> writes:
>
> > On Mon, Nov 24 2008, Jeff Moyer wrote:
> >> Jens Axboe <jens.axboe@...cle.com> writes:
> >>
> >> > nfsd aside (which does seem to have some different behaviour skewing the
> >> > results), the original patch came about because dump(8) has a really
> >> > stupid design that offloads IO to a number of processes. This basically
> >> > makes fairly sequential IO more random with CFQ, since each process gets
> >> > its own io context. My feeling is that we should fix dump instead of
> >> > introducing a fair bit of complexity (and slowdown) in CFQ. I'm not
> >> > aware of any other good programs out there that would do something
> >> > similar, so I don't think there's a lot of merit to spending cycles on
> >> > detecting cooperating processes.
> >> >
> >> > Jeff will take a look at fixing dump instead, and I may have promised
> >> > him that Santa will bring him something nice this year if he does (since
> >> > I'm sure it'll be painful on the eyes).
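For reference, sharing an io context between tasks just means passing
CLONE_IO to clone(2), which has been available since 2.6.25. A minimal
sketch of what dump could do instead of a plain fork() for its IO
helpers; the worker body and stack size here are made up for
illustration:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

#ifndef CLONE_IO
#define CLONE_IO 0x80000000	/* share io context, include/linux/sched.h */
#endif

#define STACK_SZ (64 * 1024)

/* Hypothetical IO worker; with CLONE_IO its requests land in the
 * parent's io context instead of a fresh one. */
static int worker(void *arg)
{
	/* ... issue (part of) the sequential read stream here ... */
	return 0;
}

int main(void)
{
	char *stack = malloc(STACK_SZ);
	pid_t pid;

	if (!stack)
		return 1;
	/* CLONE_VM keeps the stack handling simple; CLONE_IO is the
	 * flag that matters for the scheduling behaviour. */
	pid = clone(worker, stack + STACK_SZ,
		    CLONE_VM | CLONE_IO | SIGCHLD, NULL);
	if (pid < 0)
		return 1;
	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}

Without CLONE_IO, each child gets its own io context, which is exactly
what turns the fairly sequential stream random as far as CFQ is
concerned.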
> >>
> >> Sorry to drum up this topic once again, but we've recently run into
> >> another instance where the close cooperator patch helps significantly.
> >> The case is KVM using the virtio disk driver. The host side uses
> >> POSIX AIO calls to issue I/O on behalf of the guest. It's worth noting
> >> that pthread_create does not pass CLONE_IO (at least that was my reading
> >> of the code). It is questionable whether it really should, as that would
> >> change the I/O scheduling dynamics.
> >>
> >> So, Jens, what do you think? Should we collect some performance numbers
> >> to make sure that the close cooperator patch doesn't hurt the common
> >> case?
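For anyone who hasn't looked at it, the userspace side is just the
plain aio(7) interface. A minimal sketch (file name and sizes made up,
link with -lrt) of the kind of adjacent reads that can end up in
separate io contexts, because glibc hands each request to a pool thread
created without CLONE_IO:

#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(void)
{
	static char buf[2][CHUNK];
	struct aiocb cb[2];
	const struct aiocb *list[2];
	int fd = open("/tmp/testfile", O_RDONLY);
	int i;

	if (fd < 0)
		return 1;

	memset(cb, 0, sizeof(cb));
	for (i = 0; i < 2; i++) {
		cb[i].aio_fildes = fd;
		cb[i].aio_buf = buf[i];
		cb[i].aio_nbytes = CHUNK;
		cb[i].aio_offset = (off_t)i * CHUNK;
		if (aio_read(&cb[i]))
			return 1;
		list[i] = &cb[i];
	}
	/* Two adjacent reads, but possibly two different glibc pool
	 * threads, hence two io contexts as far as CFQ is concerned. */
	aio_suspend(list, 2, NULL);
	close(fd);
	return 0;
}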
> >
> > No, posix aio is a piece of crap on Linux/glibc so we want to be fixing
> > that instead. A quick fix is again to use CLONE_IO, though posix aio
> > needs more work than that. I told the qemu guys not to use posix aio a
> > long time ago, since it stinks and doesn't perform well under any
> > circumstances... So I don't consider that a valid use case; there's a
> > reason that basically nobody is using posix aio.
>
> It doesn't help that we never took in patches to the kernel that would
> allow for a usable posix aio implementation, but I digress.
>
> My question to you is how many use cases do we dismiss as broken before
> recognizing that people actually do this, and that we should at least
> try to detect and gracefully deal with it? Is this too much to expect
> from the default I/O scheduler? Sorry to beat a dead horse, but folks
> do view this as a regression, and they won't be changing their
> applications; they'll be switching I/O schedulers to fix this.
Yes, I'm aware of that. If posix aio were in widespread use it would be
an issue, and it's really a shame that it sucks as much as it does. A
single application like dump is worth fixing on its own; if there were
one or two other real cases, I'd say we'd have a good argument for doing
the coop checking.
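For context, the coop checking boils down to: when the current queue
runs out of requests, look for another queue whose next request is
close to the last completed position, and service that instead of
idling. An illustrative sketch only, not the actual CFQ patch; all
names and the threshold are made up:

#include <stdio.h>

#define CLOSE_THR 4096ULL	/* sectors; illustrative threshold */

struct io_queue {
	unsigned long long next_sector;	/* sector of queue's next request */
};

static int is_close(unsigned long long last, unsigned long long next)
{
	unsigned long long dist = next > last ? next - last : last - next;

	return dist <= CLOSE_THR;
}

/* Pick a cooperating queue, if any, given the last completed sector. */
static struct io_queue *find_cooperator(struct io_queue **qs, int nr,
					unsigned long long last_sector)
{
	int i;

	for (i = 0; i < nr; i++)
		if (qs[i] && is_close(last_sector, qs[i]->next_sector))
			return qs[i];
	return NULL;
}

int main(void)
{
	struct io_queue a = { .next_sector = 1024 };	/* close by */
	struct io_queue b = { .next_sector = 9000000 };	/* far away */
	struct io_queue *qs[] = { &a, &b };

	printf("cooperator: %s\n",
	       find_cooperator(qs, 2, 2048) == &a ? "queue a" : "none");
	return 0;
}

The catch is that every queue switch now has to scan the other queues,
which is the complexity (and slowdown) trade-off mentioned above.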
--
Jens Axboe