Message-ID: <x49skphdtqm.fsf@segfault.boston.devel.redhat.com>
Date: Mon, 24 Nov 2008 10:33:05 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: "Vitaly V. Bursov" <vitalyb@...enet.dn.ua>,
linux-kernel@...r.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases

Jens Axboe <jens.axboe@...cle.com> writes:

> nfsd aside (which does seem to have some different behaviour skewing the
> results), the original patch came about because dump(8) has a really
> stupid design that offloads IO to a number of processes. This basically
> makes fairly sequential IO more random with CFQ, since each process gets
> its own io context. My feeling is that we should fix dump instead of
> introducing a fair bit of complexity (and slowdown) in CFQ. I'm not
> aware of any other good programs out there that would do something
> similar, so I don't think there's a lot of merit in spending cycles on
> detecting cooperating processes.
>
> Jeff will take a look at fixing dump instead, and I may have promised
> him that santa will bring him something nice this year if he does (since
> I'm sure it'll be painful on the eyes).

Sorry to dredge this topic up once again, but we've recently run into
another instance where the close cooperator patch helps significantly.
The case is KVM using the virtio disk driver. The host side uses
POSIX AIO calls to issue I/O on behalf of the guest. It's worth noting
that pthread_create does not pass CLONE_IO (at least, that was my reading
of the code). It is questionable whether it really should, as that would
change the I/O scheduling dynamics.
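
For the archives, here's a rough sketch of what sharing the io context at
clone time looks like. This is my own toy example, not code from qemu or
glibc; it assumes a kernel new enough to know about CLONE_IO (2.6.25 or
later) and headers that define it (otherwise grab the value from
<linux/sched.h>):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* The worker would issue its share of the I/O here. Because it was
 * cloned with CLONE_IO, CFQ sees it and its parent as a single io
 * context rather than as two unrelated processes. */
static int io_worker(void *arg)
{
        (void)arg;
        return 0;
}

int main(void)
{
        char *stack = malloc(STACK_SIZE);

        if (!stack) {
                perror("malloc");
                return 1;
        }

        /*
         * Share the address space, fs info and file table with the
         * parent, plus CLONE_IO so both tasks use the same io context.
         * pthread_create passes the first three but not CLONE_IO.
         */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_IO | SIGCHLD;
        pid_t pid = clone(io_worker, stack + STACK_SIZE, flags, NULL);

        if (pid == -1) {
                perror("clone");
                return 1;
        }

        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
}

The point is just that the flag has to be passed explicitly; an aio helper
thread created through pthread_create gets its own io context, which is
why CFQ treats its requests as coming from a separate, unrelated process.
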
So, Jens, what do you think? Should we collect some performance numbers
to make sure that the close cooperator patch doesn't hurt the common
case?
Cheers,
Jeff
--