Message-ID: <x49iq2odv6p.fsf@segfault.boston.devel.redhat.com>
Date: Thu, 02 Sep 2010 11:25:18 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Doug Neal <dneallkml@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: I/O scheduler deadlocks on Xen virtual block devices
Doug Neal <dneallkml@...il.com> writes:
>>>>
>>>> Did you try these different I/O schedulers in the domU or on the dom0?
>>>> Does switching I/O schedulers in either place make the problem go away
>>>> when it happens?
>>>>
>>>
>>> In the domU, and the bug was present in all cases. The dom0 was using
>>> cfq. I'll run the tests again using each scheduler in the dom0 with
>>> domU set to noop and report back.
>>
>> While I think this is an interesting test, you need only test one I/O
>> scheduler in the dom0. Also, I think you misunderstood the second
>> question. I'd like to know if switching I/O schedulers while the system
>> is in this bad state helps get I/O going again.
>>
> Jeff,
>
> Issue confirmed in the following cases:
>
> dom0:domU
> noop:noop
> noop:deadline
> noop:anticipatory
> cfq:noop
> cfq:deadline
> cfq:anticipatory
> cfq:cfq
>
> When I try switching schedulers in the domU during the lockup, the
> echo x > /sys/block/xvda/queue/scheduler command itself hangs.
> Switching in the dom0 has no effect.
OK.  So, since the problem reproduces with every scheduler combination,
the I/O is not getting hung up in the I/O scheduler itself.  The fact
that the echo hangs as well fits with that: a scheduler switch has to
wait for outstanding requests to drain, so those requests must be stuck
somewhere below the elevator.  I don't have a whole lot of experience
with the Xen I/O stack, but that's where I'd start digging (especially
if there's nothing in the logs about commands timing out).
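
For what it's worth, if you want to confirm where the requests are
stuck, something like the following from inside the domU is a
reasonable first look (a rough sketch; xvda is just an example device
name, and sysrq may need to be enabled first):

    # how many requests the block layer thinks are still outstanding
    cat /sys/block/xvda/inflight

    # dump stack traces of all tasks in uninterruptible sleep
    echo 1 > /proc/sys/kernel/sysrq
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 50

If the blocked tasks are all sitting in the blkfront/blkback path,
that would at least narrow it down to the Xen ring rather than the
block layer.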
I hope this was a somewhat helpful exercise. Sorry I can't be of more
help.
Cheers!
Jeff