Message-ID: <4BFB3D66.1080001@cn.fujitsu.com>
Date: Tue, 25 May 2010 11:00:54 +0800
From: Gui Jianfeng <guijianfeng@...fujitsu.com>
To: Vivek Goyal <vgoyal@...hat.com>
CC: Jens Axboe <jens.axboe@...cle.com>,
linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] io-controller: Add new interfaces to trace backlogged
group status
Vivek Goyal wrote:
> On Tue, May 25, 2010 at 09:37:31AM +0800, Gui Jianfeng wrote:
>> Vivek Goyal wrote:
>>> On Mon, May 24, 2010 at 09:12:05AM +0800, Gui Jianfeng wrote:
>>>> Vivek Goyal wrote:
>>>>> On Fri, May 21, 2010 at 04:40:50PM +0800, Gui Jianfeng wrote:
>>>>>> Hi,
>>>>>>
>>>>>> This series implements three new interfaces to keep track of transferred bytes,
>>>>>> elapsed time and IO rate since the group became backlogged. If the group is
>>>>>> dequeued from the service tree, these three interfaces reset and show zero.
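
For reference, the proposed files might be read like this (a sketch; the
io_active_* names follow the ones used later in this thread, while
"io_active_bytes" and the paths are assumptions):

  # Sketch: read the proposed per-group stats. All three are described
  # as resetting when the group is dequeued from the service tree.
  base = "/sys/fs/cgroup/blkio/mygroup/"   # illustrative mount and group
  for name in ("blkio.io_active_bytes", "blkio.io_active_time",
               "blkio.io_active_rate"):
      with open(base + name) as f:
          print(name, f.read().strip())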
>>>>> Hi Gui,
>>>>>
>>>>> Can you give some details on how this functionality is useful? Why
>>>>> would somebody be interested only in stats for the period the group was
>>>>> backlogged, and not in total stats?
>>>>>
>>>>> Groups can come and go so fast, and these stats will reset so many times,
>>>>> that I am not able to see how these stats will be useful.
>>>> Hi Vivek,
>>>>
>>>> Currently, we assign a weight to a group, but the user still doesn't know
>>>> how fast the group runs. With the IO rate interface, users can check the
>>>> rate of a group at any moment, or determine whether the weight assigned to
>>>> a group is enough. The bytes and time interfaces are just for debugging.
>>> Gui,
>>>
>>> I still don't understand why blkio.sectors, blkio.io_service_bytes
>>> or blkio.io_serviced are not good enough to determine at what
>>> rate a group is doing IO.
>>>
>>> I think we can very well write something in userspace, like "iostat", to
>>> display the per-group rate. The utility can read any of the above files,
>>> say at an interval of 1s, calculate the diff between the values and
>>> display that as the group's effective rate.
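
For illustration, a minimal sketch of such a utility (untested; assumes a
cgroup-v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio and a group named
"mygroup", both of which are illustrative):

  import time

  STAT = "/sys/fs/cgroup/blkio/mygroup/blkio.io_service_bytes"

  def total_bytes():
      # io_service_bytes lists per-device Read/Write/Sync/Async/Total
      # entries; the final line is an aggregate "Total <bytes>".
      with open(STAT) as f:
          for line in f:
              fields = line.split()
              if fields and fields[0] == "Total":
                  return int(fields[1])
      return 0

  prev = total_bytes()
  while True:
      time.sleep(1)                      # 1s sampling interval
      cur = total_bytes()
      print("effective rate: %d bytes/s" % (cur - prev))
      prev = cur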
>> Hi Vivek,
>>
>> blkio.io_active_rate reflects the rate since the group became backlogged, so
>> the rate is a smooth value. This value represents the actual rate at which a
>> group runs. IMO, an IO rate calculated from userspace is not accurate in the
>> following two scenarios:
>>
>> 1 The userspace app chooses an interval of 1s; if 0.5s of it is backlogged
>> and 0.5s is not, the rate calculated over this interval doesn't make sense.
>>
>
> If you are not servicing groups for a long time, that is very bad for
> latency anyway. That's why CFQ's soft limit of 300ms makes sense, and in
> practice I am not sure you will be blocking groups for 0.5s.
>
> Even if you do, the user just needs to choose a bigger interval and you
> will see smoother rates. Reduce the interval and you might see a slightly
> bursty rate.
Vivek,

IIUC, the biggest problem for a userspace app is that it doesn't know how long
the group was dequeued during the interval. For example, the user chooses a
10s interval, 8s of which is not backlogged, but when the app calculates the
IO rate, these 8s are still included. So this rate isn't what we want. Am I
missing something?

"io_active_rate" never takes un-backlogged time into account when calculating
the IO rate.
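
To make that concrete with assumed numbers (the 100 MiB figure is made up
purely for illustration):

  MiB = 1024 * 1024
  bytes_moved = 100 * MiB   # assume 100 MiB moved while backlogged
  window = 10.0             # userspace sampling interval, seconds
  backlogged = 2.0          # time actually spent backlogged, seconds

  print(bytes_moved / window)       # userspace estimate:  ~10 MiB/s
  print(bytes_moved / backlogged)   # io_active_rate view: ~50 MiB/s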
>
> And why do you say that "io_active_rate" is a smooth interface? IIUC, the
> value of the group's rate will vary depending on when I read the file.
"io_active_rate" always shows backlogged io rate. Maybe when io_active_time
is very small, don't need to calculate "io_active_rate".
Thanks
Gui
>
> Assume a group gets serviced for 30ms and then is put back in the queue
> and is serviced again after 50ms. If I read "io_active_rate"
> immediately after the group has been serviced, I should see a high rate
> value, and if I read the same file after another 30ms, I would see a
> reduced rate.
>
> The point being that to get a better idea of a group's average rate, we need
> to observe bytes transferred over a somewhat longer period. If you sample
> bytes transferred from a group over a very short interval, then you can
> expect bursty output. There is no way to avoid that.
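
With assumed numbers for that scenario (the 3 MB byte count is made up):

  moved = 3 * 1000 * 1000      # bytes moved during the 30ms slice (assumed)
  print(moved / 0.030)         # read right after service:        ~100 MB/s
  print(moved / 0.060)         # read another 30ms later, no IO:   ~50 MB/s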
>
>> 2 Consider that several groups are waiting for service, but most of the
>> interval falls into the period when the group is being serviced. The rate
>> calculated by a userspace app isn't accurate; a rate burst might occur.
>
> Actually, I think the whole notion of relying on CFQ's time calculations
> is not very good. These are very approximate time calculations. There are
> many situations where calculating the time is not possible and we
> approximate slice_used to 1ms. So relying on that time for rate
> calculation is much more inaccurate.
>
> Hence I think calculating a group's rate in userspace makes much more
> sense.
>
>> Furthermore, once max weight control is available, we can make use of such
>> an interface to see how well the group is doing.
>
> Again, I don't understand: with a max BW controller, why can't we monitor
> the group's BW accurately in userspace?
>
> Vivek
>
>