Message-ID: <c19008c3-512e-92cc-6be1-3ecc24e74341@huaweicloud.com>
Date: Wed, 12 Oct 2022 09:33:46 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Tejun Heo <tj@...nel.org>, Yu Kuai <yukuai1@...weicloud.com>
Cc: axboe@...nel.dk, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next 4/5] blk-iocost: bypass if only one cgroup issues io
Hi, Tejun!
On 2022/10/12 1:02, Tejun Heo wrote:
> On Tue, Oct 11, 2022 at 04:35:46PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@...wei.com>
>>
>> In this special case, there is no need to throttle io.
>>
>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>> ---
>> block/blk-iocost.c | 9 +++++++--
>> 1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/blk-iocost.c b/block/blk-iocost.c
>> index 5acc5f13bbd6..32e7e416d67c 100644
>> --- a/block/blk-iocost.c
>> +++ b/block/blk-iocost.c
>> @@ -2564,8 +2564,13 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
>> bool use_debt, ioc_locked;
>> unsigned long flags;
>>
>> - /* bypass IOs if disabled, still initializing, or for root cgroup */
>> - if (!ioc->enabled || !iocg || !iocg->level)
>> + /*
>> + * bypass IOs if disabled, still initializing, for root cgroup,
>> + * or the cgroup is the only cgroup with io.
>> + */
>> + if (!ioc->enabled || !iocg || !iocg->level ||
>> + (iocg->hweight_inuse == WEIGHT_ONE &&
>> + atomic_read(&ioc->hweight_gen) == iocg->hweight_gen))
>
> I'm not sure about this one. Bypassing here means that we lose track of how
> much IO it's issuing which can affect future throttling decisions, right?
Yes, you're right, this patch doesn't look good in this case.

The reason I tried this is that during testing I found that IO
performance suffers when only one cgroup issues IO (this only happens
in some environments with the default configuration), and it turned out
that each IO is throttled for a while before being dispatched.

Perhaps a suitable configuration can avoid this problem.
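For reference, purely as an illustration of what "a suitable
configuration" could mean (the device number 8:16 and all values below
are made up, not measured), the iocost QoS parameters can be switched
from auto to user control and tuned through the documented io.cost.qos
interface in the root cgroup:

  # illustrative only: switch iocost QoS control for device 8:16 from
  # auto to user, and set explicit latency targets and a vrate range
  echo "8:16 enable=1 ctrl=user rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=400.00" \
      > /sys/fs/cgroup/io.cost.qos

Whether this actually avoids the per-IO throttling delay I saw would of
course depend on the device and workload.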
Thanks,
Kuai
>
> Thanks.
>