Date:   Fri, 27 Apr 2018 10:40:40 +0800
From:   Joseph Qi <jiangqi903@...il.com>
To:     "jianchao.wang" <jianchao.w.wang@...cle.com>,
        Tejun Heo <tj@...nel.org>
Cc:     Paolo Valente <paolo.valente@...aro.org>,
        linux-block <linux-block@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Shaohua Li <shli@...com>,
        Mark Brown <broonie@...nel.org>,
        Linus Walleij <linus.walleij@...aro.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: testing io.low limit for blk-throttle

Hi Jianchao,

On 18/4/27 10:09, jianchao.wang wrote:
> Hi Tejun and Joseph
> 
> On 04/27/2018 02:32 AM, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Apr 24, 2018 at 02:12:51PM +0200, Paolo Valente wrote:
>>> +Tejun (I guess he might be interested in the results below)
>>
>> Our experiments didn't work out too well either.  At this point, it
>> isn't clear whether io.low will ever leave experimental state.  We're
>> trying to find a working solution.
> 
> Would you please take a look at the following two patches?
> 
> https://marc.info/?l=linux-block&m=152325456307423&w=2
> https://marc.info/?l=linux-block&m=152325457607425&w=2
> 
> In addition, when I tested blk-throtl io.low on my NVMe card, I always
> saw that even if the iops had been lower than the io.low limit for a
> while, the downgrade still failed, because the check below keeps judging
> the group as idle (and the downgrade requires a non-idle group):
> 
>        tg->latency_target && tg->bio_cnt &&
> 		tg->bad_bio_cnt * 5 < tg->bio_cnt
> 

I'm afraid the latency check is a must for io.low, because in my tests
the idle time check alone only covers simple scenarios.
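
To spell out the coupling: as I read the current blk-throttle code, the
downgrade back to the io.low state only happens for a group that is NOT
idle, and the latency clause quoted above is one of the ways
throtl_tg_is_idle() marks a group idle. So as long as latency looks
good, the group counts as idle and throtl_tg_can_downgrade() keeps
failing. Here is a small standalone C model of that interaction;
tg_model and the two helpers below are simplified illustrative names,
not the kernel code itself, and the real downgrade check also looks at
last_low_overflow_time and the last upgrade time:

/*
 * Simplified userspace model of the idle/downgrade coupling in
 * blk-throttle (a paraphrase, not the kernel source; the field names
 * mirror struct throtl_grp).
 */
#include <stdbool.h>
#include <stdio.h>

struct tg_model {
	unsigned long latency_target;	/* io.low latency= setting */
	unsigned long bio_cnt;		/* bios sampled in the window */
	unsigned long bad_bio_cnt;	/* bios that missed the target */
	bool other_idle_hints;		/* think-time/idle-time clauses */
};

static bool tg_is_idle(const struct tg_model *tg)
{
	/* The quoted clause: fewer than 20% bad bios => idle. */
	return tg->other_idle_hints ||
	       (tg->latency_target && tg->bio_cnt &&
		tg->bad_bio_cnt * 5 < tg->bio_cnt);
}

static bool tg_can_downgrade(const struct tg_model *tg)
{
	/* Downgrading back to io.low needs a NOT-idle group. */
	return !tg_is_idle(tg);
}

int main(void)
{
	/*
	 * 10% of bios miss the latency target: the group counts as
	 * idle, so the downgrade fails, matching the report above.
	 */
	struct tg_model tg = {
		.latency_target = 10, .bio_cnt = 1000,
		.bad_bio_cnt = 100, .other_idle_hints = false,
	};

	printf("idle=%d can_downgrade=%d\n",
	       tg_is_idle(&tg), tg_can_downgrade(&tg));
	return 0;
}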

Yes, in some cases last_low_overflow_time does have problems.
As for the downgrade not happening properly, I've also posted two patches
before, which are waiting for Shaohua's review. You can give them a try
as well.

https://patchwork.kernel.org/patch/10177185/
https://patchwork.kernel.org/patch/10177187/

Thanks,
Joseph

> The latency always looks good even when the sum of the two groups'
> iops has reached the top. So I disabled this check in my test; with
> that, plus the 2 patches above, io.low basically works.
> 
> My NVMe card's max bps is ~600M, and max iops is ~160k.
> Here is my config:
> io.low riops=50000 wiops=50000 rbps=209715200 wbps=209715200 idle=200 latency=10
> io.max riops=150000
> There are two cgroups in my test, and both of them have the same config.
> 
> In addition, I say "basically works" because the iops of the two cgroups
> jump up and down. For example, when I launched one fio test per cgroup,
> the iops waved as follows:
> 
> group0   30k  50k   70k   60k  40k
> group1   120k 100k  80k   90k  110k
> 
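
(If I read the numbers right, every column above sums to ~150k:
30k + 120k = 50k + 100k = 70k + 80k = 60k + 90k = 40k + 110k = 150k.
That is close to your card's ~160k max, so the device looks saturated
the whole time, and it is only the split between the two groups that
oscillates.)
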
> However, if I launched the two fio tests in only one cgroup, the iops
> of the two tests stayed at about 70k~80k.
> 
> Could you help to explain this scenario?
> 
> Thanks in advance
> Jianchao
> 
