lists.openwall.net — Open Source and information security mailing list archives
Message-ID: <3640cd7f-f32a-1509-dbef-6000b6e14e75@linux.alibaba.com>
Date:   Thu, 7 Mar 2019 23:06:47 +0800
From:   Shile Zhang <shile.zhang@...ux.alibaba.com>
To:     Coly Li <colyli@...e.de>
Cc:     Kent Overstreet <kent.overstreet@...il.com>,
        linux-bcache@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] bcache: add cond_resched() in __bch_cache_cmp()


On 2019/3/7 18:34, Coly Li wrote:
> On 2019/3/7 1:15 PM, shile.zhang@...ux.alibaba.com wrote:
>> From: Shile Zhang <shile.zhang@...ux.alibaba.com>
>>
>> Read /sys/fs/bcache/<uuid>/cacheN/priority_stats can take very long
>> time with huge cache after long run.
>>
>> Signed-off-by: Shile Zhang <shile.zhang@...ux.alibaba.com>
> Hi Shile,
>
> Did you test your change? More performance data would be helpful
> (what problem you improved, and by how much).

With a 960GB SSD cache device, a single read of 'priority_stats' takes
about 600ms in our test environment.

perf shows that nearly 50% of the CPU time is consumed by sort(), which
means a single sort holds the CPU for nearly 300ms.

In our case, a statistics collector reads 'priority_stats' periodically,
which causes scheduling-latency jitter for tasks sharing the same CPU core.

>
> Thanks.
>
> Coly Li
>
>> ---
>>   drivers/md/bcache/sysfs.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
>> index 557a8a3..028fea1 100644
>> --- a/drivers/md/bcache/sysfs.c
>> +++ b/drivers/md/bcache/sysfs.c
>> @@ -897,6 +897,7 @@ static void bch_cache_set_internal_release(struct kobject *k)
>>   
>>   static int __bch_cache_cmp(const void *l, const void *r)
>>   {
>> +	cond_resched();
>>   	return *((uint16_t *)r) - *((uint16_t *)l);
>>   }
>>   
>>
>
