Message-ID: <88d9e8f4-cc5a-4897-f511-8f5b7d9a0acd@opensource.wdc.com>
Date:   Mon, 25 Apr 2022 20:20:15 +0900
From:   Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To:     "yukuai (C)" <yukuai3@...wei.com>, axboe@...nel.dk,
        bvanassche@....org, andriy.shevchenko@...ux.intel.com,
        john.garry@...wei.com, ming.lei@...hat.com, qiulaibin@...wei.com
Cc:     linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yi.zhang@...wei.com
Subject: Re: [PATCH -next RFC v3 0/8] improve tag allocation under heavy load

On 4/25/22 16:28, yukuai (C) wrote:
> On 2022/04/25 15:06, Damien Le Moal wrote:
> 
>>>> By the way, did you check that doing something like:
>>>>
>>>> echo 2048 > /sys/block/sdX/queue/nr_requests
>>>>
>>>> improves performance for your high number of jobs test case ?
>>>
>>> Yes, performance will not degrade when numjobs is not greater than 256
>>> in this case.
>>
>> That is my thinking as well. I am asking whether you actually checked that (did you run it?).
> 
> Hi,
> 
> I'm sure I ran it with 256 jobs before.
> 
> However, I didn't run it with 512 jobs. The following is the result I
> just tested:

What was nr_requests? The default of 64?
If you increase that number, do you see better throughput / more requests
being sequential?
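
Something along these lines would be a quick check (sdX and the fio options
below are only placeholders, not your exact job file, so adjust them to your
setup):

  # check the current value first
  cat /sys/block/sdX/queue/nr_requests

  # raise it well above the number of jobs
  echo 2048 > /sys/block/sdX/queue/nr_requests

  # re-run the same workload with the high job count, e.g.:
  fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=4k \
      --ioengine=libaio --iodepth=1 --numjobs=512 --group_reporting \
      --runtime=60 --time_based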


> 
> ratio of sequential io: 49.1%
> 
> Read|Write seek
> 
> cnt 99338, zero cnt 48753
> 
>      >=(KB) .. <(KB)     : count       ratio |distribution                            |
>           0 .. 1         : 48753       49.1% |########################################|
>           1 .. 2         : 0            0.0% |                                        |
>           2 .. 4         : 0            0.0% |                                        |
>           4 .. 8         : 0            0.0% |                                        |
>           8 .. 16        : 0            0.0% |                                        |
>          16 .. 32        : 0            0.0% |                                        |
>          32 .. 64        : 0            0.0% |                                        |
>          64 .. 128       : 4975         5.0% |#####                                   |
>         128 .. 256       : 4439         4.5% |####                                    |
>         256 .. 512       : 2615         2.6% |###                                     |
>         512 .. 1024      : 967          1.0% |#                                       |
>        1024 .. 2048      : 213          0.2% |#                                       |
>        2048 .. 4096      : 375          0.4% |#                                       |
>        4096 .. 8192      : 723          0.7% |#                                       |
>        8192 .. 16384     : 1436         1.4% |##                                      |
>       16384 .. 32768     : 2626         2.6% |###                                     |
>       32768 .. 65536     : 4197         4.2% |####                                    |
>       65536 .. 131072    : 6431         6.5% |######                                  |
>      131072 .. 262144    : 7590         7.6% |#######                                 |
>      262144 .. 524288    : 6433         6.5% |######                                  |
>      524288 .. 1048576   : 4583         4.6% |####                                    |
>     1048576 .. 2097152   : 2237         2.3% |##                                      |
>     2097152 .. 4194304   : 489          0.5% |#                                       |
>     4194304 .. 8388608   : 83           0.1% |#                                       |
>     8388608 .. 16777216  : 36           0.0% |#                                       |
>    16777216 .. 33554432  : 0            0.0% |                                        |
>    33554432 .. 67108864  : 0            0.0% |                                        |
>    67108864 .. 134217728 : 137          0.1% |#                                       |
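
(If I am reading these numbers right, the 49.1% sequential figure is simply
the zero-seek bucket over the total number of seeks sampled:
48753 / 99338 ~= 49.1%.)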


-- 
Damien Le Moal
Western Digital Research
