Date:   Tue, 3 Aug 2021 11:38:44 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     "yukuai (C)" <yukuai3@...wei.com>, axboe@...nel.dk,
        ming.lei@...hat.com
Cc:     linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yi.zhang@...wei.com
Subject: Re: [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set

On 8/2/21 7:57 PM, yukuai (C) wrote:
> The CPU I'm testing on is an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz,
> and after switching to io_uring with "--thread --gtod_reduce=1
> --ioscheduler=none", the numbers increase to 330k IOPS, which is
> still far behind 6000k.

On 
https://ark.intel.com/content/www/us/en/ark/products/120485/intel-xeon-gold-6140-processor-24-75m-cache-2-30-ghz.html 
I found the following information about that CPU:
18 CPU cores
36 hyperthreads

So 36 fio jobs should be sufficient. Maybe IOPS are lower than expected 
because of how null_blk has been configured? This is the configuration 
that I used in my test:

modprobe null_blk nr_devices=0 &&      # load the driver, create no devices yet
     udevadm settle &&
     cd /sys/kernel/config/nullb &&    # configure one device via configfs
     mkdir nullb0 &&
     cd nullb0 &&
     echo 0 > completion_nsec &&       # complete I/O with zero added latency
     echo 512 > blocksize &&           # 512-byte logical blocks
     echo 0 > home_node &&             # NUMA home node 0
     echo 0 > irqmode &&               # 0 = complete in submit context
     echo 1024 > size &&               # device size in MB
     echo 0 > memory_backed &&         # do not store written data
     echo 2 > queue_mode &&            # 2 = multi-queue (blk-mq)
     echo 1 > power ||                 # bring the device online
     exit $?
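
For reference, an fio command line combining the flags quoted above
with this null_blk device could look as follows (a sketch only: the
block size, queue depth, job count and runtime are assumptions, not
the parameters used in either of our tests):

fio --name=nullb-randread --filename=/dev/nullb0 \
    --ioengine=io_uring --rw=randread --direct=1 \
    --thread --gtod_reduce=1 --ioscheduler=none \
    --bs=512 --iodepth=64 --numjobs=36 \
    --runtime=30 --time_based --group_reporting

With 36 hyperthreads, --numjobs=36 runs one job per hardware thread.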

> The new atomic operation in the hot path is the atomic_read() in
> hctx_may_queue(), and the atomic variable changes in two
> situations:
> 
> a. on failing to get a driver tag with dbusy not set, increase it
> and set dbusy.
> b. if dbusy is set when the queue switches from busy to idle,
> decrease it and clear dbusy.
> 
> During the period a device goes "idle -> busy -> idle", the new
> atomic variable is written at most twice, which means it is almost
> read-only in the above test situation. So I guess the impact on
> performance is minimal?

Please measure the performance impact of your patch.
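
One possible approach (a sketch; the symbols grepped for below are an
assumption on my side, and hctx_may_queue() may be inlined into its
callers): run the same workload on kernels with and without the
patch, compare the reported IOPS, and profile the tag allocation path:

perf record -g -- fio --name=impact --filename=/dev/nullb0 \
    --ioengine=io_uring --rw=randread --direct=1 \
    --thread --gtod_reduce=1 --ioscheduler=none \
    --bs=512 --iodepth=64 --numjobs=36 \
    --runtime=30 --time_based --group_reporting
perf report --stdio | grep -iE 'blk_mq_get_driver_tag|blk_mq_get_tag'

A delta in IOPS between the two kernels, or extra cycles in the tag
allocation symbols, would quantify the cost of the added atomic_read().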

Thanks,

Bart.
