Date:   Tue, 16 Aug 2022 14:57:07 +0800
From:   Oliver Sang <oliver.sang@...el.com>
To:     John Garry <john.garry@...wei.com>
CC:     Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
        Christoph Hellwig <hch@....de>,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        <linux-ide@...r.kernel.org>, <lkp@...ts.01.org>, <lkp@...el.com>,
        <ying.huang@...el.com>, <feng.tang@...el.com>,
        <zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0%
 regression

Hi John,

On Fri, Aug 12, 2022 at 03:58:14PM +0100, John Garry wrote:
> On 12/08/2022 12:13, John Garry wrote:
> > > On Tue, Aug 09, 2022 at 07:55:53AM -0700, Damien Le Moal wrote:
> > > > On 2022/08/09 2:58, John Garry wrote:
> > > > > On 08/08/2022 15:52, Damien Le Moal wrote:
> > > > > > On 2022/08/05 1:05, kernel test robot wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > Greeting,
> > > > > > > 
> > > > > > > FYI, we noticed a -15.0% regression of
> > > > > > > stress-ng.copy-file.ops_per_sec due to commit:
> > > > > > > 
> > > > > > > 
> > > > > > > commit: 0568e6122574dcc1aded2979cd0245038efe22b6
> > > > > > > ("ata: libata-scsi: cap ata_device->max_sectors
> > > > > > > according to shost->max_sectors")
> > > > > > > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git
> > > > > > > master
> > > > > > > 
> > > > > > > in testcase: stress-ng
> > > > > > > on test machine: 96 threads 2 sockets Ice Lake with 256G memory
> > > > > > > with following parameters:
> > > > > > > 
> > > > > > >     nr_threads: 10%
> > > > > > >     disk: 1HDD
> > > > > > >     testtime: 60s
> > > > > > >     fs: f2fs
> > > > > > >     class: filesystem
> > > > > > >     test: copy-file
> > > > > > >     cpufreq_governor: performance
> > > > > > >     ucode: 0xb000280
> > > > > > 
> > > > > > Without knowing what the device adapter is, hard to say where the
> > > > > > problem is. I suspect that with the patch applied, we may be ending
> > > > > > up with a small default max_sectors value, causing overhead due to
> > > > > > more commands than necessary.
> > > > > > 
> > > > > > Will check what I see with my test rig.
> > > > > 
> > > > > As far as I can see, this patch should not make a difference unless the
> > > > > ATA shost driver is setting the max_sectors value unnecessarily low.
> > > > 
> > > > That is my hunch too, hence my question about which host driver is
> > > > being used for this test... That is not apparent from the problem
> > > > report.
> > > 
> > > we noticed the commit is already in mainline now, and in our tests
> > > there is still a similar regression, also on other platforms.
> > > Could you guide us on how to check "which host driver is being used
> > > for this test"? We hope to supply some useful information.
> > > 
> > 
> > For me, a complete kernel log may help.
> 
> and since there is only 1 HDD, the output of the following would be helpful:
> 
> /sys/block/sda/queue/max_sectors_kb
> /sys/block/sda/queue/max_hw_sectors_kb
> 
> And for 5.19, if possible.
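
To answer the "which host driver" question above, one way is to walk sysfs
from the block device back to its SCSI host. A sketch (the device name "sda"
and the PCI/ATA sysfs layout are assumptions for illustration, not taken from
the report):

```shell
# Sketch: find which SCSI/ATA host driver sits behind a disk.
# The device name "sda" is an assumption for illustration.
dev=sda

# Resolve the sysfs path of the device and pull out its SCSI host name:
host=$(readlink -f /sys/block/$dev/device | grep -o 'host[0-9]*' | head -1)

# proc_name is the low-level driver registered with the SCSI midlayer
# (e.g. "ahci" for the common AHCI SATA driver):
cat /sys/class/scsi_host/$host/proc_name

# The queue limits discussed in this thread, in KB:
cat /sys/block/$dev/queue/max_sectors_kb
cat /sys/block/$dev/queue/max_hw_sectors_kb
```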

for commit
0568e61225 ("ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors")

root@...-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
512
root@...-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
512

for both commit
4cbfca5f77 ("scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit")
and v5.19

root@...-icl-2sp1 ~# cat /sys/block/sda/queue/max_sectors_kb
1280
root@...-icl-2sp1 ~# cat /sys/block/sda/queue/max_hw_sectors_kb
32767
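
The numbers above already suggest the mechanism: with the capping commit,
max_sectors_kb drops from 1280 to 512, so the same sequential I/O is split
into roughly 2.5x as many commands. A rough, illustrative calculation (the
1 GiB transfer size is a made-up example, not from the report):

```python
# Sketch: command-count arithmetic for the two max_sectors_kb values seen
# above (1280 KB on v5.19 vs 512 KB with the capping commit applied).

def commands_needed(total_kb: int, max_sectors_kb: int) -> int:
    """Number of maximally-sized requests needed to transfer total_kb."""
    return -(-total_kb // max_sectors_kb)  # ceiling division

total = 1024 * 1024  # hypothetical 1 GiB sequential transfer, in KB

print(commands_needed(total, 1280))  # 820 commands on v5.19
print(commands_needed(total, 512))   # 2048 commands with the cap
```

More commands for the same data means more per-command overhead, which is
consistent with the ops_per_sec regression being reported.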

> 
> Thanks!
> 
