Message-ID: <Y4F3XG3lMCCKlLAr@T590>
Date:   Sat, 26 Nov 2022 10:18:04 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Yu Kuai <yukuai1@...weicloud.com>
Cc:     John Garry <john.g.garry@...cle.com>, kashyap.desai@...adcom.com,
        sumit.saxena@...adcom.com, shivasharan.srikanteshwara@...adcom.com,
        jejb@...ux.ibm.com, martin.petersen@...cle.com,
        megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        linux-block <linux-block@...r.kernel.org>,
        "zhangyi (F)" <yi.zhang@...wei.com>,
        "yukuai (C)" <yukuai3@...wei.com>, ming.lei@...hat.com
Subject: Re: Why is MEGASAS_SAS_QD set to 256?

On Sat, Nov 26, 2022 at 09:15:53AM +0800, Yu Kuai wrote:
> Hi,
> 
> > On 2022/11/25 20:33, John Garry wrote:
> > On 24/11/2022 03:45, Yu Kuai wrote:
> > > Hi,
> > > 
> > > While upgrading the kernel from 4.19 to 5.10, I found that fio single-thread
> > > 4k sequential IO performance dropped (160 MiB/s -> 100 MiB/s). The root cause
> > > is that queue_depth changed from 64 to 256.
> > > 
> > > commit 6e73550670ed1c07779706bb6cf61b99c871fc42
> > > scsi: megaraid_sas: Update optimal queue depth for SAS and NVMe devices
> > > 
> > > diff --git a/drivers/scsi/megaraid/megaraid_sas.h
> > > b/drivers/scsi/megaraid/megaraid_sas.h
> > > index bd8184072bed..ddfbe6f6667a 100644
> > > --- a/drivers/scsi/megaraid/megaraid_sas.h
> > > +++ b/drivers/scsi/megaraid/megaraid_sas.h
> > > @@ -2233,9 +2233,9 @@ enum MR_PD_TYPE {
> > > 
> > >   /* JBOD Queue depth definitions */
> > >   #define MEGASAS_SATA_QD        32
> > > -#define MEGASAS_SAS_QD 64
> > > +#define MEGASAS_SAS_QD 256
> > >   #define MEGASAS_DEFAULT_PD_QD  64
> > > -#define MEGASAS_NVME_QD                32
> > > +#define MEGASAS_NVME_QD        64
> > > 
> > > 
> > > And with the default nr_requests of 256, a queue_depth of 256 means the
> > > elevator has no effect; specifically, IO can't be merged in this test
> > > case. Hence it doesn't make sense to me to set the default queue_depth to
> > > 256.
> > > 
> > > Is there any reason why MEGASAS_SAS_QD was changed to 256?
> > > 
> > > Thanks,
> > > Kuai
> > > 
> > 
> > Which type of drive do you use?
> 
> SAS SSDs
> 
> BTW, I also tested with NVMe; the default elevator is deadline, the
> queue_depth seems too small, and performance is far from optimal.
> 
> Current default values don't seem good to me... 😒

If you want aggressive merging on sequential IO workloads, the queue depth needs
to be a bit lower, so that more requests can be held in the scheduler queue
and the chance of merging is increased.
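
As a concrete illustration of that (and of the nr_requests interaction noted
above), here is a minimal sketch that inspects the two sysfs knobs involved and
lowers the per-device queue depth below nr_requests, so requests actually sit
in the scheduler long enough to be merged. The device name "sdb" and the target
depth of 64 are only assumptions for illustration, and writing queue_depth
needs root:

from pathlib import Path

DEV = "sdb"                 # assumed device name; use the SAS SSD under test
NEW_QUEUE_DEPTH = 64        # assumed target depth, kept below nr_requests

qd_path = Path(f"/sys/block/{DEV}/device/queue_depth")
nr_path = Path(f"/sys/block/{DEV}/queue/nr_requests")

queue_depth = int(qd_path.read_text())
nr_requests = int(nr_path.read_text())
print(f"queue_depth={queue_depth} nr_requests={nr_requests}")

# If the device accepts as many in-flight requests as the scheduler can hold,
# requests are dispatched immediately and rarely wait long enough to be merged.
if queue_depth >= nr_requests:
    qd_path.write_text(str(NEW_QUEUE_DEPTH))
    print(f"lowered queue_depth to {NEW_QUEUE_DEPTH}")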

If you want good performance on random IO, the queue depth needs to
be deep enough to provide enough parallelism to saturate the SSD internally.

But we don't detect sequential vs. random IO patterns, and usually a fixed
queue depth is used.
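
One way to tell which regime you are in is to watch the block-layer merge
counters while the workload runs: if sequential IO is dispatched straight to
the device without merging, the merged counts barely move. A small sketch,
again assuming the device name, samples the reads-merged/writes-merged fields
of /proc/diskstats around a test window:

import time

DEV = "sdb"                 # assumed device name

def merge_counts(dev):
    # /proc/diskstats: field 5 is reads merged, field 9 is writes merged (1-based)
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[4]), int(fields[8])
    raise ValueError(f"{dev} not found in /proc/diskstats")

before = merge_counts(DEV)
time.sleep(10)              # run the fio job during this window
after = merge_counts(DEV)
print(f"reads merged: {after[0] - before[0]}, writes merged: {after[1] - before[1]}")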

Thanks,
Ming
