Message-ID: <200da9c5-ea20-819e-2b11-7f45e8b05cbb@acm.org>
Date:   Wed, 2 Jun 2021 17:18:51 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     Bean Huo <huobean@...il.com>, Can Guo <cang@...eaurora.org>,
        asutoshd@...eaurora.org, nguyenb@...eaurora.org,
        hongwus@...eaurora.org, linux-scsi@...r.kernel.org,
        kernel-team@...roid.com
Cc:     Stanley Chu <stanley.chu@...iatek.com>,
        Alim Akhtar <alim.akhtar@...sung.com>,
        Avri Altman <avri.altman@....com>,
        "James E.J. Bottomley" <jejb@...ux.ibm.com>,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        Matthias Brugger <matthias.bgg@...il.com>,
        Bean Huo <beanhuo@...ron.com>,
        Jaegeuk Kim <jaegeuk@...nel.org>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Kiwoong Kim <kwmad.kim@...sung.com>,
        Satya Tangirala <satyat@...gle.com>,
        open list <linux-kernel@...r.kernel.org>,
        "moderated list:ARM/Mediatek SoC support" 
        <linux-arm-kernel@...ts.infradead.org>,
        "moderated list:ARM/Mediatek SoC support" 
        <linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH v1 2/3] scsi: ufs: Optimize host lock on transfer requests
 send/compl paths

On 5/31/21 9:04 AM, Bean Huo wrote:
> I did a UFS queue limitation test before and observed that once the
> queue is full, the number of active tasks in the queue drops. For
> NVMe, the scenario is the same. You can refer to slides 23 and 24
> in the PDF:
> https://elinux.org/images/6/6c/Linux_Storage_System_Bottleneck_Exploration_V0.3.pdf
> I don't know if your patch can fix this issue.

Hi Bean,

That's a very interesting presentation. Unfortunately the overhead for
SCSI in that presentation includes the SCSI core plus the UFS driver.
Given the use of the host lock in the version of the UFS driver that was
used to prepare that presentation, the overhead introduced by the UFS
driver may be significant. Maybe someone should measure the overhead of
the scsi_debug driver and compare it with that of the NVMe loopback
driver to get a better idea of how the overhead of these two subsystems
compares?
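
For what it's worth, one way such a comparison could be set up (the
device paths and fio parameters below are placeholders, not anything
from the presentation or this thread): create a scsi_debug device with
delay=0 and an nvme-loop namespace backed by null_blk, then run the
same fio job against both, so that the difference in achieved IOPS
mostly reflects the SCSI core path vs. the NVMe core path rather than
any real hardware:

#!/usr/bin/env python3
# Rough sketch, not from the presentation: run the same fio job against a
# RAM-backed scsi_debug device and an nvme-loop device backed by null_blk,
# so the achieved IOPS mostly reflects the per-request cost of the SCSI
# core vs. the NVMe core rather than any real hardware.
# Assumes fio is installed and that both devices have already been created
# (e.g. modprobe scsi_debug delay=0, and an nvmet loop port backed by
# null_blk).

import json
import subprocess

def fio_iops(dev: str) -> float:
    """Run a short 4k random-read job on `dev` and return IOPS from fio's JSON output."""
    out = subprocess.run(
        ["fio", "--name=cmp", f"--filename={dev}", "--rw=randread",
         "--bs=4k", "--iodepth=32", "--ioengine=libaio", "--direct=1",
         "--runtime=30", "--time_based=1", "--output-format=json"],
        check=True, capture_output=True, text=True)
    job = json.loads(out.stdout)["jobs"][0]
    return job["read"]["iops"]

if __name__ == "__main__":
    # Device paths are placeholders; substitute whatever scsi_debug and
    # nvme-loop instances exist on the test system.
    for name, dev in [("scsi_debug", "/dev/sdb"), ("nvme-loop", "/dev/nvme0n1")]:
        print(f"{name}: {fio_iops(dev):.0f} IOPS")

Sampling CPU utilization (e.g. with mpstat) while each job runs would
additionally give the per-IOP CPU cost of the two paths.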

Bart.
