Date: Mon, 20 May 2024 18:42:30 +0800
From: Keguang Zhang <keguang.zhang@...il.com>
To: Miquel Raynal <miquel.raynal@...tlin.com>
Cc: Keguang Zhang via B4 Relay <devnull+keguang.zhang.gmail.com@...nel.org>, 
	Richard Weinberger <richard@....at>, Vignesh Raghavendra <vigneshr@...com>, Rob Herring <robh@...nel.org>, 
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>, Conor Dooley <conor+dt@...nel.org>, 
	linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	linux-mips@...r.kernel.org, devicetree@...r.kernel.org
Subject: Re: [PATCH v7 2/3] mtd: rawnand: Enable monolithic read when reading subpages

On Mon, May 6, 2024 at 3:17 PM Miquel Raynal <miquel.raynal@...tlin.com> wrote:
>
> Hi,
>
> devnull+keguang.zhang.gmail.com@...nel.org wrote on Tue, 30 Apr 2024
> 19:11:11 +0800:
>
> > From: Keguang Zhang <keguang.zhang@...il.com>
> >
> > nand_read_subpage() reads the data and the ECC bytes in two
> > separate operations.
> > This patch allows NAND controllers that support monolithic page
> > reads to perform a subpage read in a single operation, which is
> > more efficient than the two-step nand_read_subpage() path.
>
> I am a bit puzzled by this change. Usually nand_read_subpage() is used
> as an optimization (when less data than a full page must be retrieved).
> I know it may be used in other cases (because it's easier for the core
> to support a wide range of controllers this way). Can you please share
> a speed test showing the results before I consider merging this patch?
>
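For context, the difference boils down to issuing a single full-page
read instead of a data read followed by a read-column change. Below is
a minimal illustrative sketch, not the actual patch:
nand_read_page_op() and nand_change_read_column_op() are the real
rawnand core helpers, while monolithic_read_supported() is a
hypothetical stand-in for the controller capability check.

#include <linux/mtd/rawnand.h>
#include <linux/string.h>

/* Hypothetical: stands in for however the controller advertises
 * single-operation (monolithic) page reads. */
static bool monolithic_read_supported(struct nand_chip *chip);

static int read_subpage_sketch(struct nand_chip *chip, unsigned int page,
			       unsigned int data_offs, unsigned int data_len,
			       u8 *databuf, unsigned int ecc_offs,
			       unsigned int ecc_len, u8 *eccbuf)
{
	struct mtd_info *mtd = nand_to_mtd(chip);
	int ret;

	if (monolithic_read_supported(chip)) {
		/* One operation: fetch the whole page plus OOB in a
		 * single READ0 cycle, then copy out the fragments. */
		ret = nand_read_page_op(chip, page, 0, chip->data_buf,
					mtd->writesize + mtd->oobsize);
		if (ret)
			return ret;
		memcpy(databuf, chip->data_buf + data_offs, data_len);
		memcpy(eccbuf, chip->data_buf + mtd->writesize + ecc_offs,
		       ecc_len);
		return 0;
	}

	/* Two operations: read the data fragment first... */
	ret = nand_read_page_op(chip, page, data_offs, databuf, data_len);
	if (ret)
		return ret;

	/* ...then move the read column into the OOB area to fetch the
	 * matching ECC bytes. */
	return nand_change_read_column_op(chip, mtd->writesize + ecc_offs,
					  eccbuf, ecc_len, false);
}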
With this patch:
# flash_speed -c 128 -d /dev/mtd1
scanning for bad eraseblocks
scanned 128 eraseblocks, 0 are bad
testing eraseblock write speed
eraseblock write speed is 2112 KiB/s
testing eraseblock read speed
eraseblock read speed is 3454 KiB/s
testing page write speed
page write speed is 1915 KiB/s
testing page read speed
page read speed is 2999 KiB/s
testing 2 page write speed
2 page write speed is 2000 KiB/s
testing 2 page read speed
2 page read speed is 3207 KiB/s
Testing erase speed
erase speed is 72495 KiB/s
Testing 2x multi-block erase speed
2x multi-block erase speed is 74135 KiB/s
Testing 4x multi-block erase speed
4x multi-block erase speed is 74812 KiB/s
Testing 8x multi-block erase speed
8x multi-block erase speed is 75502 KiB/s
Testing 16x multi-block erase speed
16x multi-block erase speed is 75851 KiB/s
Testing 32x multi-block erase speed
32x multi-block erase speed is 75851 KiB/s
Testing 64x multi-block erase speed
64x multi-block erase speed is 76204 KiB/s
finished

Without this patch:
# flash_speed -c 128 -d /dev/mtd1
scanning for bad eraseblocks
scanned 128 eraseblocks, 0 are bad
testing eraseblock write speed
eraseblock write speed is 2074 KiB/s
testing eraseblock read speed
eraseblock read speed is 2895 KiB/s
testing page write speed
page write speed is 998 KiB/s
testing page read speed
page read speed is 1499 KiB/s
testing 2 page write speed
2 page write speed is 1002 KiB/s
testing 2 page read speed
2 page read speed is 1554 KiB/s
Testing erase speed
erase speed is 76560 KiB/s
Testing 2x multi-block erase speed
2x multi-block erase speed is 74019 KiB/s
Testing 4x multi-block erase speed
4x multi-block erase speed is 74769 KiB/s
Testing 8x multi-block erase speed
8x multi-block erase speed is 75149 KiB/s
Testing 16x multi-block erase speed
16x multi-block erase speed is 75921 KiB/s
Testing 32x multi-block erase speed
32x multi-block erase speed is 75921 KiB/s
Testing 64x multi-block erase speed
64x multi-block erase speed is 75921 KiB/s
finished

Page read/write throughput with this patch is roughly twice that without it.
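
In detail (all numbers KiB/s, with patch vs. without):

  page write      1915 vs  998  (~1.9x)
  page read       2999 vs 1499  (~2.0x)
  2 page write    2000 vs 1002  (~2.0x)
  2 page read     3207 vs 1554  (~2.1x)

Eraseblock read also improves (3454 vs 2895), while erase speeds are
essentially unchanged.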

> The monolithic thing was not supposed to improve throughput but to help
> with very limited controllers.
>
> Thanks,
> Miquèl



-- 
Best regards,

Keguang Zhang
