Open Source and information security mailing list archives
 
Message-ID: <87a5crabz3.fsf@bootlin.com>
Date: Thu, 19 Dec 2024 17:12:16 +0100
From: Miquel Raynal <miquel.raynal@...tlin.com>
To: Bastien Curutchet <bastien.curutchet@...tlin.com>
Cc: Richard Weinberger <richard@....at>,  Vignesh Raghavendra
 <vigneshr@...com>,  linux-mtd@...ts.infradead.org,
  linux-kernel@...r.kernel.org,  Thomas Petazzoni
 <thomas.petazzoni@...tlin.com>,  Herve Codina <herve.codina@...tlin.com>,
  Christopher Cordahi <christophercordahi@...ometrics.ca>
Subject: Re: [PATCH v2] mtd: rawnand: davinci: Reduce polling interval in
 NAND_OP_WAITRDY_INSTR

Hello Bastien,

On 19/12/2024 at 15:58:10 +01, Bastien Curutchet <bastien.curutchet@...tlin.com> wrote:

> For each NAND_OP_WAITRDY_INSTR operation, the NANDFSR register is
> polled only once every 100 us to check for the EMA_WAIT pin. This
> isn't frequent enough and causes delays in NAND accesses.
>
> Set the polling interval to 0s. It increases the page read speed
> reported by flash_speed by ~40% (~30% on page writes).

...

>  	case NAND_OP_WAITRDY_INSTR:
>  		timeout_us = instr->ctx.waitrdy.timeout_ms * 1000;
>  		ret = readl_relaxed_poll_timeout(info->base + NANDFSR_OFFSET,
> -						 status, status & BIT(0), 100,
> +						 status, status & BIT(0), 0,

This kind of optimization is very tempting but has an impact on the
system. I am fine with reducing this polling delay, but maybe not down
to 0, which means busy-waiting the entire time. For reads it might be
fine because tR is rather short, but for writes it is a bit more
impactful, and for erases it will have a real system-wide impact. So
what you see in the benchmark is specific to the NAND driver's
performance, but fails to give you the system-wide big picture, which I
think is worth keeping in mind.

As this value will be NAND specific we cannot fine tune it too much, but
I would suggest trying to find a lower value without going all the way
down to 0. Like 5 or 10 us maybe.

Thanks,
Miquèl
