Date:   Thu, 16 May 2019 10:47:07 -0400
From:   Kamal Dasu <>
To:     Richard Weinberger <>
Cc:     Kamal Dasu <>,
        MTD Maling List <>,
        Boris Brezillon <>,
        Richard Weinberger <>,
        LKML <>,
        Marek Vasut <>,
        Miquel Raynal <>,
        Brian Norris <>,
        David Woodhouse <>
Subject: Re: [PATCH] mtd: nand: raw: brcmnand: When oops in progress use pio
 and interrupt polling

On Mon, May 6, 2019 at 12:01 PM Richard Weinberger
<> wrote:
> On Wed, May 1, 2019 at 7:52 PM Kamal Dasu <> wrote:
> >
> > If mtd_oops is in progress, switch to polling for nand command completion
> > interrupts and use PIO mode without DMA so that the mtd_oops buffer can
> > be completely written to the assigned nand partition. This is needed when
> > the panic does not happen on cpu0 and only one CPU remains online.
> This optimization is highly specific to your hardware and AFAIK cannot
> be applied in general to brcmnand.
> So the problem you see is that, depending on the oops, you can no longer
> use DMA or interrupts in the driver?
> How about adding a new flag to panic_nand_write() which tells the nand
> driver that this is a panic write?
> That way you can fall back to PIO and polling mode without checking CPU
> numbers and oops_in_progress.

Thanks for your review, Richard. I will add a flag to let low-level
controller drivers know that it's a panic write, make the brcmnand
code more generic, and simply fall back to PIO and polling in that
case. I will send a V2 patch with these recommended changes.


> --
> Thanks,
> //richard
