Message-ID: <20151026212346.GJ13239@google.com>
Date: Mon, 26 Oct 2015 14:23:46 -0700
From: Brian Norris <computersforpeace@...il.com>
To: Roger Quadros <rogerq@...com>
Cc: tony@...mide.com, dwmw2@...radead.org,
ezequiel@...guardiasur.com.ar, javier@...hile0.org, fcooper@...com,
nsekhar@...com, linux-mtd@...ts.infradead.org,
linux-omap@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 00/27] memory: omap-gpmc: mtd: nand: Support GPMC NAND
on non-OMAP platforms
Hi Roger,
I'm not too familiar with OMAP platforms, and I may have missed prior
discussions/context, so please forgive me if I'm asking silly or old
questions here.
On Fri, Sep 18, 2015 at 05:53:22PM +0300, Roger Quadros wrote:
> - Remove NAND IRQ handling from the omap-gpmc driver, share the GPMC IRQ
> with the omap2-nand driver, and handle NAND IRQ events in the NAND driver.
> This yields a performance increase in prefetch-irq mode:
> a 30% increase in reads and a 17% increase in writes.
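If I've followed the approach correctly, the NAND driver ends up
requesting the GPMC interrupt itself and completing the prefetch wait
from its own handler. Something like the following (my own hypothetical
sketch to check my understanding, not code from your series):

#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>

/* Hypothetical sketch -- minimal driver-private state for illustration. */
struct omap_nand_info {
        struct completion comp;         /* prefetch wait */
        int gpmc_irq;
};

static irqreturn_t omap_nand_irq(int irq, void *dev_id)
{
        struct omap_nand_info *info = dev_id;

        /*
         * A real shared handler would check GPMC_IRQSTATUS and return
         * IRQ_NONE if the event isn't ours; elided here.
         */
        complete(&info->comp);
        return IRQ_HANDLED;
}

static int omap_nand_setup_irq(struct platform_device *pdev,
                               struct omap_nand_info *info)
{
        info->gpmc_irq = platform_get_irq(pdev, 0);
        if (info->gpmc_irq < 0)
                return info->gpmc_irq;

        /* Shared with the GPMC driver, which still owns the register block. */
        return devm_request_irq(&pdev->dev, info->gpmc_irq, omap_nand_irq,
                                IRQF_SHARED, "omap2-nand", info);
}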
Have you pinpointed the exact causes of the performance increase, or
can you give an educated guess? AIUI, you're reducing the number of
interrupts needed for NAND prefetch mode, but you're also removing a bit
of abstraction and implementing hooks that look awfully like the
existing abstractions:
+ int (*nand_irq_enable)(enum gpmc_nand_irq irq);
+ int (*nand_irq_disable)(enum gpmc_nand_irq irq);
+ void (*nand_irq_clear)(enum gpmc_nand_irq irq);
+ u32 (*nand_irq_status)(void);
That's not really a problem if there's a good reason for them (brcmnand
implements similar hooks because of quirks in the implementation of
interrupts across various BRCM SoCs, and it's not worth writing irqchip
drivers for those cases). I'm mainly curious to hear the explanation.
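For contrast, the irqchip route I'm alluding to would look roughly like
this on the GPMC side (again a hypothetical sketch with names of my own
invention; the NAND driver would then do a plain request_irq() on the
mapped virq instead of going through the nand_irq_* hooks above):

#include <linux/irq.h>
#include <linux/irqdomain.h>

/* Hypothetical sketch: model the GPMC NAND event bits as a tiny irq_chip. */
static void gpmc_nand_irq_mask(struct irq_data *d)
{
        /* Clear the enable bit for d->hwirq in GPMC_IRQENABLE. */
}

static void gpmc_nand_irq_unmask(struct irq_data *d)
{
        /* Set the enable bit for d->hwirq in GPMC_IRQENABLE. */
}

static void gpmc_nand_irq_ack(struct irq_data *d)
{
        /* Write the status bit for d->hwirq to GPMC_IRQSTATUS. */
}

static struct irq_chip gpmc_nand_irq_chip = {
        .name           = "gpmc-nand",
        .irq_mask       = gpmc_nand_irq_mask,
        .irq_unmask     = gpmc_nand_irq_unmask,
        .irq_ack        = gpmc_nand_irq_ack,
};

static int gpmc_nand_irq_map(struct irq_domain *d, unsigned int virq,
                             irq_hw_number_t hwirq)
{
        irq_set_chip_and_handler(virq, &gpmc_nand_irq_chip, handle_edge_irq);
        return 0;
}

static const struct irq_domain_ops gpmc_nand_irq_domain_ops = {
        .map    = gpmc_nand_irq_map,
};

I can see that being overkill for two or three event bits, which is why
I'm asking rather than objecting.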
Regards,
Brian