Message-ID: <CAK8P3a1KJpWtyUOyyyab03tFYwkeROQXXV1+jM7EJ2d8So2bbA@mail.gmail.com>
Date: Thu, 14 Mar 2019 12:40:19 +0100
From: Arnd Bergmann <arnd@...db.de>
To: "Rizvi, Mohammad Faiz Abbas" <faiz_abbas@...com>
Cc: Adrian Hunter <adrian.hunter@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
DTML <devicetree@...r.kernel.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
linux-omap <linux-omap@...r.kernel.org>,
Ulf Hansson <ulf.hansson@...aro.org>,
Rob Herring <robh+dt@...nel.org>,
Mark Rutland <mark.rutland@....com>, Kishon <kishon@...com>,
Chunyan Zhang <zhang.chunyan@...aro.org>
Subject: Re: [PATCH v2 1/8] mmc: sdhci: Get rid of finish_tasklet
On Tue, Mar 12, 2019 at 6:32 PM Rizvi, Mohammad Faiz Abbas
<faiz_abbas@...com> wrote:
> On 3/8/2019 7:06 PM, Adrian Hunter wrote:
> > On 6/03/19 12:00 PM, Faiz Abbas wrote:
> > It is a performance drop that can be avoided, so it might as well be.
> > Splitting the success path from the failure path is common for I/O drivers
> > for similar reasons as here: the success path can be optimized whereas the
> > failure path potentially needs to sleep.
>
> Understood. You wanna keep the success path as fast as possible.
I looked at the sdhci_request_done() function and found that almost all
of it executes inside a spin_lock_irqsave() section, including the
potentially expensive dma_sync_single_for_cpu() calls.
This means there is very little benefit in using the tasklet in the
first place; it could just as well run in the hwirq context that
triggered it.
The only part that actually runs with interrupts enabled in the
tasklet is mmc_blk_cqe_req_done(), but other drivers call that from
IRQ context as well.
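
To make the argument concrete, here is a condensed C-style sketch of
the structure being described. The function names match the kernel
functions mentioned above, but the body is simplified pseudocode, not
the actual driver code, and the variables (host, dev, addr, len, mrq)
are placeholders:

```c
/* Simplified sketch of the pattern described above; not the real
 * sdhci code. Placeholder variables: host, dev, addr, len, mrq. */
static void sdhci_finish_tasklet_fn(unsigned long param)
{
	struct sdhci_host *host = (struct sdhci_host *)param;
	unsigned long flags;

	/* Almost everything runs with interrupts disabled anyway... */
	spin_lock_irqsave(&host->lock, flags);
	/* ...including the potentially expensive cache maintenance: */
	dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);
	/* ...plus request bookkeeping, error handling, etc. */
	spin_unlock_irqrestore(&host->lock, flags);

	/* Only this part benefits from running outside hwirq context,
	 * and other drivers call it from IRQ context directly: */
	mmc_blk_cqe_req_done(mrq);
}
```

Since the IRQ-disabled section dominates, deferring the work to a
tasklet buys almost nothing over doing it in the interrupt handler
that scheduled it, which is the case for removing finish_tasklet.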
Arnd