Message-ID: <20190411143430.GA17371@lst.de>
Date: Thu, 11 Apr 2019 16:34:30 +0200
From: Christoph Hellwig <hch@....de>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: Christoph Hellwig <hch@....de>,
Russell King <linux@...linux.org.uk>,
"linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
"list@....net:IOMMU DRIVERS <iommu@...ts.linux-foundation.org>, Joerg
Roedel <joro@...tes.org>," <iommu@...ts.linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mmc: let the dma map ops handle bouncing

On Thu, Apr 11, 2019 at 11:00:56AM +0200, Ulf Hansson wrote:
> >         blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> >         if (mmc_can_erase(card))
> >                 mmc_queue_setup_discard(mq->queue, card);
> >
> > -       blk_queue_bounce_limit(mq->queue, limit);
> > +       if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
> > +               blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
>
> So this means we are not going to set a bounce limit for the queue, in
> case we have a dma mask.
>
> Why isn't that needed anymore? What has changed?
On most architectures it was never needed; the major holdout was x86-32
with PAE.  In general the dma_mask tells the DMA API layer what the
device can address, and if a buffer's physical address doesn't fit that
mask it has to use bounce buffering like swiotlb (or dmabounce on
arm32).  A couple of months ago I finally fixed x86-32 to also properly
set up swiotlb, and removed the block layer bounce buffering for
everything except highmem (which is about having a kernel mapping, not
addressing) and ISA DMA (which is not handled like everything else, but
we'll get there).

But for some reason I missed mmc back then, so mmc right now is the
only remaining user of address-based block layer bouncing.
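
To make that concrete, here is a minimal sketch of the flow this relies
on (not the actual mmc or host driver code; dev, sgl, sg_len and nents
are placeholder names and the error handling is simplified):

	/* probe: the host driver declares what its DMA engine can address */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;	/* platform can't even satisfy a 32-bit mask */

	/*
	 * I/O: the generic dma-mapping code checks each segment against
	 * dev->dma_mask and transparently bounces it (swiotlb on x86-32
	 * with PAE, dmabounce on arm32) when it isn't addressable, so
	 * the block layer no longer needs an address-based bounce limit.
	 */
	nents = dma_map_sg(dev, sgl, sg_len, DMA_TO_DEVICE);
	if (!nents)
		return -ENOMEM;

	/*
	 * Only a host with no dma_mask at all (pure PIO) still needs
	 * blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH), because
	 * highmem pages have no kernel mapping to copy from.
	 */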