Message-ID: <20180427071843.GB17484@dhcp22.suse.cz>
Date: Fri, 27 Apr 2018 09:18:43 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: "Luis R. Rodriguez" <mcgrof@...nel.org>, linux-mm@...ck.org,
cl@...ux.com, Jan Kara <jack@...e.cz>, matthew@....cx,
x86@...nel.org, luto@...capital.net, martin.petersen@...cle.com,
jthumshirn@...e.de, broonie@...nel.org, linux-spi@...r.kernel.org,
linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
"lsf-pc@...ts.linux-foundation.org"
<lsf-pc@...ts.linux-foundation.org>
Subject: Re: [LSF/MM TOPIC NOTES] x86 ZONE_DMA love

On Thu 26-04-18 22:35:56, Christoph Hellwig wrote:
> On Thu, Apr 26, 2018 at 09:54:06PM +0000, Luis R. Rodriguez wrote:
> > In practice if you don't have a floppy device on x86, you don't need ZONE_DMA,
>
> I call BS on that, and you actually explain later why it is BS, due
> to some drivers using it more explicitly. But even more importantly,
> we have plenty of drivers using it through dma_alloc_* and a small
> DMA mask, and they are in use - we actually had a 4.16 regression
> due to them.
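
(For reference, the usage pattern Christoph means - the driver here is
hypothetical, but dma_set_mask_and_coherent() and dma_alloc_coherent()
are the real DMA API:)

#include <linux/dma-mapping.h>

static int foo_probe(struct device *dev)
{
	dma_addr_t handle;
	void *buf;

	/* 24-bit mask: the device can only address the low 16MB */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
		return -EIO;

	/* has to come from below 16MB, i.e. ZONE_DMA on x86 today */
	buf = dma_alloc_coherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... program the device with 'handle' ... */

	dma_free_coherent(dev, PAGE_SIZE, buf, handle);
	return 0;
}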
Well, but do we need a zone for that purpose? The idea was to actually
replace the zone with a CMA pool (at least on x86). With the current
CMA implementation we would move the [0-16M] pfn range into
ZONE_MOVABLE so it can still be used, and we would get rid of all the
overhead each zone brings (a bit in page flags, kmalloc caches and who
knows what else).
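
Roughly something like this (an untested sketch only - the function and
pool names are made up, but cma_declare_contiguous() is the real
interface as of 4.16):

#include <linux/cma.h>
#include <linux/sizes.h>

static struct cma *low16m_cma;	/* hypothetical pool */

/*
 * Would have to run during early boot (e.g. from setup_arch()),
 * before the page allocator is up.
 */
static int __init low16m_cma_reserve(void)
{
	/* base=0, size=16MB, limit=16MB, fixed placement */
	return cma_declare_contiguous(0, SZ_16M, SZ_16M, 0, 0,
				      true, "low16m", &low16m_cma);
}

DMA users with a small mask would then take pages from the pool via
cma_alloc(low16m_cma, nr_pages, 0, GFP_KERNEL) (signature as of 4.16)
instead of relying on the zone.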
--
Michal Hocko
SUSE Labs