Message-ID: <dc80ad61dbd52a3fda5cd47ab5e60e45009b511d.camel@gmail.com>
Date: Tue, 31 Aug 2021 17:49:36 -0300
From: Leonardo Brás <leobras.c@...il.com>
To: David Christensen <drc@...ux.vnet.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Alexey Kardashevskiy <aik@...abs.ru>,
David Gibson <david@...son.dropbear.id.au>,
kernel test robot <lkp@...el.com>,
Nicolin Chen <nicoleotsuka@...il.com>,
Frederic Barrat <fbarrat@...ux.ibm.com>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 00/11] DDW + Indirect Mapping
On Tue, 2021-08-31 at 13:39 -0700, David Christensen wrote:
> >
> > This series allows indirect DMA using DDW when available, which
> > usually means bigger page sizes and more TCEs, and so more DMA
> > space.
>
> How is the mapping method selected? LPAR creation via the HMC, Linux
> kernel load parameter, or some other method?
At device/bus probe time, if there is enough DMA space available for
direct DMA, it is used. Otherwise, it falls back to indirect DMA.
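To illustrate the idea, here is a toy model of that probe-time decision.
The struct, field names, and the "can the window cover all of RAM" test
are purely illustrative (not the actual pseries IOMMU code); the real
decision depends on the DDW window the hypervisor grants.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model: a DMA window has a page size and a TCE count. */
struct ddw_window {
	uint64_t page_size;	/* bytes covered by one TCE */
	uint64_t num_tces;	/* number of TCE entries available */
};

/*
 * Direct DMA (a 1:1 mapping of all memory) is only possible when the
 * window is large enough to cover all of RAM.
 */
static bool can_use_direct_dma(const struct ddw_window *win,
			       uint64_t ram_size)
{
	return win->page_size * win->num_tces >= ram_size;
}

/* Mapping mode chosen at probe time, as a label. */
static const char *select_dma_mode(const struct ddw_window *win,
				   uint64_t ram_size)
{
	return can_use_direct_dma(win, ram_size) ? "direct" : "indirect";
}
```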
>
> The hcall overhead doesn't seem too worrisome when mapping 1GB pages
> so
> the Indirect DMA method might be best in my situation (DPDK).
Well, it depends on usage.
The recommended use of the IOMMU is to map, transmit, and then unmap,
but this varies with the driver implementation.
If, for example, there is some reuse of the DMA mapping, as in a
previous patchset I sent (IOMMU Pagecache), then the hcall overhead can
be reduced drastically.
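As a rough sketch of why reuse helps: the counter below stands in for
H_PUT_TCE-style hcalls, and the single-entry cache stands in for the
mapping reuse idea. Everything here is illustrative; it is not the
patchset's actual implementation.

```c
#include <stdint.h>

/* Illustrative counter standing in for hypervisor TCE hcalls. */
static unsigned long hcall_count;

/* Toy single-entry "cache": remember the last address we mapped. */
static uintptr_t cached_addr;
static int cache_valid;

/* Without reuse: every map operation costs one "hcall". */
static void map_uncached(uintptr_t addr)
{
	(void)addr;
	hcall_count++;
}

/* With reuse: only a cache miss costs an "hcall". */
static void map_cached(uintptr_t addr)
{
	if (cache_valid && cached_addr == addr)
		return;		/* reuse the existing mapping, no hcall */
	cached_addr = addr;
	cache_valid = 1;
	hcall_count++;
}
```

Mapping the same buffer N times costs N "hcalls" without reuse but only
one with it, which is the overhead reduction described above.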
>
> Dave
Best regards,
Leonardo