Message-ID: <e43ab7b9-22f5-75c3-c9e6-f1eb18d57148@arm.com>
Date:   Mon, 29 Jun 2020 14:15:16 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Christoph Hellwig <hch@....de>,
        Jonathan Lemon <jonathan.lemon@...il.com>
Cc:     netdev@...r.kernel.org, iommu@...ts.linux-foundation.org,
        Björn Töpel <bjorn.topel@...el.com>
Subject: Re: the XSK buffer pool needs to be reverted

On 2020-06-27 08:02, Christoph Hellwig wrote:
> On Fri, Jun 26, 2020 at 01:54:12PM -0700, Jonathan Lemon wrote:
>> On Fri, Jun 26, 2020 at 09:47:25AM +0200, Christoph Hellwig wrote:
>>>
>>> Note that this is somewhat urgent, as various of the APIs that the code
>>> is abusing are slated to go away for Linux 5.9, so this addition comes
>>> at a really bad time.
>>
>> Could you elaborate on what is upcoming here?
> 
> Moving all these calls out of line, and adding a bypass flag to avoid
> the indirect function call for IOMMUs in direct mapped mode.
> 
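
FWIW, a rough sketch of what that could look like (illustrative only - 
dma_ops_bypass here is an assumed per-device flag and the shape of the 
function is my guess at the upcoming 5.9 work, not the actual patches):

/*
 * Conceptual sketch: dma_map_page_attrs() moves out of line and checks
 * a bypass condition before taking the indirect ops->map_page() call,
 * so devices behind an IOMMU in direct/identity-mapped mode skip the
 * indirect call entirely.
 */
dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
		size_t offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops || dev->dma_ops_bypass)	/* assumed bypass flag */
		return dma_direct_map_page(dev, page, offset, size, dir,
					   attrs);

	return ops->map_page(dev, page, offset, size, dir, attrs);
}
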
>> Also, on a semi-related note, are there limitations on how many pages
>> can be left mapped by the iommu?  Some of the page pool work involves
>> leaving the pages mapped instead of constantly mapping/unmapping them.
> 
> There are, but I think for all modern IOMMUs they are so big that they
> don't matter.  Maintainers of the individual IOMMU drivers might know
> more.

Right - I don't know too much about older and more esoteric stuff like 
POWER TCE, but for modern pagetable-based stuff like Intel VT-d, AMD-Vi, 
and Arm SMMU, the only "limits" are such that legitimate DMA API use 
should never get anywhere near them (you'd run out of RAM for actual 
buffers long beforehand). The most vaguely-realistic concern might be a 
pathological system topology where some old 32-bit PCI device doesn't 
have ACS isolation from your high-performance NIC such that they have to 
share an address space, where the NIC might happen to steal all the low 
addresses and prevent the soundcard or whatever from being able to map a 
usable buffer.

With an IOMMU, you typically really *want* to keep a full working set's 
worth of pages mapped, since dma_map/unmap are expensive while dma_sync 
is somewhere between relatively cheap and free. With no IOMMU it makes 
no real difference from the DMA API perspective since map/unmap are 
effectively no more than the equivalent sync operations anyway (I'm 
assuming we're not talking about the kind of constrained hardware that 
might need SWIOTLB).
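
Concretely, the driver-side pattern is something like the below (a 
minimal sketch with made-up rx_buf_* names, not code lifted from the XSK 
pool): map once when a buffer enters the pool, sync around each hardware 
use, and only unmap at teardown.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/mm.h>

struct rx_buf {
	struct page	*page;
	dma_addr_t	dma;
};

/* Expensive part: IOVA allocation and IOTLB work happen here, once per buffer. */
static int rx_buf_map(struct device *dev, struct rx_buf *buf)
{
	buf->dma = dma_map_page(dev, buf->page, 0, PAGE_SIZE,
				DMA_FROM_DEVICE);
	return dma_mapping_error(dev, buf->dma) ? -ENOMEM : 0;
}

/* Cheap part: per-use sync is cache maintenance at most, free if coherent. */
static void rx_buf_give_to_hw(struct device *dev, struct rx_buf *buf)
{
	dma_sync_single_for_device(dev, buf->dma, PAGE_SIZE,
				   DMA_FROM_DEVICE);
	/* ...post buf->dma to the NIC's descriptor ring... */
}

static void rx_buf_take_from_hw(struct device *dev, struct rx_buf *buf)
{
	dma_sync_single_for_cpu(dev, buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
	/* ...CPU may now read the received data... */
}

/* Only at pool teardown, never in the fast path. */
static void rx_buf_unmap(struct device *dev, struct rx_buf *buf)
{
	dma_unmap_page(dev, buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
}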

>> On a heavily loaded box with iommu enabled, it seems that quite often
>> there is contention on the iova_lock.  Are there known issues in this
>> area?
> 
> I'll have to defer to the IOMMU maintainers, and for that you'll need
> to say what code you are using.  Current mainline doesn't even have
> an iova_lock anywhere.

Again I can't speak for non-mainstream stuff outside drivers/iommu, but 
it's been over 4 years now since the initial scalability work for the 
generic IOVA allocator, which focused on minimising lock contention, was 
merged there, and it's had considerable evaluation and tweaking since. But 
if we can achieve the goal of efficiently recycling mapped buffers then 
we shouldn't need to go anywhere near IOVA allocation either way except 
when expanding the pool.
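
In other words, something along these lines, where dma_map_page() - and 
with it the IOVA allocator - is only reached on the pool-growth slow 
path (mapped_buf/pool_get/pool_put are made-up names for illustration, 
not the actual page_pool or XSK code):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct mapped_buf {
	struct list_head	list;
	struct page		*page;
	dma_addr_t		dma;
};

static struct mapped_buf *pool_get(struct device *dev,
				   struct list_head *free_list)
{
	struct mapped_buf *buf;

	buf = list_first_entry_or_null(free_list, struct mapped_buf, list);
	if (buf) {
		list_del(&buf->list);
		return buf;	/* fast path: mapping (and IOVA) reused */
	}

	/* Slow path: grow the pool - the only place we ever map. */
	buf = kzalloc(sizeof(*buf), GFP_ATOMIC);
	if (!buf)
		return NULL;
	buf->page = alloc_page(GFP_ATOMIC);
	if (!buf->page)
		goto err_buf;
	buf->dma = dma_map_page(dev, buf->page, 0, PAGE_SIZE,
				DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, buf->dma))
		goto err_page;
	return buf;

err_page:
	__free_page(buf->page);
err_buf:
	kfree(buf);
	return NULL;
}

static void pool_put(struct mapped_buf *buf, struct list_head *free_list)
{
	/* No dma_unmap_page() here: the mapping stays live for reuse. */
	list_add(&buf->list, free_list);
}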

Robin.
