Message-Id: <20170214.122231.2022548659001388286.davem@davemloft.net>
Date: Tue, 14 Feb 2017 12:22:31 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: David.Laight@...LAB.COM
Cc: ttoukan.linux@...il.com, edumazet@...gle.com, brouer@...hat.com,
alexander.duyck@...il.com, netdev@...r.kernel.org,
tariqt@...lanox.com, kafai@...com, saeedm@...lanox.com,
willemb@...gle.com, bblanco@...mgrid.com, ast@...nel.org,
eric.dumazet@...il.com, linux-mm@...ck.org
Subject: Re: [PATCH v3 net-next 08/14] mlx4: use order-0 pages for RX

From: David Laight <David.Laight@...LAB.COM>
Date: Tue, 14 Feb 2017 17:17:22 +0000
> From: David Miller
>> Sent: 14 February 2017 17:04
> ...
>> One path I see around all of this is full integration. Meaning that
>> we can free pages into the page allocator which are still DMA mapped.
>> And future allocations from that device are prioritized to take still
>> DMA mapped objects.
> ...
>
> For systems with an 'expensive' IOMMU, has anyone tried separating
> the allocation of IOMMU resources (e.g. page table slots) from their
> assignment to physical pages?
>
> Provided the page sizes all match, setting up a receive buffer might
> be as simple as writing the physical address into the IOMMU slot
> that matches the ring entry.
>
> Or am I thinking about hardware that is much simpler than real life?
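Concretely, I read that suggestion as something like the sketch
below (struct iommu_slot and iommu_slot_set_phys() are invented
names for illustration, there is no such kernel API):

/*
 * Sketch only: struct iommu_slot and iommu_slot_set_phys() are
 * invented names, not an existing kernel interface.
 */
struct rx_ring_entry {
	struct iommu_slot *slot;	/* preallocated when the ring is set up */
	struct page	  *page;
};

static void rx_refill_entry(struct rx_ring_entry *e, struct page *page)
{
	e->page = page;
	/* No slot allocation and no locks on the hot path: the
	 * entry's fixed slot is simply pointed at the new page.
	 * Note that this is still an IOMMU PTE update. */
	iommu_slot_set_phys(e->slot, page_to_phys(page));
}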
You will still eat an expensive MMIO or hypervisor call to set up
the mapping.

The IOMMU is expensive because of two operations: the slot allocation
(which takes locks) and the modification of the IOMMU PTE to set up
or tear down the mapping.

This is why attempts to preallocate slots (which people have looked
into) never really take off. You really have to eliminate the
entire operation to get worthwhile gains.
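
For reference, the "full integration" path quoted above could look
roughly like the following sketch. It is only an illustration:
struct dma_page_cache is an invented structure, and struct page has
no dma_addr member, so treat that field as hypothetical.

/*
 * Sketch only: dma_page_cache and page->dma_addr are hypothetical,
 * used to show the shape of the idea, not an existing interface.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct dma_page_cache {
	struct device	*dev;
	struct list_head pages;	/* still-mapped pages, linked via page->lru */
	spinlock_t	 lock;
};

/* RX refill: prefer a page whose IOMMU mapping already exists. */
static struct page *dma_cache_get(struct dma_page_cache *c, dma_addr_t *addr)
{
	struct page *page;

	spin_lock(&c->lock);
	page = list_first_entry_or_null(&c->pages, struct page, lru);
	if (page)
		list_del(&page->lru);
	spin_unlock(&c->lock);

	if (page) {
		/* Fast path: no slot allocation, no PTE write. */
		*addr = page->dma_addr;		/* hypothetical field */
		return page;
	}

	/* Slow path: pay the full IOMMU cost once per page. */
	page = alloc_page(GFP_ATOMIC);
	if (!page)
		return NULL;
	*addr = dma_map_page(c->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(c->dev, *addr)) {
		__free_page(page);
		return NULL;
	}
	page->dma_addr = *addr;			/* hypothetical field */
	return page;
}

/* "Free": keep the mapping alive and park the page for reuse. */
static void dma_cache_put(struct dma_page_cache *c, struct page *page)
{
	spin_lock(&c->lock);
	list_add(&page->lru, &c->pages);
	spin_unlock(&c->lock);
}

The point is that a recycled page skips both expensive operations
entirely; the trade-off is that cached pages stay mapped, and thus
pinned to that device, until the cache is drained.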