Message-ID: <20200627070236.GA11854@lst.de>
Date: Sat, 27 Jun 2020 09:02:36 +0200
From: Christoph Hellwig <hch@....de>
To: Jonathan Lemon <jonathan.lemon@...il.com>
Cc: Christoph Hellwig <hch@....de>,
Björn Töpel <bjorn.topel@...el.com>,
Alexander Duyck <alexander.duyck@...il.com>,
netdev@...r.kernel.org, iommu@...ts.linux-foundation.org
Subject: Re: the XSK buffer pool needs to be reverted

On Fri, Jun 26, 2020 at 01:54:12PM -0700, Jonathan Lemon wrote:
> On Fri, Jun 26, 2020 at 09:47:25AM +0200, Christoph Hellwig wrote:
> >
> > Note that this is somewhat urgent, as various of the APIs that the code
> > is abusing are slated to go away for Linux 5.9, so this addition comes
> > at a really bad time.
>
> Could you elaborate on what is upcoming here?
Moving all these calls out of line, and adding a bypass flag to avoid
the indirect function call for IOMMUs in direct mapped mode.
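To make the shape of that change concrete, here is a rough, self-contained
sketch of the bypass idea.  The struct and function names below are
illustrative stand-ins for this mail only, not the actual DMA API:

/*
 * Sketch only: keep the dma_map_ops indirect call for real IOMMU
 * drivers, but short-circuit to the direct-mapping path when the
 * device is known to be identity/direct mapped.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

struct dma_map_ops {
	dma_addr_t (*map_page)(void *dev, void *page, size_t off, size_t len);
};

struct device {
	const struct dma_map_ops *dma_ops;
	bool dma_ops_bypass;	/* set when the IOMMU is in direct/passthrough mode */
};

/* The direct path: DMA address == physical address (simplified). */
static dma_addr_t dma_direct_map_page(void *dev, void *page, size_t off, size_t len)
{
	(void)dev; (void)len;
	return (dma_addr_t)(uintptr_t)page + off;
}

/* Out-of-line wrapper: only take the indirect call when it is needed. */
static dma_addr_t dma_map_page(struct device *dev, void *page, size_t off, size_t len)
{
	if (!dev->dma_ops || dev->dma_ops_bypass)
		return dma_direct_map_page(dev, page, off, len);
	return dev->dma_ops->map_page(dev, page, off, len);
}

int main(void)
{
	char buf[64];
	struct device d = { .dma_ops = NULL, .dma_ops_bypass = true };

	printf("dma addr: 0x%llx\n",
	       (unsigned long long)dma_map_page(&d, buf, 8, sizeof(buf) - 8));
	return 0;
}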
> Also, on a semi-related note, are there limitations on how many pages
> can be left mapped by the iommu? Some of the page pool work involves
> leaving the pages mapped instead of constantly mapping/unmapping them.
There are, but I think for all modern IOMMUs the limits are large enough
that they don't matter in practice.  Maintainers of the individual IOMMU
drivers might know more.
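For reference, the pattern described in the question above, mapping pages
once when the pool is filled and recycling them instead of mapping and
unmapping around every packet, is roughly what the in-tree page_pool does
with PP_FLAG_DMA_MAP.  A hedged sketch of a driver-side setup follows; the
pool size and similar values are illustrative, not tuned:

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

static struct page_pool *rx_pool_create(struct device *dev)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP, /* pool maps pages once and keeps them mapped */
		.order		= 0,
		.pool_size	= 1024,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp);	/* ERR_PTR() on failure */
}

/* RX refill: the returned page is already DMA mapped by the pool. */
static dma_addr_t rx_refill_one(struct page_pool *pool, struct page **pagep)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return 0;	/* sketch: caller treats 0 as allocation failure */
	*pagep = page;
	return page_pool_get_dma_addr(page);
}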
> On a heavily loaded box with iommu enabled, it seems that quite often
> there is contention on the iova_lock. Are there known issues in this
> area?
I'll have to defer to the IOMMU maintainers, and for that you'll need
to say what code you are using.  Current mainline doesn't even have
an iova_lock anywhere.