Message-ID: <20200121205403.GC75374@Konrads-MacBook-Pro.local>
Date: Tue, 21 Jan 2020 15:54:03 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Ashish Kalra <ashish.kalra@....com>
Cc: Konrad Rzeszutek Wilk <konrad@...nok.org>, hch@....de,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
x86@...nel.org, luto@...nel.org, peterz@...radead.org,
dave.hansen@...ux-intel.com, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, brijesh.singh@....com,
Thomas.Lendacky@....com
Subject: Re: [PATCH v2] swiotlb: Adjust SWIOTLB bounce buffer size for SEV
 guests.
On Tue, Jan 21, 2020 at 08:09:47PM +0000, Ashish Kalra wrote:
> On Thu, Dec 19, 2019 at 08:52:45PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Dec 09, 2019 at 11:13:46PM +0000, Ashish Kalra wrote:
> > > From: Ashish Kalra <ashish.kalra@....com>
> > >
> > > For SEV, all DMA to and from the guest has to use shared
> > > (un-encrypted) pages. SEV uses SWIOTLB to make this happen
> > > without requiring changes to device drivers. However,
> > > depending on the workload being run, the default 64MB of
> > > SWIOTLB might not be enough, and SWIOTLB may run out of
> > > buffers to use for DMA, resulting in I/O errors.
> > >
> > > Increase the default size of SWIOTLB for SEV guests using
> > > a minimum value of 128MB and a maximum value of 512MB,
> > > depending on the amount of provisioned guest memory.
> > >
> > > The SWIOTLB default size adjustment is added as an
> > > architecture specific interface/callback to allow
> > > architectures such as those supporting memory encryption
> > > to adjust/expand SWIOTLB size for their use.
> >
> > What if this was made dynamic? That is if there is a memory
> > pressure you end up expanding the SWIOTLB dynamically?
>
> As of now we want to keep it as simple as possible and more
> like a stop-gap arrangement till something more elegant is
> available.
That is nice. But past experience has shown that stop-gap arrangements
end up being the de facto solution.
>
> >
> > Also is it worth doing this calculation based on memory or
> > more on the # of PCI devices + their MMIO ranges size?
>
> Additional memory calculations based on the # of PCI devices and
> their memory ranges would make this more complicated, with many
> permutations and combinations to explore. It is essential to keep
> this patch as simple as possible by adjusting the bounce buffer
> size based solely on the amount of provisioned guest memory.
Please rework the patch to:
- Use a log solution instead of the multiplication.
Feel free to cap it at a sensible value.
- Also the code depends on SWIOTLB calling into
  adjust_swiotlb_default_size, which looks wrong.
  You should not adjust io_tlb_nslabs from swiotlb_size_or_default;
  that function's purpose is to report a value.
- Make io_tlb_nslabs be visible outside of the SWIOTLB code.
- Can you utilize the IOMMU_INIT APIs and have your own detect which would
  modify io_tlb_nslabs (and set swiotlb=1)?
  Actually you seem to be piggybacking on pci_swiotlb_detect_4gb - so
  perhaps add in this code? Albeit it really should be in its own
  file, not in arch/x86/kernel/pci-swiotlb.c
- Tweak the swiotlb code to make sure it can deal with
  io_tlb_nslabs being modified outside of it at start-up.
  It should have no trouble, but only testing will tell
  for sure.
>
> Thanks,
> Ashish