Message-Id: <20191210051244.GB5702@oc0525413822.ibm.com>
Date: Mon, 9 Dec 2019 21:12:44 -0800
From: Ram Pai <linuxram@...ibm.com>
To: Alexey Kardashevskiy <aik@...abs.ru>
Cc: mpe@...erman.id.au, linuxppc-dev@...ts.ozlabs.org,
benh@...nel.crashing.org, david@...son.dropbear.id.au,
paulus@...abs.org, mdroth@...ux.vnet.ibm.com, hch@....de,
andmike@...ibm.com, sukadev@...ux.vnet.ibm.com, mst@...hat.com,
ram.n.pai@...il.com, cai@....pw, tglx@...utronix.de,
bauerman@...ux.ibm.com, linux-kernel@...r.kernel.org,
leonardo@...ux.ibm.com
Subject: RE: [PATCH v5 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with
the hypervisor.
On Tue, Dec 10, 2019 at 02:07:36PM +1100, Alexey Kardashevskiy wrote:
>
>
> On 07/12/2019 12:12, Ram Pai wrote:
> > H_PUT_TCE_INDIRECT hcall uses a page filled with TCE entries, as one of
> > its parameters. On secure VMs, hypervisor cannot access the contents of
> > this page since it gets encrypted. Hence share the page with the
> > hypervisor, and unshare when done.
>
>
> I thought the idea was to use H_PUT_TCE and avoid sharing any extra
> pages. There is a small problem: when DDW is enabled,
> FW_FEATURE_MULTITCE is ignored (easy to fix). I also noticed complaints
> about the performance on Slack, but this is caused by the initial cleanup
> of the default TCE window (which we do not use anyway), and to battle
> this we can simply reduce its size by adding
Something that takes hardly any time with H_PUT_TCE_INDIRECT takes
13 seconds per device with the H_PUT_TCE approach, during boot. This is
with a 30GB guest; with a larger guest, the time will deteriorate further.
>
> -global
> spapr-pci-host-bridge.dma_win_size=0x4000000
This option speeds it up tremendously. But then, should this option be
enabled in QEMU by default? Only for secure VMs? For both?
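For reference, the property under discussion would be passed on the QEMU command line roughly as below; everything other than the `-global` option itself is an illustrative placeholder, not taken from this thread.

```shell
# Shrink the default 32-bit DMA window to 64MB (0x4000000) so its
# initial TCE cleanup via H_PUT_TCE finishes quickly at boot.
qemu-system-ppc64 \
    -machine pseries \
    -global spapr-pci-host-bridge.dma_win_size=0x4000000
```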
RP