Message-Id: <20191210153542.GB5709@oc0525413822.ibm.com>
Date:   Tue, 10 Dec 2019 07:35:42 -0800
From:   Ram Pai <linuxram@...ibm.com>
To:     Alexey Kardashevskiy <aik@...abs.ru>
Cc:     mpe@...erman.id.au, linuxppc-dev@...ts.ozlabs.org,
        benh@...nel.crashing.org, david@...son.dropbear.id.au,
        paulus@...abs.org, mdroth@...ux.vnet.ibm.com, hch@....de,
        andmike@...ibm.com, sukadev@...ux.vnet.ibm.com, mst@...hat.com,
        ram.n.pai@...il.com, cai@....pw, tglx@...utronix.de,
        bauerman@...ux.ibm.com, linux-kernel@...r.kernel.org,
        leonardo@...ux.ibm.com
Subject: RE: [PATCH v5 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with
 the hypervisor.

On Tue, Dec 10, 2019 at 04:32:10PM +1100, Alexey Kardashevskiy wrote:
> 
> 
> On 10/12/2019 16:12, Ram Pai wrote:
> > On Tue, Dec 10, 2019 at 02:07:36PM +1100, Alexey Kardashevskiy wrote:
> >>
> >>
> >> On 07/12/2019 12:12, Ram Pai wrote:
> >>> The H_PUT_TCE_INDIRECT hcall uses a page filled with TCE entries as one
> >>> of its parameters. On secure VMs, the hypervisor cannot access the
> >>> contents of this page since it is encrypted. Hence, share the page with
> >>> the hypervisor, and unshare it when done.
> >>
> >>
> >> I thought the idea was to use H_PUT_TCE and avoid sharing any extra
> >> pages. There is a small problem that when DDW is enabled,
> >> FW_FEATURE_MULTITCE is ignored (easy to fix); I also noticed complaints
> >> about the performance on slack, but this is caused by the initial cleanup
> >> of the default TCE window (which we do not use anyway), and to counter
> >> this we can simply reduce its size by adding
> > 
> > Something that takes hardly any time with H_PUT_TCE_INDIRECT takes
> > 13 seconds per device with the H_PUT_TCE approach during boot. This is
> > with a 30GB guest; with a larger guest, the time will deteriorate further.
> 
> 
> No it will not, I checked. The time is the same for 2GB and 32GB guests:
> the delay is caused by clearing the small DMA window, which is small in
> terms of mapped space (1GB) but huge in TCEs since it uses 4K pages; for
> the DDW window + emulated devices the IOMMU page size will be 2M/16M/1G
> (depending on the system), so the number of TCEs is much smaller.

I can't reproduce your results.  What changes did you make to get them?
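
If I follow your explanation, the rough arithmetic (my numbers, assuming a
1GB default window) would be:

    1GB window / 4K  IOMMU pages = 262,144 TCEs to clear
    1GB window / 2M  IOMMU pages =     512 TCEs to clear
    1GB window / 16M IOMMU pages =      64 TCEs to clear

i.e. the clearing cost is dominated by the 4K-page default window and does
not depend on guest memory size. Is that the point you are making?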

> 
> 
> > 
> >>
> >> -global
> >> spapr-pci-host-bridge.dma_win_size=0x4000000
> > 
> > This option speeds it up tremendously.  But then should this option be
> > enabled in qemu by default?  Only for secure VMs? For both?
> 
> 
> As discussed in slack, by default we do not need to clear the entire TCE
> table, and we only have to map the swiotlb buffer using the small window.
> It is a guest kernel change only. Thanks,

Can you tell me what code you are talking about here?  Where does the TCE
table get cleared? What code needs to be changed to not clear it?

Is tce_buildmulti_pSeriesLP() the code that does the clearing as well?
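
For reference, the H_PUT_TCE_INDIRECT path I am looking at is roughly the
following (a trimmed paraphrase of tce_buildmulti_pSeriesLP() in
arch/powerpc/platforms/pseries/iommu.c, so treat the details as approximate):

    /* paraphrased sketch, not the exact upstream code */
    tcep = __this_cpu_read(tce_page);          /* the per-cpu TCE page */
    if (!tcep) {
            tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
            if (!tcep)
                    return tce_build_pSeriesLP(...);  /* fall back: one TCE per hcall */
            __this_cpu_write(tce_page, tcep);
    }

    do {
            limit = min_t(long, npages, 4096 / TCE_ENTRY_SIZE);
            for (l = 0; l < limit; l++)        /* fill one pageful of TCEs */
                    tcep[l] = cpu_to_be64(proto_tce |
                                          (rpn++ & TCE_RPN_MASK) << TCE_RPN_SHIFT);

            /* hand the whole page to the hypervisor in a single hcall */
            rc = plpar_tce_put_indirect((u64)tbl->it_index, (u64)tcenum << 12,
                                        (u64)__pa(tcep), limit);
            npages -= limit;
            tcenum += limit;
    } while (npages > 0 && !rc);

I do not see the whole-table clear in this path, hence my question about
where it happens.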

>
>

But before I close, you have not told me clearly what the problem is with
"share the page, make the H_PUT_TCE_INDIRECT hcall, unshare the page".


Remember, this is the same page that is already earmarked for
H_PUT_TCE_INDIRECT, not by my patch but by the existing code. So it is not
some random buffer that was picked. Second, the page is only temporarily
shared and then unshared; it does not stay shared for life. It does not slow
the boot, and it does not need any special command line options for qemu.
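
To make the pattern concrete, here is a minimal sketch of what I mean (not
the actual diff; is_secure_guest(), uv_share_page() and uv_unshare_page()
are the existing svm/ultravisor helpers as I understand them, and the
wrapper name below is made up for this mail):

    static long plpar_tce_put_indirect_shared(u64 liobn, u64 ioba,
                                              __be64 *tcep, u64 npages)
    {
            long rc;

            /* share the already-earmarked per-cpu TCE page with the
             * hypervisor only for the duration of the hcall */
            if (is_secure_guest())
                    uv_share_page(PHYS_PFN(__pa(tcep)), 1);

            rc = plpar_tce_put_indirect(liobn, ioba, (u64)__pa(tcep), npages);

            /* unshare immediately; the page does not stay shared for life */
            if (is_secure_guest())
                    uv_unshare_page(PHYS_PFN(__pa(tcep)), 1);

            return rc;
    }

The sharing is scoped to the hcall itself; nothing stays shared afterwards.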

The shared-pages mechanism was put in place exactly for the purpose of
sharing data with the hypervisor, and we are using it for exactly that
purpose. Finally, I agreed with your concern about shared pages staying
around, and I addressed it by unsharing the page. At this point, I fail to
understand what your remaining concern is.


RP
