Message-Id: <20191206231013.GA5709@oc0525413822.ibm.com>
Date: Fri, 6 Dec 2019 15:10:13 -0800
From: Ram Pai <linuxram@...ibm.com>
To: Alexey Kardashevskiy <aik@...abs.ru>
Cc: David Gibson <david@...son.dropbear.id.au>,
linuxppc-dev@...ts.ozlabs.org, mpe@...erman.id.au,
benh@...nel.crashing.org, paulus@...abs.org,
mdroth@...ux.vnet.ibm.com, hch@....de, andmike@...ibm.com,
sukadev@...ux.vnet.ibm.com, mst@...hat.com, ram.n.pai@...il.com,
cai@....pw, tglx@...utronix.de, bauerman@...ux.ibm.com,
linux-kernel@...r.kernel.org
Subject: RE: [PATCH v4 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with
the hypervisor
On Thu, Dec 05, 2019 at 09:26:14AM +1100, Alexey Kardashevskiy wrote:
>
>
> On 05/12/2019 07:42, Ram Pai wrote:
.snip...
> >>>> Do you still think, secure-VM should use H_PUT_TCE and not
> >>>> H_PUT_TCE_INDIRECT? And normal VM should use H_PUT_TCE_INDIRECT?
> >>>> Is there any advantage of special casing it for secure-VMs.
> >>>
> >>>
> >>> Reducing the amount of insecure memory at random location.
> >>
> >> The other approach we could use for that - which would still allow
> >> H_PUT_TCE_INDIRECT, would be to allocate the TCE buffer page from the
> >> same pool that we use for the bounce buffers. I assume there must
> >> already be some sort of allocator for that?
> >
> > The allocator for swiotlb is buried deep in the swiotlb code. It is
> > not exposed to the outside-swiotlb world. Will have to do major surgery
> > to expose it.
> >
> > I was thinking, maybe we share the page, finish the INDIRECT_TCE call,
> > and unshare the page. This will address Alexey's concern of having
> > shared pages at random location, and will also give me my performance
> > optimization. Alexey: ok?
>
>
> I really do not see the point. I really think we should do 1:1 mapping
> of swiotlb buffers using the default 32bit window using H_PUT_TCE and
> this should be more than enough; I do not think the amount of code will
> be dramatically different compared to unsecuring and securing a page or
> using one of the swiotlb pages for this purpose. Thanks,

Ok. There are three different issues to be addressed here:

(a) How do we map the TCE entries in the TCE table? Should we use
    H_PUT_TCE or H_PUT_TCE_INDIRECT?

(b) How much of the guest memory must be mapped in the TCE table? Should
    it be the entire guest memory, or only the memory used by the SWIOTLB?

(c) Which mapping window must be used? The 64-bit DDW, or the default
    32-bit window?

Regardless of how we resolve issues (b) and (c), we still need to
resolve (a). Your main concern with H_PUT_TCE_INDIRECT, i.e. issue (a),
is that random pages end up permanently shared, something that can be
exploited and can cause security issues. I tend to agree this is
possible, though I am not sure how. But yes, we need to address this
concern, since security is paramount for Secure Virtual Machines.

The ways to resolve (a) are:

  (i)   grab a page from the SWIOTLB pool and use H_PUT_TCE_INDIRECT, OR
  (ii)  simply use H_PUT_TCE, OR
  (iii) share the page just prior to H_PUT_TCE_INDIRECT, and unshare it
        once done.

Solution (i) has a layering violation, as Christoph alluded to in his
previous reply. The swiotlb buffers are meant for I/O and DMA related
activity; we would be abusing these swiotlb pages to communicate TCE
entries to the hypervisor. Secondly, the IOMMU code has no idea where its
pages are sourced from, and should not need to know either. I am
uncomfortable going this route. There is some upstream discussion about
having a separate pool of shared pages on a secure VM,
https://lkml.org/lkml/2019/11/14/381. That solution, when ready, may be
usable here.

I have coded solution (ii) and it works, but the boot path slows down
significantly: a huge number of H_PUT_TCE hcalls. Very hurtful.
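
For reference, solution (ii) boils down to the loop already in
tce_build_pSeriesLP(), one hcall per TCE entry, which is where the boot
time goes. A rough sketch only (the helper name is made up and error
handling is trimmed):

#include <asm/plpar_wrappers.h>
#include <asm/tce.h>

/* Sketch of solution (ii): one H_PUT_TCE hcall per TCE entry. */
static long map_range_with_h_put_tce(unsigned long liobn,
				     unsigned long tcenum,
				     unsigned long npages,
				     unsigned long rpn,
				     unsigned long proto_tce)
{
	long rc = 0;

	while (npages--) {
		/* One hypervisor transition per 4K page of DMA space. */
		rc = plpar_tce_put(liobn, (u64)tcenum << 12,
				   proto_tce | (rpn << TCE_RPN_SHIFT));
		if (rc)
			break;
		rpn++;
		tcenum++;
	}

	return rc;
}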

I strongly think solution (iii) is the right way to go. I have coded it,
it works, and the boot path is much faster. However I am not sure if you
have a concern with this solution.
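
To make (iii) concrete, the change sits in the tce_buildmulti_pSeriesLP()
batching path and is roughly the following. Again a rough sketch only;
the helper name is made up, error handling is trimmed, and the per-cpu
tce_page allocation is unchanged:

#include <linux/pfn.h>
#include <asm/plpar_wrappers.h>
#include <asm/svm.h>
#include <asm/ultravisor.h>

/*
 * Sketch of solution (iii): share the per-cpu TCE page with the
 * hypervisor only for the duration of the H_PUT_TCE_INDIRECT call,
 * then unshare it, so no page stays shared at a random location.
 */
static long put_tce_indirect_shared(unsigned long liobn, unsigned long ioba,
				    __be64 *tcep, unsigned long npages)
{
	long rc;

	/* Expose the TCE buffer to the hypervisor (secure guests only). */
	if (is_secure_guest())
		uv_share_page(PHYS_PFN(__pa(tcep)), 1);

	rc = plpar_tce_put_indirect(liobn, ioba, (u64)__pa(tcep), npages);

	/* Pull the page back into secure memory; nothing stays shared. */
	if (is_secure_guest())
		uv_unshare_page(PHYS_PFN(__pa(tcep)), 1);

	return rc;
}

The per-cpu tce_page continues to be the staging buffer; the extra cost
per batch is the share/unshare ucall pair around the hcall.
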
In any case, I am sending my next version of the patch based on solution
(iii).
Once this is addressed, I will address (b) and (c).
RP