Message-ID: <20190724184015.ye6gjoikowiyh63f@box>
Date: Wed, 24 Jul 2019 21:40:15 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: "Lendacky, Thomas" <Thomas.Lendacky@....com>
Cc: Robin Murphy <robin.murphy@....com>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Lianbo Jiang <lijiang@...hat.com>
Subject: Re: [PATCH] dma-direct: Force unencrypted DMA under SME for certain
DMA masks
On Wed, Jul 24, 2019 at 06:30:21PM +0000, Lendacky, Thomas wrote:
> On 7/24/19 1:11 PM, Kirill A. Shutemov wrote:
> > On Wed, Jul 24, 2019 at 05:34:26PM +0000, Lendacky, Thomas wrote:
> >> On 7/24/19 12:06 PM, Robin Murphy wrote:
> >>> On 24/07/2019 17:42, Lendacky, Thomas wrote:
> >>>> On 7/24/19 10:55 AM, Kirill A. Shutemov wrote:
> >>>>> On Wed, Jul 10, 2019 at 07:01:19PM +0000, Lendacky, Thomas wrote:
> >>>>>> @@ -351,6 +355,32 @@ bool sev_active(void)
> >>>>>> }
> >>>>>> EXPORT_SYMBOL(sev_active);
> >>>>>> +/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
> >>>>>> +bool force_dma_unencrypted(struct device *dev)
> >>>>>> +{
> >>>>>> + /*
> >>>>>> + * For SEV, all DMA must be to unencrypted addresses.
> >>>>>> + */
> >>>>>> + if (sev_active())
> >>>>>> + return true;
> >>>>>> +
> >>>>>> + /*
> >>>>>> + * For SME, all DMA must be to unencrypted addresses if the
> >>>>>> + * device does not support DMA to addresses that include the
> >>>>>> + * encryption mask.
> >>>>>> + */
> >>>>>> + if (sme_active()) {
> >>>>>> + u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
> >>>>>> + u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
> >>>>>> + dev->bus_dma_mask);
> >>>>>> +
> >>>>>> + if (dma_dev_mask <= dma_enc_mask)
> >>>>>> + return true;
> >>>>>
> >>>>> Hm. What is wrong with the dev mask being equal to enc mask? IIUC, it
> >>>>> means that device mask is wide enough to cover encryption bit, doesn't it?
> >>>>
> >>>> Not really... it's the way DMA_BIT_MASK works vs bit numbering. Let's say
> >>>> that sme_me_mask has bit 47 set. __ffs64 returns 47 and DMA_BIT_MASK(47)
> >>>> will generate a mask without bit 47 set (0x7fffffffffff). So the check
> >>>> will catch anything that does not support at least 48-bit DMA.
> >>>
> >>> Couldn't that be expressed as just:
> >>>
> >>> if ((sme_me_mask & dma_dev_mask) == sme_me_mask)
> >>
> >> Actually !=, but yes, it could have been done like that, I just didn't
> >> think of it.
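
For concreteness, here are the two forms above as a standalone sketch
(the bit-47 C-bit position is just the example from this thread;
DMA_BIT_MASK() is re-stated and __builtin_ctzll() stands in for
__ffs64() so this compiles outside the kernel):

#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
	uint64_t sme_me_mask = 1ULL << 47;	/* hypothetical C-bit */
	uint64_t dev48 = DMA_BIT_MASK(48);	/* 48-bit capable device */
	uint64_t dev32 = DMA_BIT_MASK(32);	/* 32-bit only device */

	/*
	 * Original form: DMA_BIT_MASK(47) == 0x7fffffffffff, which
	 * excludes bit 47 itself, so "<=" catches any device that
	 * cannot do at least 48-bit DMA.
	 */
	uint64_t dma_enc_mask = DMA_BIT_MASK(__builtin_ctzll(sme_me_mask));

	printf("old form:   48-bit=%d 32-bit=%d\n",
	       dev48 <= dma_enc_mask, dev32 <= dma_enc_mask);	/* 0 1 */

	/*
	 * Robin's form gives the same answers: force unencrypted DMA
	 * unless the device mask covers every encryption-mask bit.
	 */
	printf("robin form: 48-bit=%d 32-bit=%d\n",
	       (sme_me_mask & dev48) != sme_me_mask,
	       (sme_me_mask & dev32) != sme_me_mask);		/* 0 1 */

	return 0;
}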
> >
> > I'm looking into generalizing the check to cover MKTME.
> >
> > Leaving off the Kconfig changes and moving the check to another file, does
> > the change below look reasonable to you? It's only build-tested so far.
> >
> > diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> > index fece30ca8b0c..6c86adcd02da 100644
> > --- a/arch/x86/mm/mem_encrypt.c
> > +++ b/arch/x86/mm/mem_encrypt.c
> > @@ -355,6 +355,8 @@ EXPORT_SYMBOL(sev_active);
> > /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
> > bool force_dma_unencrypted(struct device *dev)
> > {
> > + u64 dma_enc_mask;
> > +
> > /*
> > * For SEV, all DMA must be to unencrypted addresses.
> > */
> > @@ -362,18 +364,20 @@ bool force_dma_unencrypted(struct device *dev)
> > return true;
> >
> > /*
> > - * For SME, all DMA must be to unencrypted addresses if the
> > - * device does not support DMA to addresses that include the
> > - * encryption mask.
> > + * For SME and MKTME, all DMA must be to unencrypted addresses if the
> > + * device does not support DMA to addresses that include the encryption
> > + * mask.
> > */
> > - if (sme_active()) {
> > - u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
> > - u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
> > - dev->bus_dma_mask);
> > + if (!sme_active() && !mktme_enabled())
> > + return false;
> >
> > - if (dma_dev_mask <= dma_enc_mask)
> > - return true;
> > - }
> > + dma_enc_mask = sme_me_mask | mktme_keyid_mask();
> > +
> > + if (dev->coherent_dma_mask && (dev->coherent_dma_mask & dma_enc_mask) != dma_enc_mask)
> > + return true;
> > +
> > + if (dev->bus_dma_mask && (dev->bus_dma_mask & dma_enc_mask) != dma_enc_mask)
> > + return true;
>
> Do you want to err on the side of caution and return true if both masks
> are zero? You could do the min_not_zero step and then return true if the
> result is zero. Then just make the one comparison against dma_enc_mask.
Something like this?
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index fece30ca8b0c..173d68b08c55 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -355,6 +355,8 @@ EXPORT_SYMBOL(sev_active);
/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
bool force_dma_unencrypted(struct device *dev)
{
+ u64 dma_enc_mask, dma_dev_mask;
+
/*
* For SEV, all DMA must be to unencrypted addresses.
*/
@@ -362,20 +364,17 @@ bool force_dma_unencrypted(struct device *dev)
return true;
/*
- * For SME, all DMA must be to unencrypted addresses if the
- * device does not support DMA to addresses that include the
- * encryption mask.
+ * For SME and MKTME, all DMA must be to unencrypted addresses if the
+ * device does not support DMA to addresses that include the encryption
+ * mask.
*/
- if (sme_active()) {
- u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
- u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
- dev->bus_dma_mask);
+ if (!sme_active() && !mktme_enabled())
+ return false;
- if (dma_dev_mask <= dma_enc_mask)
- return true;
- }
+ dma_enc_mask = sme_me_mask | mktme_keyid_mask();
+ dma_dev_mask = min_not_zero(dev->coherent_dma_mask, dev->bus_dma_mask);
- return false;
+ return (dma_dev_mask & dma_enc_mask) != dma_enc_mask;
}
/* Architecture __weak replacement functions */
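
FWIW, a quick userspace sanity check of that final expression,
including the both-masks-zero case you raised (the keyid bits are a
made-up example; min_not_zero() is re-stated to match the kernel's
semantics):

#include <stdint.h>
#include <stdio.h>

/*
 * Same semantics as the kernel's min_not_zero(): the smaller of the
 * two values, with zero meaning "not set".
 */
static uint64_t min_not_zero(uint64_t a, uint64_t b)
{
	if (!a)
		return b;
	if (!b)
		return a;
	return a < b ? a : b;
}

static int force_unencrypted(uint64_t coherent, uint64_t bus,
			     uint64_t dma_enc_mask)
{
	uint64_t dma_dev_mask = min_not_zero(coherent, bus);

	return (dma_dev_mask & dma_enc_mask) != dma_enc_mask;
}

int main(void)
{
	uint64_t enc = 7ULL << 48;	/* hypothetical keyid bits 48-50 */

	/* 32-bit device: cannot address the keyid bits -> forced. */
	printf("%d\n", force_unencrypted((1ULL << 32) - 1, 0, enc)); /* 1 */

	/* Full 64-bit mask covers the keyid bits -> not forced. */
	printf("%d\n", force_unencrypted(~0ULL, 0, enc));	     /* 0 */

	/* Both masks zero -> dev mask 0 -> forced, erring on caution. */
	printf("%d\n", force_unencrypted(0, 0, enc));		     /* 1 */

	return 0;
}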
--
Kirill A. Shutemov