Message-ID: <20200121210405.GA12692@linux.intel.com>
Date: Tue, 21 Jan 2020 13:04:05 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH] KVM: x86: fix overlap between SPTE_MMIO_MASK and
generation
On Tue, Jan 21, 2020 at 09:24:07AM -0800, Ben Gardon wrote:
> On Tue, Jan 21, 2020 at 8:11 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > The SPTE_MMIO_MASK overlaps with the bits used to track MMIO
> > generation number. A high enough generation number would overwrite the
> > SPTE_SPECIAL_MASK region and cause the MMIO SPTE to be misinterpreted;
> > likewise, setting bits 52 and 53 would also cause an incorrect generation
> > number to be read from the PTE.
> >
> > Fixes: 6eeb4ef049e7 ("KVM: x86: assign two bits to track SPTE kinds")
> > Reported-by: Ben Gardon <bgardon@...gle.com>
> > Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 9 ++++++---
> > 1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 57e4dbddba72..e34ca43d9166 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -418,22 +418,25 @@ static inline bool is_access_track_spte(u64 spte)
> > * requires a full MMU zap). The flag is instead explicitly queried when
> > * checking for MMIO spte cache hits.
> > */
> > -#define MMIO_SPTE_GEN_MASK GENMASK_ULL(18, 0)
> > +#define MMIO_SPTE_GEN_MASK GENMASK_ULL(17, 0)
>
> I see you're shifting the MMIO high gen mask region to avoid having to
> shift it by 2. Looking at the SDM, I believe using bit 62 for the
> generation number is safe, but I don't recall why it wasn't used
> before.
This patch does use bit 62, see MMIO_SPTE_GEN_HIGH_END below. It's bit
63 that is being avoided because it would collide with NX and EPT suppress
#VE.
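FWIW, as I read the new masks (with SPTE_SPECIAL_MASK still occupying bits
52-53 from commit 6eeb4ef049e7), the resulting layout is roughly:

	bit  63     : NX / EPT suppress #VE (avoided)
	bits 54-62  : MMIO generation, high 9 bits
	bits 52-53  : SPTE_SPECIAL_MASK (AD tracking / MMIO kind)
	bits  3-11  : MMIO generation, low 9 bits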
> >
> > #define MMIO_SPTE_GEN_LOW_START 3
> > #define MMIO_SPTE_GEN_LOW_END 11
> > #define MMIO_SPTE_GEN_LOW_MASK GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
> > MMIO_SPTE_GEN_LOW_START)
> >
> > -#define MMIO_SPTE_GEN_HIGH_START 52
> > -#define MMIO_SPTE_GEN_HIGH_END 61
> > +/* Leave room for SPTE_SPECIAL_MASK. */
> > +#define MMIO_SPTE_GEN_HIGH_START 54
> > +#define MMIO_SPTE_GEN_HIGH_END 62
> > #define MMIO_SPTE_GEN_HIGH_MASK GENMASK_ULL(MMIO_SPTE_GEN_HIGH_END, \
> > MMIO_SPTE_GEN_HIGH_START)
> > +
> > static u64 generation_mmio_spte_mask(u64 gen)
> > {
> > u64 mask;
> >
> > WARN_ON(gen & ~MMIO_SPTE_GEN_MASK);
> > + BUILD_BUG_ON(MMIO_SPTE_GEN_HIGH_START < PT64_SECOND_AVAIL_BITS_SHIFT);
>
> Would it be worth defining the MMIO_SPTE_GEN masks, SPTE_SPECIAL_MASK,
> SPTE_AD masks, and SPTE_MMIO_MASK in terms of
> PT64_SECOND_AVAIL_BITS_SHIFT? It seems like that might be a more
> robust assertion here.
That was Paolo's original proposal; I (successfully) lobbied for using a
BUILD_BUG_ON so that bugs result in a build failure instead of random
runtime issues, e.g. if PT64_SECOND_AVAIL_BITS_SHIFT was changed to a value
that caused the MMIO gen to overlap NX or even shift beyond bit 63.
https://lkml.kernel.org/r/20191212002902.GM5044@linux.intel.com
> Alternatively, BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK |
> MMIO_SPTE_GEN_LOW_MASK) & SPTE_(MMIO and/or SPECIAL)_MASK)
Or add both BUILD_BUG_ONs.
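I.e. something like this in generation_mmio_spte_mask() (untested sketch;
IIRC SPTE_MMIO_MASK is (3ULL << 52), i.e. the same bits as SPTE_SPECIAL_MASK,
so checking against SPTE_SPECIAL_MASK should cover both):

	/* The gen fields must stay within the software-available bits... */
	BUILD_BUG_ON(MMIO_SPTE_GEN_HIGH_START < PT64_SECOND_AVAIL_BITS_SHIFT);
	/* ...and must not collide with the SPTE kind bits. */
	BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) &
		     SPTE_SPECIAL_MASK);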
> >
> > mask = (gen << MMIO_SPTE_GEN_LOW_START) & MMIO_SPTE_GEN_LOW_MASK;
> > mask |= (gen << MMIO_SPTE_GEN_HIGH_START) & MMIO_SPTE_GEN_HIGH_MASK;
> > --
> > 1.8.3.1
> >