Message-ID: <CAC_TJvep6n74PJARzR9J9_pHZEGUnehb0W50f64m-16UGJE_Cg@mail.gmail.com>
Date:   Mon, 14 Feb 2022 14:03:38 -0800
From:   Kalesh Singh <kaleshsingh@...gle.com>
To:     Marc Zyngier <maz@...nel.org>
Cc:     Will Deacon <will@...nel.org>, Quentin Perret <qperret@...gle.com>,
        Fuad Tabba <tabba@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        "Cc: Android Kernel" <kernel-team@...roid.com>,
        Catalin Marinas <catalin.marinas@....com>,
        James Morse <james.morse@....com>,
        Alexandru Elisei <alexandru.elisei@....com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Ard Biesheuvel <ardb@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Pasha Tatashin <pasha.tatashin@...een.com>,
        Joey Gouly <joey.gouly@....com>,
        Peter Collingbourne <pcc@...gle.com>,
        Andrew Scull <ascull@...gle.com>,
        "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)" 
        <linux-arm-kernel@...ts.infradead.org>,
        LKML <linux-kernel@...r.kernel.org>, kvmarm@...ts.cs.columbia.edu
Subject: Re: [PATCH 4/7] KVM: arm64: Allocate guard pages near hyp stacks

On Mon, Feb 14, 2022 at 6:06 AM Marc Zyngier <maz@...nel.org> wrote:
>
> On Thu, 10 Feb 2022 22:41:45 +0000,
> Kalesh Singh <kaleshsingh@...gle.com> wrote:
> >
> > From: Quentin Perret <qperret@...gle.com>
> >
> > Allocate unbacked VA space underneath each stack page to ensure stack
> > overflows get trapped and don't corrupt memory silently.
> >
> > The stack (sized PAGE_SIZE) is aligned to twice its size, meaning
> > that bit PAGE_SHIFT of any valid stack address is 0. This allows us
> > to easily check for overflow in the exception entry without
> > corrupting any GPRs.
> >
> > Signed-off-by: Quentin Perret <qperret@...gle.com>
> > [ Kalesh - Update commit text and comments,
> >            refactor, add overflow handling ]
> > Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
> > ---
> >  arch/arm64/kvm/hyp/nvhe/host.S   | 16 ++++++++++++++++
> >  arch/arm64/kvm/hyp/nvhe/setup.c  | 19 ++++++++++++++++++-
> >  arch/arm64/kvm/hyp/nvhe/switch.c |  5 +++++
> >  3 files changed, 39 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> > index 3d613e721a75..78e4b612ac06 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/host.S
> > +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> > @@ -153,6 +153,10 @@ SYM_FUNC_END(__host_hvc)
> >
> >  .macro invalid_host_el2_vect
> >       .align 7
> > +
> > +     /* Test stack overflow without corrupting GPRs */
> > +     test_sp_overflow PAGE_SHIFT, .L__hyp_sp_overflow\@
> > +
>
> I am definitely concerned with this on a system not using pKVM (which
> is on average 100% of the upstream users so far! ;-). This is more or
> less guaranteed to do the wrong thing 50% of the time, depending on
> the alignment of the stack.

Thanks for pointing this out, Marc. I'll rework this so it handles
both protected and non-protected modes.
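
To make the failure mode concrete: the bit test only carries
information when the stack is aligned as the commit message describes.
A minimal C model of the idea (illustrative only -- the PAGE_SHIFT
value and the sp_overflowed() helper are mine, not kernel code):

  /*
   * With a PAGE_SIZE stack aligned to PAGE_SIZE * 2, every address in
   * the stack page has bit PAGE_SHIFT clear, while every address in
   * the guard page just below it has that bit set, so one bit test on
   * SP detects overflow. If the stack is not aligned this way, the
   * bit is arbitrary -- hence "wrong 50% of the time".
   */
  #include <stdbool.h>
  #include <stdint.h>

  #define PAGE_SHIFT	12			/* assuming 4K pages */
  #define PAGE_SIZE	(1UL << PAGE_SHIFT)

  static bool sp_overflowed(uintptr_t sp)
  {
  	/* Conceptual equivalent of the test_sp_overflow bit check. */
  	return sp & PAGE_SIZE;
  }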

>
> >       /* If a guest is loaded, panic out of it. */
> >       stp     x0, x1, [sp, #-16]!
> >       get_loaded_vcpu x0, x1
> > @@ -165,6 +169,18 @@ SYM_FUNC_END(__host_hvc)
> >        * been partially clobbered by __host_enter.
> >        */
> >       b       hyp_panic
> > +
> > +.L__hyp_sp_overflow\@:
> > +     /*
> > +      * Reset SP to the top of the stack, to allow handling the hyp_panic.
> > +      * This corrupts the stack but is ok, since we won't be attempting
> > +      * any unwinding here.
> > +      */
> > +     ldr_this_cpu    x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
> > +     mov     sp, x0
> > +
> > +     bl      hyp_panic_bad_stack
> > +     ASM_BUG()
> >  .endm
> >
> >  .macro invalid_host_el1_vect
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 99e178cf4249..114053dff228 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -105,7 +105,24 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
> >               if (ret)
> >                       return ret;
> >
> > -             /* Map stack pages in the 'private' VA range */
> > +             /*
> > +              * Allocate 'private' VA range for stack guard pages.
> > +              *
> > +              * The 'private' VA range grows upward and the stack grows
> > +              * downwards, so allocate the guard page first. But make sure
> > +              * to align the stack itself with PAGE_SIZE * 2 granularity to
> > +              * ease overflow detection in the entry assembly code.
> > +              */
> > +             do {
> > +                     start = (void *)hyp_alloc_private_va_range(PAGE_SIZE);
> > +                     if (IS_ERR(start))
> > +                             return PTR_ERR(start);
> > +             } while (IS_ALIGNED((u64) start, PAGE_SIZE * 2));
>
> This seems cumbersome. Can't we tweak hyp_alloc_private_va_range() to
> perform the required alignment? It could easily be convinced to return
> an address that is aligned on the size of the region, which would
> avoid this sort of loop.

Ack. I'll update it for v2.
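
Roughly what I have in mind, as a sketch only (the allocator name, the
upward-growing base variable, and ALIGN_UP() are illustrative, not the
current API): have the allocator round its base up to the size of the
request, so a PAGE_SIZE * 2 allocation comes back aligned on
PAGE_SIZE * 2 and the do/while above goes away:

  /* Sketch, not the in-tree implementation. */
  #define ALIGN_UP(x, a)	(((x) + ((a) - 1)) & ~((unsigned long)((a) - 1)))

  static unsigned long private_va_base;	/* private VA range grows upward */

  /* Return a VA range aligned on its own size (a power of two). */
  static unsigned long alloc_private_va_aligned(unsigned long size)
  {
  	unsigned long addr = ALIGN_UP(private_va_base, size);

  	private_va_base = addr + size;
  	return addr;
  }

A single PAGE_SIZE * 2 request could then cover both pages: leave the
low page unmapped as the guard and map only the high page as the
stack. (With the stack in the high page, the polarity of the bit test
flips: valid stack addresses then have bit PAGE_SHIFT set.)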

- Kalesh

>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
