lists.openwall.net: Open Source and information security mailing list archives
Date:   Fri, 31 Aug 2018 08:38:52 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Radim Krcmar <rkrcmar@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Liran Alon <liran.alon@...cle.com>,
        Dan Carpenter <dan.carpenter@...cle.com>
Subject: Re: [PATCH v2] KVM: LAPIC: Fix pv ipis out-of-bounds access

On Thu, 30 Aug 2018 at 19:01, Radim Krčmář <rkrcmar@...hat.com> wrote:
>
> 2018-08-30 10:03+0800, Wanpeng Li:
> > From: Wanpeng Li <wanpengli@...cent.com>
> >
> > Dan Carpenter reported that the untrusted data returned from kvm_register_read()
> > results in the following static checker warning:
> >   arch/x86/kvm/lapic.c:576 kvm_pv_send_ipi()
> >   error: buffer underflow 'map->phys_map' 's32min-s32max'
> >
> > A KVM guest can easily trigger this by executing the following assembly
> > sequence in ring 0:
> >
> > mov $10, %rax
> > mov $0xFFFFFFFF, %rbx
> > mov $0xFFFFFFFF, %rdx
> > mov $0, %rsi
> > vmcall
> >
> > This causes KVM to execute the code path
> > vmx_handle_exit() -> handle_vmcall() -> kvm_emulate_hypercall() -> kvm_pv_send_ipi(),
> > which performs an out-of-bounds access.
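[Editor's note: the underflow itself is just a C integer conversion. The sketch below is standalone illustration, not KVM code; `buggy_index()` is a hypothetical helper modeling the original signed `int min` parameter, with the guest-controlled value coming from %rdx via kvm_register_read():]

```c
#include <assert.h>
#include <stdint.h>

/* Illustration only: how the original signed 'int min' wraps.
 * guest_min models the 0xFFFFFFFF value the guest loads into %rdx. */
static int buggy_index(uint64_t guest_min, int i)
{
	int min = (int)guest_min;   /* 0xFFFFFFFF wraps to -1 */
	return min + i;             /* index used for map->phys_map[min + i] */
}
```

With `guest_min = 0xFFFFFFFF` and the first set bit `i = 0`, the index is -1, so `map->phys_map[min + i]` reads before the start of the array: exactly the "buffer underflow 'map->phys_map' 's32min-s32max'" that Smatch reports.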
> >
> > This patch fixes it by adding a check against map->max_apic_id in
> > kvm_pv_send_ipi(), ignoring destinations that are not present and
> > delivering to the rest. We also check whether map->phys_map[min + i] is
> > NULL, since max_apic_id only bounds the array: entries can be NULL when
> > APIC IDs are sparse, especially as KVM unconditionally sets max_apic_id
> > to 255 to reserve enough space for any xAPIC ID.
> >
> > Reported-by: Dan Carpenter <dan.carpenter@...cle.com>
> > Reviewed-by: Liran Alon <liran.alon@...cle.com>
> > Cc: Paolo Bonzini <pbonzini@...hat.com>
> > Cc: Radim Krčmář <rkrcmar@...hat.com>
> > Cc: Liran Alon <liran.alon@...cle.com>
> > Cc: Dan Carpenter <dan.carpenter@...cle.com>
> > Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> > ---
> > v1 -> v2:
> >  * add min > map->max_apic_id check
> >  * change min to u32
> >  * add min((u32)BITS_PER_LONG, (map->max_apic_id - min + 1))
> >
> > diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> > @@ -548,7 +548,7 @@ int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
> >  }
> >
> >  int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
> > -                 unsigned long ipi_bitmap_high, int min,
> > +                 unsigned long ipi_bitmap_high, u32 min,
> >                   unsigned long icr, int op_64_bit)
> >  {
> >       int i;
> > @@ -571,18 +571,27 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
> >       rcu_read_lock();
> >       map = rcu_dereference(kvm->arch.apic_map);
> >
> > +     if (min > map->max_apic_id)
> > +             goto out;
> >       /* Bits above cluster_size are masked in the caller.  */
> > -     for_each_set_bit(i, &ipi_bitmap_low, BITS_PER_LONG) {
> > -             vcpu = map->phys_map[min + i]->vcpu;
> > -             count += kvm_apic_set_irq(vcpu, &irq, NULL);
> > +     for_each_set_bit(i, &ipi_bitmap_low,
> > +             min((u32)BITS_PER_LONG, (map->max_apic_id - min + 1))) {
> > +             if (map->phys_map[min + i]) {
> > +                     vcpu = map->phys_map[min + i]->vcpu;
> > +                     count += kvm_apic_set_irq(vcpu, &irq, NULL);
> > +             }
> >       }
> >
> >       min += cluster_size;
>
> We need a second
>
>   if (min > map->max_apic_id)
>         goto out;
>
> here.  I will add it while applying if there are no other change
> requests.

Thanks Radim. :)

Regards,
Wanpeng Li
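[Editor's note: the bounds logic discussed above, including the second check Radim asks for before the high-bitmap loop, can be sketched in simplified standalone C. The structures and `kvm_pv_send_ipi_like()` below are hypothetical stand-ins for KVM's apic_map and kvm_pv_send_ipi(); kvm_apic_set_irq() is reduced to a counter:]

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical stand-ins for KVM's apic_map, for illustration only. */
struct vcpu { int id; };

static unsigned int max_apic_id = 7;     /* models map->max_apic_id */
static struct vcpu *phys_map[8];         /* models map->phys_map; may be sparse */

static unsigned int min_u32(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* Mirrors the patched delivery loop for one bitmap: bound the loop by
 * max_apic_id and skip NULL entries of the sparse phys_map. */
static int deliver(unsigned long bitmap, unsigned int min)
{
	int count = 0;
	unsigned int i;

	if (min > max_apic_id)           /* check added by the v2 patch */
		return 0;

	for (i = 0; i < min_u32(BITS_PER_LONG, max_apic_id - min + 1); i++) {
		if ((bitmap & (1UL << i)) && phys_map[min + i])
			count++;         /* kvm_apic_set_irq() in real code */
	}
	return count;
}

int kvm_pv_send_ipi_like(unsigned long lo, unsigned long hi, unsigned int min)
{
	int count = deliver(lo, min);

	min += BITS_PER_LONG;            /* advance by cluster size */
	if (min > max_apic_id)           /* the second check Radim requests */
		return count;
	return count + deliver(hi, min);
}
```

For example, with only phys_map[0] and phys_map[2] populated, `kvm_pv_send_ipi_like(0xF, 0, 0)` delivers 2 IPIs and skips the NULL slots, while the malicious `min = 0xFFFFFFFF` from the guest sequence above delivers none instead of indexing out of bounds.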
