Message-ID: <2df0b6d18115fb7f2701587b7937d8ddae38e36a.camel@redhat.com>
Date:   Tue, 24 Aug 2021 19:07:58 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        Eduardo Habkost <ehabkost@...hat.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <seanjc@...gle.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        "Dr. David Alan Gilbert" <dgilbert@...hat.com>,
        Nitesh Narayan Lal <nitesh@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 4/4] KVM: x86: Fix stack-out-of-bounds memory access
 from ioapic_write_indirect()

On Tue, 2021-08-24 at 16:42 +0200, Vitaly Kuznetsov wrote:
> Eduardo Habkost <ehabkost@...hat.com> writes:
> 
> > On Tue, Aug 24, 2021 at 3:13 AM Vitaly Kuznetsov <vkuznets@...hat.com> wrote:
> > > Eduardo Habkost <ehabkost@...hat.com> writes:
> > > 
> > > > On Mon, Aug 23, 2021 at 04:30:28PM +0200, Vitaly Kuznetsov wrote:
> > > > > KASAN reports the following issue:
> > > > > 
> > > > >  BUG: KASAN: stack-out-of-bounds in kvm_make_vcpus_request_mask+0x174/0x440 [kvm]
> > > > >  Read of size 8 at addr ffffc9001364f638 by task qemu-kvm/4798
> > > > > 
> > > > >  CPU: 0 PID: 4798 Comm: qemu-kvm Tainted: G               X --------- ---
> > > > >  Hardware name: AMD Corporation DAYTONA_X/DAYTONA_X, BIOS RYM0081C 07/13/2020
> > > > >  Call Trace:
> > > > >   dump_stack+0xa5/0xe6
> > > > >   print_address_description.constprop.0+0x18/0x130
> > > > >   ? kvm_make_vcpus_request_mask+0x174/0x440 [kvm]
> > > > >   __kasan_report.cold+0x7f/0x114
> > > > >   ? kvm_make_vcpus_request_mask+0x174/0x440 [kvm]
> > > > >   kasan_report+0x38/0x50
> > > > >   kasan_check_range+0xf5/0x1d0
> > > > >   kvm_make_vcpus_request_mask+0x174/0x440 [kvm]
> > > > >   kvm_make_scan_ioapic_request_mask+0x84/0xc0 [kvm]
> > > > >   ? kvm_arch_exit+0x110/0x110 [kvm]
> > > > >   ? sched_clock+0x5/0x10
> > > > >   ioapic_write_indirect+0x59f/0x9e0 [kvm]
> > > > >   ? static_obj+0xc0/0xc0
> > > > >   ? __lock_acquired+0x1d2/0x8c0
> > > > >   ? kvm_ioapic_eoi_inject_work+0x120/0x120 [kvm]
> > > > > 
> > > > > The problem appears to be that 'vcpu_bitmap' is allocated as a single long
> > > > > on stack and it should really be KVM_MAX_VCPUS long. We also seem to clear
> > > > > the lower 16 bits of it with bitmap_zero() for no particular reason (my
> > > > > guess would be that 'bitmap' and 'vcpu_bitmap' variables in
> > > > > kvm_bitmap_or_dest_vcpus() caused the confusion: while the former is indeed
> > > > > 16 bits long, the latter should accommodate all possible vCPUs).
> > > > > 
> > > > > Fixes: 7ee30bc132c6 ("KVM: x86: deliver KVM IOAPIC scan request to target vCPUs")
> > > > > Fixes: 9a2ae9f6b6bb ("KVM: x86: Zero the IOAPIC scan request dest vCPUs bitmap")
> > > > > Reported-by: Dr. David Alan Gilbert <dgilbert@...hat.com>
> > > > > Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> > > > > ---
> > > > >  arch/x86/kvm/ioapic.c | 10 +++++-----
> > > > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > > > > 
> > > > > diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
> > > > > index ff005fe738a4..92cd4b02e9ba 100644
> > > > > --- a/arch/x86/kvm/ioapic.c
> > > > > +++ b/arch/x86/kvm/ioapic.c
> > > > > @@ -319,7 +319,7 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
> > > > >      unsigned index;
> > > > >      bool mask_before, mask_after;
> > > > >      union kvm_ioapic_redirect_entry *e;
> > > > > -    unsigned long vcpu_bitmap;
> > > > > +    unsigned long vcpu_bitmap[BITS_TO_LONGS(KVM_MAX_VCPUS)];
> > > > 
> > > > Is there a way to avoid this KVM_MAX_VCPUS-sized variable on the
> > > > stack?  This might hit us back when we increase KVM_MAX_VCPUS to
> > > > a few thousand VCPUs (I was planning to submit a patch for that
> > > > soon).
> > > 
> > > What's the short- or mid-term target?
> > 
> > Short term target is 2048 (which was already tested). Mid-term target
> > (not tested yet) is 4096, maybe 8192.
> > 
> > > Note, we're allocating KVM_MAX_VCPUS bits (not bytes!) here, this means
> > > that for e.g. 2048 vCPUs we need 256 bytes of the stack only. In case
> > > the target much higher than that, we will need to either switch to
> > > dynamic allocation or e.g. use pre-allocated per-CPU variables and make
> > > this a preempt-disabled region. I, however, would like to understand if
> > > the problem with allocating this from stack is real or not first.
> > 
> > Is 256 bytes too much here, or would that be OK?
> > 
> 
> AFAIR, on x86_64 stack size (both regular and irq) is 16k, eating 256
> bytes of it is probably OK. I'd start worrying when we go to 1024 (8k
> vCPUs) and above (but this is subjective of course).
> 
Hi,

Not a classical review, but I did some digital archaeology with this one,
trying to understand what is going on:


I think the 16-bit vCPU bitmap comes from the fact that the IOAPIC spec states
it can address up to 16 CPUs in physical destination mode.
 
In logical destination mode, assuming flat addressing and logical id = 1 << physical id
(which KVM hardcodes), it is only possible to address 8 CPUs.
 
However(!) in flat cluster mode, the logical APIC ID is split in two:
we have 16 clusters, each with 4 CPUs, so it is possible to address 64 CPUs.
Unlike the flat logical ID, KVM does honour the cluster ID,
thus one can assign, say, cluster ID 0 to any vCPU.
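
A quick user-space sketch of my understanding of the flat cluster split
(the value is made up; this is not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint8_t ldr = 0x25;            /* hypothetical logical dest ID */
            unsigned cluster = ldr >> 4;   /* bits 7:4 -> cluster ID, here 2 */
            unsigned cpu_mask = ldr & 0xf; /* bits 3:0 -> CPUs 0 and 2 of 4 */

            printf("cluster %u, in-cluster cpu mask 0x%x\n", cluster, cpu_mask);
            return 0;
    }

16 clusters times 4 CPUs per cluster is how we get to 64 addressable CPUs.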
 
 
Let's look at ioapic_write_indirect.
It does:
 
    -> bitmap_zero(&vcpu_bitmap, 16);
    -> kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, &vcpu_bitmap);
    -> kvm_make_scan_ioapic_request_mask(ioapic->kvm, &vcpu_bitmap); // use of the above bitmap
 
 
When we call kvm_bitmap_or_dest_vcpus, we can already overflow the bitmap,
since we pass all 8 bits of the destination even when it is physical.
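
Illustrative only, with the pre-patch single-long declaration:

    unsigned long vcpu_bitmap;          /* pre-patch: 64 bits of storage */

    /* An 8-bit destination from the guest, e.g. 0xf0 = 240: bit 240
     * lands 3 longs past the variable, i.e. a stack OOB write. */
    __set_bit(0xf0, &vcpu_bitmap);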
 
 
Let's examine kvm_bitmap_or_dest_vcpus:
 
  -> It calls kvm_apic_map_get_dest_lapic, which:
 
       -> for physical destinations, just sets the bitmap, which can overflow
          if we pass it an 8-bit destination (basically reserved bits + a 4-bit destination).
 
 
       -> for logical APIC IDs, seems to truncate the result to 16 bits, which isn't
          correct as I explained above, but should not overflow the result.
 
  
   -> If the call to kvm_apic_map_get_dest_lapic fails, it goes over all vCPUs
      and tries to match the destination. This can overflow as well.
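
From memory, that fallback path is roughly (simplified):

    kvm_for_each_vcpu(i, vcpu, kvm) {
            if (!kvm_apic_match_dest(vcpu, NULL, irq->shorthand,
                                     irq->dest_id, irq->dest_mode))
                    continue;
            /* i is a vCPU index; with a single-long bitmap,
             * any index >= 64 writes out of bounds */
            __set_bit(i, vcpu_bitmap);
    }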
 
 
I also don't like that ioapic_write_indirect calls kvm_bitmap_or_dest_vcpus twice,
the second time with 'old_dest_id'.
 
I am not 100% sure why old_dest_id/old_dest_mode are needed, as I don't see anything
in the function changing them.
I think only the guest can change them, so maybe the code deals with the guest changing
them while the code is running on a different vCPU?
 
The commit that introduced this code is 7ee30bc132c683d06a6d9e360e39e483e3990708.
Nitesh Narayan Lal, maybe you remember something about it?
 

I also worry a lot about the other callers of kvm_apic_map_get_dest_lapic.
 
It is also called from kvm_irq_delivery_to_apic_fast and from kvm_intr_is_single_vcpu_fast,
and both also seem to use 'unsigned long' for the bitmap, then only walk 16 bits of it.
 
I haven't dug into them, but these don't seem to be IOAPIC related, and I think
they can overwrite the stack as well.
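
The pattern I'm worried about in both, paraphrased from memory:

    unsigned long bitmap;       /* a single long on the caller's stack */

    kvm_apic_map_get_dest_lapic(kvm, &src, irq, map, &dst, &bitmap);
    for_each_set_bit(i, &bitmap, 16)    /* only 16 bits ever walked */
            ...                         /* deliver to dst[i] */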
 

The whole thing seems a bit busted IMHO.

On the topic of enlarging these bitmaps to cover all vCPUs:

I also share the worry about having the whole bitmap on the kernel stack for a very
large number of vCPUs.
Maybe we should use an on-stack bitmap for a sane number of vCPUs, and otherwise
fall back to a kmalloc'ed buffer (see the sketch below)?

Also in theory large bitmaps might affect performance a bit.
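
Something along these lines, perhaps (just a sketch; STACK_VCPUS is a made-up
threshold, and the right GFP flags depend on the calling context):

    #define STACK_VCPUS 1024    /* made-up cutoff for on-stack allocation */

    unsigned long stack_bitmap[BITS_TO_LONGS(STACK_VCPUS)] = {};
    unsigned long *vcpu_bitmap = stack_bitmap;

    if (KVM_MAX_VCPUS > STACK_VCPUS) {
            /* GFP_ATOMIC if called under a spinlock, e.g. ioapic->lock */
            vcpu_bitmap = bitmap_zalloc(KVM_MAX_VCPUS, GFP_ATOMIC);
            if (!vcpu_bitmap)
                    return;     /* or propagate -ENOMEM */
    }

    /* ... fill and use vcpu_bitmap for up to KVM_MAX_VCPUS bits ... */

    if (vcpu_bitmap != stack_bitmap)
            bitmap_free(vcpu_bitmap);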

My 0.2 cents.

Best regards,
	Maxim Levitsky
