Date:   Fri, 25 Aug 2023 07:54:55 -0600
From:   Peter Gonda <pgonda@...gle.com>
To:     "Gupta, Pankaj" <pankaj.gupta@....com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] KVM: SVM: Get source vCPUs from source VM for SEV-ES intrahost migration

On Fri, Aug 25, 2023 at 4:23 AM Gupta, Pankaj <pankaj.gupta@....com> wrote:
>
> On 8/25/2023 4:23 AM, Sean Christopherson wrote:
> > Fix a goof where KVM tries to grab source vCPUs from the destination VM
> > when doing intrahost migration.  Grabbing the wrong vCPU not only hoses
> > the guest, it also crashes the host due to the VMSA pointer being left
> > NULL.
> >
> >    BUG: unable to handle page fault for address: ffffe38687000000
> >    #PF: supervisor read access in kernel mode
> >    #PF: error_code(0x0000) - not-present page
> >    PGD 0 P4D 0
> >    Oops: 0000 [#1] SMP NOPTI
> >    CPU: 39 PID: 17143 Comm: sev_migrate_tes Tainted: G   O       6.5.0-smp--fff2e47e6c3b-next #151
> >    Hardware name: Google, Inc. Arcadia_IT_80/Arcadia_IT_80, BIOS 34.28.0 07/10/2023
> >    RIP: 0010:__free_pages+0x15/0xd0
> >    RSP: 0018:ffff923fcf6e3c78 EFLAGS: 00010246
> >    RAX: 0000000000000000 RBX: ffffe38687000000 RCX: 0000000000000100
> >    RDX: 0000000000000100 RSI: 0000000000000000 RDI: ffffe38687000000
> >    RBP: ffff923fcf6e3c88 R08: ffff923fcafb0000 R09: 0000000000000000
> >    R10: 0000000000000000 R11: ffffffff83619b90 R12: ffff923fa9540000
> >    R13: 0000000000080007 R14: ffff923f6d35d000 R15: 0000000000000000
> >    FS:  0000000000000000(0000) GS:ffff929d0d7c0000(0000) knlGS:0000000000000000
> >    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >    CR2: ffffe38687000000 CR3: 0000005224c34005 CR4: 0000000000770ee0
> >    PKRU: 55555554
> >    Call Trace:
> >     <TASK>
> >     sev_free_vcpu+0xcb/0x110 [kvm_amd]
> >     svm_vcpu_free+0x75/0xf0 [kvm_amd]
> >     kvm_arch_vcpu_destroy+0x36/0x140 [kvm]
> >     kvm_destroy_vcpus+0x67/0x100 [kvm]
> >     kvm_arch_destroy_vm+0x161/0x1d0 [kvm]
> >     kvm_put_kvm+0x276/0x560 [kvm]
> >     kvm_vm_release+0x25/0x30 [kvm]
> >     __fput+0x106/0x280
> >     ____fput+0x12/0x20
> >     task_work_run+0x86/0xb0
> >     do_exit+0x2e3/0x9c0
> >     do_group_exit+0xb1/0xc0
> >     __x64_sys_exit_group+0x1b/0x20
> >     do_syscall_64+0x41/0x90
> >     entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >     </TASK>
> >    CR2: ffffe38687000000
> >
> > Fixes: 6defa24d3b12 ("KVM: SEV: Init target VMCBs in sev_migrate_from")
> > Cc: stable@...r.kernel.org
> > Cc: Peter Gonda <pgonda@...gle.com>
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > ---
> >   arch/x86/kvm/svm/sev.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 2cd15783dfb9..acc700bcb299 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -1739,7 +1739,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
> >                * Note, the source is not required to have the same number of
> >                * vCPUs as the destination when migrating a vanilla SEV VM.
> >                */
> > -             src_vcpu = kvm_get_vcpu(dst_kvm, i);
> > +             src_vcpu = kvm_get_vcpu(src_kvm, i);
> >               src_svm = to_svm(src_vcpu);
> >
> >               /*
>
> Reviewed-by: Pankaj Gupta <pankaj.gupta@....com>

Thanks Sean.

Reviewed-by: Peter Gonda <pgonda@...gle.com>
