Date: Wed, 28 Sep 2022 19:04:36 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Emanuele Giuseppe Esposito <eesposit@...hat.com>, kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
	Wanpeng Li <wanpengli@...cent.com>, Jim Mattson <jmattson@...gle.com>,
	Joerg Roedel <joro@...tes.org>, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>, David Hildenbrand <david@...hat.com>,
	Maxim Levitsky <mlevitsk@...hat.com>, x86@...nel.org,
	"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 4/9] kvm_main.c: split logic in kvm_set_memslots

On 9/9/22 12:45, Emanuele Giuseppe Esposito wrote:
> +/*
> + * Takes kvm->slots_arch_lock, and releases it only if
> + * invalid_slot allocation or kvm_prepare_memory_region failed.
> +*/

Much simpler: "kvm->slots_arch_lock is taken on a successful return."

This is a small change in phrasing, but it makes a lot more sense: on
success you are preparing for the final commit operation, otherwise you
just want the caller to return your errno value.

[...]

> +/* Must be called with kvm->slots_arch_lock held, but releases it. */

s/but/and/.  Even better, "and releases it before returning".  "But"
tells the reader that something strange is going on, "and" tells them
that something simple is going on.

I would also rename the functions along the lines of my review of
patch 3, to highlight that these functions prepare/commit a *change* to
a memslot.
> +static void kvm_finish_memslot(struct kvm *kvm,
> +			       struct kvm_internal_memory_region_list *batch)
> +{
> +	struct kvm_memory_slot *invalid_slot = batch->invalid;
> +	struct kvm_memory_slot *old = batch->old;
> +	struct kvm_memory_slot *new = batch->new;
> +	enum kvm_mr_change change = batch->change;

	lockdep_assert_held(&kvm->slots_arch_lock);

>
>  	/*
>  	 * For DELETE and MOVE, the working slot is now active as the INVALID
> @@ -1883,6 +1898,18 @@ static int kvm_set_memslot(struct kvm *kvm,
>  	 * responsible for knowing that new->arch may be stale.
>  	 */
>  	kvm_commit_memory_region(kvm, batch);
> +}
> +
> +static int kvm_set_memslot(struct kvm *kvm,
> +			   struct kvm_internal_memory_region_list *batch)
> +{
> +	int r;
> +
> +	r = kvm_prepare_memslot(kvm, batch);
> +	if (r)
> +		return r;
> +
> +	kvm_finish_memslot(kvm, batch);
>
>  	return 0;

Ok, these are the functions I hinted at in the review of the previous
patch, so we're not far away.  You should be able to move the
kvm_set_memslot call to kvm_set_memory_region in patch 3, and then
replace it with the two calls here directly in kvm_set_memory_region.

Paolo