Message-ID: <20191105135414.GA30717@redhat.com>
Date: Tue, 5 Nov 2019 08:54:14 -0500
From: Andrea Arcangeli <aarcange@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
On Tue, Nov 05, 2019 at 11:37:47AM +0100, Paolo Bonzini wrote:
> For the rest, please do this before posting again:
>
> - ensure that everything is bisectable
x86-64 is already bisectable.
Whether all the other archs are bisectable I can't say; I didn't
check them all anyway.
Even 4/13 is suboptimal and needs to be re-done later in a more
optimal way. I prefer all logic changes to happen at later steps so
one can at least bisect to something that functionally works like
before. 4/13 would also need to be merged into the huge patch if one
wants to guarantee bisectability on all CPUs, but then it would just
be hidden there in the huge patch.
Obviously I can squash both 3/13 and 4/13 into 2/13, but I don't
feel I'd be doing the right thing by squashing them just to increase
bisectability.
> - look into how to remove the modpost warnings. A simple (though
> somewhat ugly) way is to keep a kvm.ko module that includes common
> virt/kvm/ code as well as, for x86 only, page_track.o. A few functions,
> such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
> have to be moved into mmu.h, but that's not a big deal.
I think we should:

1) add a whitelist to shut off the warnings on demand

2) verify that if two modules register the same exported symbol, the
   second one fails to load and the module code is robust about
   that; this hopefully should already be the case (see the sketch
   below)

Provided 2) is verified, the whitelist is more efficient than losing
4k of RAM in all KVM hypervisors out there.
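Something like this untested module sketch could be used to check
2); the module and symbol names below are made up and not part of
the series. Build it twice under two different module names and
insmod both: the second insmod should be refused with a duplicate
symbol error, without destabilizing anything else.

/* dup_export_test.c - minimal module exporting one symbol, to check
 * that the module loader rejects a second module exporting the same
 * symbol name (hypothetical test, names invented for this example).
 */
#include <linux/module.h>
#include <linux/kernel.h>

int kvm_dup_export_test_sym;
EXPORT_SYMBOL_GPL(kvm_dup_export_test_sym);

static int __init dup_export_test_init(void)
{
	pr_info("dup_export_test loaded\n");
	return 0;
}

static void __exit dup_export_test_exit(void)
{
}

module_init(dup_export_test_init);
module_exit(dup_export_test_exit);
MODULE_LICENSE("GPL");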
> - provide at least some examples of replacing the NULL kvm_x86_ops
> checks with error codes in the function (or just early "return"s). I
> can help with the others, but remember that for the patch to be merged,
> kvm_x86_ops must be removed completely.
Even if kvm_x86_ops weren't guaranteed to go away, this would
already provide all the performance benefit to the KVM users, so I
wouldn't see a reason not to apply it even if kvm_x86_ops cannot go
away. That said, it will go away and there's no concern about it.
It's just that the patchset seems large enough already and it
already gets heavy rejects at every forward port. I simply stopped
at the first self-contained step that provides all the performance
benefits.
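As an example of the kind of conversion you're asking for (the hook
name below is made up, it's not a real kvm_x86_ops member, and this
isn't code from the series), the modular NULL check in the caller:

	if (!kvm_x86_ops->hypothetical_hook)
		return -ENOTSUPP;
	ret = kvm_x86_ops->hypothetical_hook(vcpu);

would become a direct call in the monolithic model:

	ret = kvm_x86_hypothetical_hook(vcpu);

with the vendor that doesn't implement the hook (say svm.c)
providing a stub that returns the error itself:

	int kvm_x86_hypothetical_hook(struct kvm_vcpu *vcpu)
	{
		return -ENOTSUPP;
	}

so the caller either checks the return value or just does the early
"return" you mention.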
If I go ahead and remove kvm_x86_ops, how do I know it won't get
heavy rejects the next day I rebase, forcing me to redo it all from
scratch? If you explain to me how you're going to guarantee that I
won't have to do that work more than once, I'd be happy to go ahead.
Thanks,
Andrea