Message-ID: <330acce5-a527-543b-84c0-f3d8d277a0e2@redhat.com>
Date: Tue, 5 Nov 2019 15:09:37 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Andrea Arcangeli <aarcange@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH 03/13] kvm: monolithic: fixup x86-32 build
On 05/11/19 14:54, Andrea Arcangeli wrote:
> x86-64 is already bisectable.
>
> All the other archs should be bisectable too, but I didn't check them
> all.
>
> Even 4/13 is suboptimal and needs to be redone later in a more
> optimal way. I prefer all logic changes to happen at later steps so
> one can at least bisect to something that functionally works like
> before. 4/13 would also need to be merged into the huge patch if one
> wants to guarantee bisectability on all CPUs, but then it would just
> be hidden there.
>
> Obviously I could squash both 3/13 and 4/13 into 2/13, but squashing
> them just to increase bisectability doesn't feel like the right thing
> to do.
You can reorder patches so that kvm_x86_ops assignments never happen.
That way, 4/13 for example would be moved to the very beginning.
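To make the end state concrete, here is roughly what the conversion
looks like (tlb_flush is a real callback today; the direct-call wrapper
name below is just a hypothetical placeholder):

    /* Modular KVM today: the vendor module assigns a pointer table at
     * init time and common code dispatches through it. */
    struct kvm_x86_ops *kvm_x86_ops;    /* set by kvm-intel/kvm-amd */

    kvm_x86_ops->tlb_flush(vcpu, true); /* indirect call, retpolined */

    /* Monolithic KVM: common code calls an ordinary external function
     * that the vendor code linked into the same module defines, so the
     * assignment above never needs to happen at all. */
    void kvm_x86_ops_tlb_flush(struct kvm_vcpu *vcpu, bool invalidate_gpa);

    kvm_x86_ops_tlb_flush(vcpu, true);  /* direct call */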
>> - look into how to remove the modpost warnings. A simple (though
>> somewhat ugly) way is to keep a kvm.ko module that includes common
>> virt/kvm/ code as well as, for x86 only, page_track.o. A few functions,
>> such as kvm_mmu_gfn_disallow_lpage and kvm_mmu_gfn_allow_lpage, would
>> have to be moved into mmu.h, but that's not a big deal.
>
> I think we should:
>
> 1) add a whitelist to shut off the warnings on demand
Do you mean adding a whitelist to modpost? That would work, though I am
not sure if the module maintainer (Jessica Yu) would accept that.
> 2) verify that if two modules register the same exported symbol, the
> second one fails to load and the module code is robust about that;
> hopefully this is already the case
>
> Provided 2) is verified, the whitelist is more efficient than losing
> 4k of RAM on every KVM hypervisor out there.
I agree.
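Verifying 2) out of tree is simple with two throwaway modules (all
names below are mine, purely a test vehicle):

    /* dup_a.c */
    #include <linux/module.h>

    int kvm_dup_test_sym(void) { return 1; }
    EXPORT_SYMBOL_GPL(kvm_dup_test_sym);

    MODULE_LICENSE("GPL");

    /* dup_b.c -- exports the same name.  After "insmod dup_a.ko", an
     * "insmod dup_b.ko" should be refused by the module loader, with
     * something like "dup_b: exports duplicate symbol kvm_dup_test_sym
     * (owned by dup_a)" in dmesg, instead of silently overriding the
     * first export. */
    #include <linux/module.h>

    int kvm_dup_test_sym(void) { return 2; }
    EXPORT_SYMBOL_GPL(kvm_dup_test_sym);

    MODULE_LICENSE("GPL");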
>> - provide at least some examples of replacing the NULL kvm_x86_ops
>> checks with error codes in the function (or just early "return"s). I
>> can help with the others, but remember that for the patch to be merged,
>> kvm_x86_ops must be removed completely.
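For example, the shape of such a conversion would be the following
(do_feature is a made-up callback name, purely illustrative):

    /* Before: common code tests the pointer because SVM leaves it NULL. */
    if (kvm_x86_ops->do_feature)
            r = kvm_x86_ops->do_feature(vcpu);
    else
            r = -ENOTSUPP;

    /* After: SVM grows a stub that returns the error itself... */
    static int svm_do_feature(struct kvm_vcpu *vcpu)
    {
            return -ENOTSUPP;
    }

    /* ...so the call site loses the NULL check and can later become a
     * direct call. */
    r = kvm_x86_ops->do_feature(vcpu);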
>
> Even if kvm_x86_ops weren't guaranteed to go away, this would already
> provide all of the performance benefit to KVM users, so I wouldn't
> see a reason not to apply it.
The answer is maintainability. My suggestion is that we start looking
into removing all assignments and tests of kvm_x86_ops, one step at a
time. Until this is done, unfortunately we won't be able to reap the
performance benefit. But the advantage is that this can be done in many
separate submissions; it doesn't have to be one huge patch.
Once that is done, removing kvm_x86_ops itself is trivial. It's okay if
the intermediate steps have minimal performance regressions, since we
know what the end result will look like. I have to order patches with
maintainability first and performance second, if possible.
By the way, we are already planning to make some module parameters
per-VM instead of global, so this refactoring would also help that effort.
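The shape would be roughly this (enable_foo is a made-up knob): the
global module parameter only supplies the default, and each VM captures
it at creation time so it can later be overridden per guest:

    /* Today: one global knob shared by every VM on the host. */
    static bool enable_foo = true;
    module_param(enable_foo, bool, 0444);

    /* Per-VM: snapshot the global default when the VM is created; a
     * later KVM_ENABLE_CAP-style interface can then override it for
     * this guest only. */
    struct kvm_arch {
            /* ... existing fields ... */
            bool enable_foo;
    };

    kvm->arch.enable_foo = enable_foo;  /* in the VM-creation path */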
> That said, it will go away; there's no concern about it. It's just
> that the patchset already seems large enough, and it already rejects
> heavily at every forward-port. I simply stopped at the first
> self-contained step that provides all the performance benefits.
That is good enough to prove the feasibility of the idea, so I agree
that was a good plan.
Paolo
> If I go ahead and remove kvm_x86_ops, how do I know it won't reject
> heavily the next day I rebase, so that I have to redo it all from
> scratch? If you explain to me how you're going to guarantee that I
> won't have to do that work more than once, I'd be happy to go ahead.