Message-ID: <20191217162405.GD7258@xz-x1>
Date: Tue, 17 Dec 2019 11:24:05 -0500
From: Peter Xu <peterx@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>,
"Tian, Kevin" <kevin.tian@...el.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On Tue, Dec 17, 2019 at 10:01:40AM +0100, Paolo Bonzini wrote:
> On 16/12/19 19:54, Peter Xu wrote:
> > On Mon, Dec 16, 2019 at 11:08:15AM +0100, Paolo Bonzini wrote:
> >>> Although now that we have kvm_get_running_vcpu() all the [&] cases
> >>> should be fine without changing anything, I tend to add another
> >>> patch in the next post to explicitly convert all the [&] cases to
> >>> pass a vcpu pointer instead of a kvm pointer, if no one disagrees,
> >>> and then verify that against kvm_get_running_vcpu().
> >>
> >> This is a good idea but remember not to convert those to
> >> kvm_vcpu_write_guest, because you _don't_ want these writes to touch
> >> SMRAM (most of the addresses are OS-controlled rather than
> >> firmware-controlled).
> >
> > OK. I think I only need to pass in vcpu* instead of kvm* in
> > kvm_write_guest_page(), just like kvm_vcpu_write_guest() does, while
> > still writing only to address space id==0 there.
>
> No, please pass it all the way down to the [&] functions but not to
> kvm_write_guest_page. Those should keep using vcpu->kvm.
Actually I even wanted to refactor these helpers. I mean, we have two
sets of helpers now, kvm_{read|write}*() and kvm_vcpu_{read|write}*(),
so one set is per-vm and the other is per-vcpu. IIUC the only
difference between the two is whether we consider
((vcpu)->arch.hflags & HF_SMM_MASK) or always write to address space
zero. Could we unify them into a single set of helpers (I'd drop the
*_vcpu_* variants because they're longer to write) that always takes
vcpu* as the first parameter? Then we add another parameter "vcpu_smm"
to say whether the HF_SMM_MASK flag should be considered.

kvmgt is of course special here because it does not have a vcpu
context, but since we're going to rework that anyway, I'd like to know
whether you agree with the above refactoring, leaving the kvmgt caller
aside.
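To make it concrete, here is a rough sketch of the address-space
selection I have in mind. The "vcpu_smm" parameter and the helper name
are just placeholders for discussion, not a final API:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the real KVM definitions, for illustration only. */
#define HF_SMM_MASK (1 << 6)

struct kvm_vcpu {
	unsigned long hflags;	/* would be vcpu->arch.hflags on x86 */
};

/*
 * Sketch of the unified helper: callers always pass vcpu*, and the
 * "vcpu_smm" flag says whether the SMM state should be considered.
 * With vcpu_smm == false we always use address space 0, which is what
 * the current per-vm kvm_write_guest_page() effectively does.
 */
static int write_as_id(struct kvm_vcpu *vcpu, bool vcpu_smm)
{
	if (vcpu_smm && (vcpu->hflags & HF_SMM_MASK))
		return 1;	/* SMRAM address space */
	return 0;		/* normal address space */
}
```

So e.g. the [&] callers and kvmgt's replacement would pass
vcpu_smm=false and keep writing to address space 0, while the current
kvm_vcpu_write_guest() users would pass vcpu_smm=true.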
>
> >>> init_rmode_tss or init_rmode_identity_map. But I've marked them as
> >>> unimportant because they should only happen once at boot.
> >>
> >> We need to check if userspace can add an arbitrary number of entries by
> >> calling KVM_SET_TSS_ADDR repeatedly. I think it can; we'd have to
> >> forbid multiple calls to KVM_SET_TSS_ADDR which is not a problem in general.
> >
> > Will do that altogether with the series. I can further change both
> > of these calls to not track dirty at all, which shouldn't be hard;
> > after all userspace didn't even know about them, as you mentioned
> > below.
> >
> > Is there anything to explain what KVM_SET_TSS_ADDR is used for? This
> > is the thing I found that is closest to useful (from api.txt):
>
> The best description is probably at https://lwn.net/Articles/658883/:
>
> They are needed for unrestricted_guest=0. Remember that, in that case,
> the VM always runs in protected mode and with paging enabled. In order
> to emulate real mode you put the guest in a vm86 task, so you need some
> place for a TSS and for a page table, and they must be in guest RAM
> because the guest's TR and CR3 point to them. They are invisible to the
> guest, because the STR and MOV-from-CR instructions are invalid in vm86
> mode, but they must be there.
>
> If you don't call KVM_SET_TSS_ADDR you actually get a complaint in
> dmesg, and the TR stays at 0. I am not really sure what kind of bad
> things can happen with unrestricted_guest=0; probably you just get a VM
> Entry failure. The TSS takes 3 pages of memory. An interesting point is
> that you actually don't need to set the TR selector to a valid value (as
> you would do when running in "normal" vm86 mode), you can simply set the
> base and limit registers that are hidden in the processor, and generally
> inaccessible except through VMREAD/VMWRITE or system management mode. So
> KVM needs to set up a TSS but not a GDT.
>
> For paging, instead, 1 page is enough because we have only 4GB of memory
> to address. KVM disables CR4.PAE (page address extensions, aka 8-byte
> entries in each page directory or page table) and enables CR4.PSE (page
> size extensions, aka 4MB huge pages support with 4-byte page directory
> entries). One page then fits 1024 4-byte page directory entries, each
> mapping a 4MB huge page, totaling exactly 4GB. If you don't set it, the
> page table defaults to address 0xFFFBC000. QEMU changes it to
> 0xFEFFC000 so that the BIOS can be up to 16MB in size (the default only
> allows 256k between 0xFFFC0000 and 0xFFFFFFFF).
>
> The different handling, where only the page table has a default, is
> unfortunate, but so goes life...
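Just to check my understanding of that layout: with CR4.PSE each of
the 1024 4-byte PDEs maps a 4MB huge page, so a single page covers the
whole 4GB. A quick sketch, using the standard IA-32 PDE flag bits as
an illustration rather than the actual init_rmode_identity_map() code:

```c
#include <assert.h>
#include <stdint.h>

#define PDE_PRESENT	(1u << 0)
#define PDE_RW		(1u << 1)
#define PDE_PS		(1u << 7)	/* 4MB page, requires CR4.PSE */

/*
 * Fill one page worth of 32-bit page directory entries, each mapping
 * a 4MB huge page, identity-mapping the full 4GB address space:
 * entry i covers physical addresses [i << 22, (i + 1) << 22).
 */
static void build_identity_pd(uint32_t pd[1024])
{
	for (uint32_t i = 0; i < 1024; i++)
		pd[i] = (i << 22) | PDE_PRESENT | PDE_RW | PDE_PS;
}
```

1024 entries * 4MB per entry is exactly 4GB, so one 4KB page of PDEs
is indeed enough.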
>
> > So... has it really popped into existence somewhere? It would be good
> > at least to know why it does not need to be migrated.
>
> It does not need to be migrated just because the contents are constant.
OK, thanks! IIUC they should likely be all zeros then.
Do you think it's time to add most of these to kvm/api.txt? :) I can
do that too if you like.
--
Peter Xu