Message-ID: <20191217194114.GG7258@xz-x1>
Date:   Tue, 17 Dec 2019 14:41:14 -0500
From:   Peter Xu <peterx@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Christophe de Dinechin <dinechin@...hat.com>,
        Christophe de Dinechin <christophe.de.dinechin@...il.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        Eric Auger <eric.auger@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking

On Tue, Dec 17, 2019 at 05:48:58PM +0100, Paolo Bonzini wrote:
> On 17/12/19 17:42, Peter Xu wrote:
> > 
> > However I just noticed something... Note that we still didn't read
> > into non-x86 archs, I think it's the same question as when I asked
> > whether we can unify the kvm[_vcpu]_write() interfaces and you'd like
> > me to read the non-x86 archs - I think it's time I read them, because
> > it's still possible that non-x86 archs will still need the per-vm
> > ring... then that could be another problem if we want to at last
> > spread the dirty ring idea outside of x86.
> 
> We can take a look, but I think based on x86 experience it's okay if we
> restrict dirty ring to arches that do no VM-wide accesses.

Here it is - a quick update on the callers of mark_page_dirty_in_slot().
It's the same reverse trace, but ignoring all the common and x86 code
paths (which I covered in the other thread):

==================================

   mark_page_dirty_in_slot (non-x86)
        mark_page_dirty
            kvm_write_guest_page
                kvm_write_guest
                    kvm_write_guest_lock
                        vgic_its_save_ite [?]
                        vgic_its_save_dte [?]
                        vgic_its_save_cte [?]
                        vgic_its_save_collection_table [?]
                        vgic_v3_lpi_sync_pending_status [?]
                        vgic_v3_save_pending_tables [?]
                    kvmppc_rtas_hcall [&]
                    kvmppc_st [&]
                    access_guest [&]
                    put_guest_lc [&]
                    write_guest_lc [&]
                    write_guest_abs [&]
            mark_page_dirty
                _kvm_mips_map_page_fast [&]
                kvm_mips_map_page [&]
                kvmppc_mmu_map_page [&]
                kvmppc_copy_guest
                    kvmppc_h_page_init [&]
                kvmppc_xive_native_vcpu_eq_sync [&]
                adapter_indicators_set [?] (from kvm_set_irq)
                kvm_s390_sync_dirty_log [?]
                unpin_guest_page
                    unpin_blocks [&]
                    unpin_scb [&]

Cases with [*]: should not matter much
           [&]: should be able to change to per-vcpu context
           [?]: uncertain...

==================================
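
(For context, the split between the two marking paths above looks
 roughly like below in virt/kvm/kvm_main.c - a simplified sketch, not
 verbatim kernel code - and only the vcpu variant would know which
 per-vcpu dirty ring to push to:)

/* Simplified sketch of virt/kvm/kvm_main.c, not verbatim. */
static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
				    gfn_t gfn)
{
	if (memslot && memslot->dirty_bitmap) {
		unsigned long rel_gfn = gfn - memslot->base_gfn;

		/* Today: per-memslot dirty bitmap.  A per-vcpu dirty
		 * ring would need to know which vcpu (if any) did the
		 * write at this point. */
		set_bit_le(rel_gfn, memslot->dirty_bitmap);
	}
}

/* Per-VM path: no vcpu context.  All the callers in the trace above go
 * through here today; the "[&]" ones could be converted to the vcpu
 * variant. */
void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
{
	mark_page_dirty_in_slot(gfn_to_memslot(kvm, gfn), gfn);
}

/* Per-vcpu path: this is what a per-vcpu dirty ring would prefer,
 * since the vcpu tells us which ring to use. */
void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	mark_page_dirty_in_slot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
}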

This time we've got 8 leaves with "[?]".

I'm starting with these:

        vgic_its_save_ite [?]
        vgic_its_save_dte [?]
        vgic_its_save_cte [?]
        vgic_its_save_collection_table [?]
        vgic_v3_lpi_sync_pending_status [?]
        vgic_v3_save_pending_tables [?]

These come from ARM specific ioctls like KVM_DEV_ARM_ITS_SAVE_TABLES,
KVM_DEV_ARM_ITS_RESTORE_TABLES, KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES.
IIUC ARM needs these to allow proper migration, and these paths indeed
do not have a vcpu context.
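
(From the userspace side these are triggered on the KVM device fds, not
 on any vcpu fd - roughly like below.  This is only a sketch of what a
 VMM would do, with error handling omitted, and its_fd/vgic_fd assumed
 to be the ITS and vgic-v3 device fds the VMM created earlier:)

struct kvm_device_attr attr = {
	.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
	.attr  = KVM_DEV_ARM_ITS_SAVE_TABLES,
};
/* Dump the ITS tables into guest RAM; no vcpu is involved. */
ioctl(its_fd, KVM_SET_DEVICE_ATTR, &attr);

/* Same for the LPI pending bits, on the vgic-v3 device. */
attr.attr = KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES;
ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);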

(Though I'm a bit curious why ARM didn't simply migrate this
 information explicitly from userspace; instead it seems that the
 tables are dumped into guest RAM and then recovered from there, which
 seems a bit weird to me.)
 
Then it's this:

        adapter_indicators_set [?]

This is s390 specific and should come from kvm_set_irq.  I'm not sure
whether we can remove this mark_page_dirty() call if the indicator is
applied from another kernel structure (which should be migrated
properly, IIUC).  But I might be completely wrong.
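
(For reference, the injection path here only carries the struct kvm and
 not a vcpu, which is why there's no vcpu context; the prototype is
 roughly as declared in include/linux/kvm_host.h:)

/* Only struct kvm is available on this path, so anything dirtied while
 * delivering the interrupt (e.g. the s390 adapter indicators) cannot
 * easily be attributed to a vcpu. */
int kvm_set_irq(struct kvm *kvm, int irq_source_id, u32 irq,
		int level, bool line_status);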

        kvm_s390_sync_dirty_log [?]
        
This is also s390 specific; it should be collecting dirty information
from the hardware PGSTE_UC_BIT.  There's definitely no vcpu context
here.

(I'd also be glad if anyone could hint to me why x86 cannot use the
 page table dirty bits for dirty tracking, if there's a short answer...)

I think my conclusions so far...

  - for s390 I don't think we even need the dirty ring at all, because
    the hardware tracking should be more efficient there, so we don't
    need to care much about s390 in the dirty ring design either,

  - for ARM, those no-vcpu-context dirty trackings probably need to be
    considered, but hopefully they're very special paths that rarely
    happen.  The bad thing is that I haven't dug into how many pages
    will be dirtied when an ARM guest dumps all these tables, so it
    could be a burst...  If it is, then there's a risk of triggering
    the ring-full condition (which we wanted to avoid..); a toy sketch
    of that concern follows below.
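
(To make the ring-full concern concrete, here's a toy model - not the
 actual structures from this series - of a fixed-size per-vcpu ring:
 once a burst dirties more pages than the remaining slots, the producer
 has to stop and wait for userspace to harvest the ring, which is hard
 to arrange when there's no vcpu to kick out.  Kernel-style u32/u64 and
 all names below are made up for illustration:)

/* Toy model only; not the structures from this series. */
struct toy_dirty_ring {
	u32 size;		/* number of slots, power of two */
	u32 dirty_index;	/* producer index (KVM) */
	u32 reset_index;	/* consumer index (userspace) */
	u64 *gfns;		/* dirtied gfns, "size" entries */
};

/* Returns false when the ring is full.  The caller would then need to
 * force an exit so userspace can harvest and reset the ring - which is
 * exactly what a burst of no-vcpu-context dirtying makes awkward. */
static bool toy_ring_push(struct toy_dirty_ring *ring, u64 gfn)
{
	if (ring->dirty_index - ring->reset_index == ring->size)
		return false;
	ring->gfns[ring->dirty_index & (ring->size - 1)] = gfn;
	ring->dirty_index++;
	return true;
}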

I'm CCing Eric for ARM, and Conny & David for s390, just in case there
are further inputs.

Thanks,

-- 
Peter Xu
