Message-ID: <4C19CDCC.4060404@cn.fujitsu.com>
Date:	Thu, 17 Jun 2010 15:25:00 +0800
From:	Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To:	Avi Kivity <avi@...hat.com>
CC:	Marcelo Tosatti <mtosatti@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	KVM list <kvm@...r.kernel.org>
Subject: Re: [PATCH 5/6] KVM: MMU: prefetch ptes when intercepted guest #PF



Avi Kivity wrote:

>> +        if (*spte != shadow_trap_nonpresent_pte)
>> +            continue;
>> +
>> +        gfn = sp->gfn + (i << ((sp->role.level - 1) * PT64_LEVEL_BITS));
>>    
> 

Avi,

Thanks for your comment.

> Can calculate outside the loop and use +=.
> 

Nice idea, I'll do it in the next version.
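
Something like this, I suppose (untested sketch; 'start' and 'end' stand for
whatever the loop bounds end up being in the direct path, and the step uses the
same expression the patch already has):

    gfn = sp->gfn + (start << ((sp->role.level - 1) * PT64_LEVEL_BITS));

    for (i = start; i < end; i++) {
        ...
        gfn += 1 << ((sp->role.level - 1) * PT64_LEVEL_BITS);
    }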

> Can this in fact work for level != PT_PAGE_TABLE_LEVEL?  We might start
> at PT_PAGE_DIRECTORY_LEVEL but get 4k pages while iterating.

Ah, I forgot that. We can't assume the host also supports a huge page for the
next gfn; as Marcelo suggested, we should "only map with level > 1 if the host
page matches the size".

The problem is that looking up the host page size requires holding
'mm->mmap_sem', which can't be taken in atomic context, and it is a slow path
anyway, while we want the pte prefetch path to be fast.

How about only allowing prefetch for sp->role.level == 1 for now? I'll improve
it in the future, I think it needs more time :-)
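
For example, at the top of the direct prefetch path (untested sketch):

    /* only prefetch last-level sptes for now */
    if (sp->role.level != PT_PAGE_TABLE_LEVEL)
        return;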

> 
>> +
>> +        pfn = gfn_to_pfn_atomic(vcpu->kvm, gfn);
>> +        if (is_error_pfn(pfn)) {
>> +            kvm_release_pfn_clean(pfn);
>> +            break;
>> +        }
>> +        if (pte_prefetch_topup_memory_cache(vcpu))
>> +            break;
>> +
>> +        mmu_set_spte(vcpu, spte, ACC_ALL, ACC_ALL, 0, 0, 1, NULL,
>> +                 sp->role.level, gfn, pfn, true, false);
>> +    }
>> +}
>>    
> 
> Nice.  Direct prefetch should usually succeed.
> 
> Can later augment to call get_users_pages_fast(..., PTE_PREFETCH_NUM,
> ...) to reduce gup overhead.

But we can't assume the gfns' hvas are consecutive; for example, gfn and gfn+1
may be in different slots.
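
Unless we check up front that the whole window stays inside a single memslot,
so the hvas really are contiguous; roughly (untested sketch, the helper name is
made up just for illustration):

    /*
     * hvas are only known to be contiguous while the gfns stay
     * inside one memslot
     */
    static bool prefetch_range_in_one_slot(struct kvm *kvm, gfn_t gfn)
    {
        struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

        return slot &&
               gfn + PTE_PREFETCH_NUM <= slot->base_gfn + slot->npages;
    }

and fall back to the per-gfn gfn_to_pfn_atomic() path when it returns false.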

> 
>>
>> +static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, u64 *sptep)
>> +{
>> +    struct kvm_mmu_page *sp;
>> +    pt_element_t *table = NULL;
>> +    int offset = 0, shift, index, i;
>> +
>> +    sp = page_header(__pa(sptep));
>> +    index = sptep - sp->spt;
>> +
>> +    if (PTTYPE == 32) {
>> +        shift = PAGE_SHIFT - (PT_LEVEL_BITS -
>> +                    PT64_LEVEL_BITS) * sp->role.level;
>> +        offset = sp->role.quadrant << shift;
>> +    }
>> +
>> +    for (i = index + 1; i < min(PT64_ENT_PER_PAGE,
>> +                      index + PTE_PREFETCH_NUM); i++) {
>> +        struct page *page;
>> +        pt_element_t gpte;
>> +        unsigned pte_access;
>> +        u64 *spte = sp->spt + i;
>> +        gfn_t gfn;
>> +        pfn_t pfn;
>> +        int dirty;
>> +
>> +        if (*spte != shadow_trap_nonpresent_pte)
>> +            continue;
>> +
>> +        pte_access = sp->role.access;
>> +        if (sp->role.direct) {
>> +            dirty = 1;
>> +            gfn = sp->gfn + (i << ((sp->role.level - 1) *
>> +                          PT64_LEVEL_BITS));
>> +            goto gfn_mapping;
>> +        }
>>    
> 
> Should just call direct_pte_prefetch.
> 

OK, will fix it.
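
That is, something like this in FNAME(pte_prefetch) (sketch; assuming the
direct helper keeps the name from your comment):

    if (sp->role.direct) {
        direct_pte_prefetch(vcpu, sptep);
        return;
    }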

>> +
>> +        if (!table) {
>> +            page = gfn_to_page_atomic(vcpu->kvm, sp->gfn);
>> +            if (is_error_page(page)) {
>> +                kvm_release_page_clean(page);
>> +                break;
>> +            }
>> +            table = kmap_atomic(page, KM_USER0);
>> +            table = (pt_element_t *)((char *)table + offset);
>> +        }
>>    
> 
> Why not kvm_read_guest_atomic()?  Can do it outside the loop.

Do you mean reading all the prefetched gptes at one time?
If prefetching one pte fails, the later gptes we read are wasted, so I chose to
read the next gpte only after the current one has been prefetched successfully.

But I don't have a strong opinion on it, since reading them all at once is
fast; in the worst case we only need to read 16 * 8 = 128 bytes.
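
If you mean the one-shot read, it would look roughly like this before the loop
(untested sketch, reusing the addressing the patch already does through the
kmap):

    pt_element_t gptes[PTE_PREFETCH_NUM];
    int nr = min(PT64_ENT_PER_PAGE, index + PTE_PREFETCH_NUM) - (index + 1);
    gpa_t first = gfn_to_gpa(sp->gfn) + offset +
              (index + 1) * sizeof(pt_element_t);

    if (kvm_read_guest_atomic(vcpu->kvm, first, gptes,
                  nr * sizeof(pt_element_t)))
        return;

then the loop just walks gptes[] instead of kmapping the guest table page.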

> 
>> +
>> +        gpte = table[i];
>> +        if (!(gpte & PT_ACCESSED_MASK))
>> +            continue;
>> +
>> +        if (!is_present_gpte(gpte)) {
>> +            if (!sp->unsync)
>> +                *spte = shadow_notrap_nonpresent_pte;
>>    
> 
> Need __set_spte().

Oops, will fix it.

> 
>> +            continue;
>> +        }
>> +        dirty = is_dirty_gpte(gpte);
>> +        gfn = (gpte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
>> +        pte_access = pte_access & FNAME(gpte_access)(vcpu, gpte);
>> +gfn_mapping:
>> +        pfn = gfn_to_pfn_atomic(vcpu->kvm, gfn);
>> +        if (is_error_pfn(pfn)) {
>> +            kvm_release_pfn_clean(pfn);
>> +            break;
>> +        }
>> +
>> +        if (pte_prefetch_topup_memory_cache(vcpu))
>> +            break;
>> +        mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
>> +                 dirty, NULL, sp->role.level, gfn, pfn,
>> +                 true, false);
>> +    }
>> +    if (table)
>> +        kunmap_atomic((char *)table - offset, KM_USER0);
>> +}
>>    
> 
> I think lot of code can be shared with the pte prefetch in invlpg.
> 

Yes, please allow me to clean up that code after my upcoming patchset:

[PATCH v4 9/9] KVM MMU: optimize sync/update unsync-page

It's the last part of the 'allow multiple shadow pages' patchset.
