Message-ID: <5194E3D7.2090808@redhat.com>
Date:	Thu, 16 May 2013 15:49:11 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Gleb Natapov <gleb@...hat.com>
CC:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
	avi.kivity@...il.com, mtosatti@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v5 3/8] KVM: MMU: fast invalidate all pages

On 16/05/2013 15:41, Gleb Natapov wrote:
> On Thu, May 16, 2013 at 03:14:35PM +0200, Paolo Bonzini wrote:
>> On 16/05/2013 14:43, Gleb Natapov wrote:
>>>>> +restart:
>>>>> +	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
>>>>> +		if (!is_obsolete_sp(kvm, sp))
>>>>> +			continue;
>>> What if we save kvm->arch.active_mmu_pages on the stack and init
>>> kvm->arch.active_mmu_pages to be empty at the entrance to
>>> zap_invalid_pages()? This loop would then iterate over the saved
>>> list, which would allow us to drop the is_obsolete_sp() check and
>>> save time, since we would not be iterating over newly created sps.
>>>
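For concreteness, a minimal sketch of that splice-onto-a-local-list idea,
written as the body of zap_invalid_pages() with mmu_lock held; the local
list names and the use of kvm_mmu_prepare_zap_page() and
kvm_mmu_commit_zap_page() here are illustrative assumptions, not the
posted patch:

        struct kvm_mmu_page *sp, *node;
        LIST_HEAD(snapshot);
        LIST_HEAD(invalid_list);

        /*
         * Detach the current page list; sps created from here on land on
         * the now-empty kvm->arch.active_mmu_pages and are never walked
         * below, so no is_obsolete_sp() check is needed.
         */
        list_splice_init(&kvm->arch.active_mmu_pages, &snapshot);

restart:
        list_for_each_entry_safe(sp, node, &snapshot, link)
                /* Zapping sp may unlink other sps as well; re-walk then. */
                if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
                        goto restart;

        kvm_mmu_commit_zap_page(kvm, &invalid_list);
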
>>
>> But once you add cond_resched_lock(), another thread may want to zap
>> pages itself (e.g. from prepare_zap_oldest_mmu_page) and it won't find them.
>>
> Yes, this will break mmu pages accounting. We can make
> prepare_zap_oldest_mmu_page() wait while zap_invalid_pages()
> frees the needed number of pages, if one is in progress.
> 
>> Here is another proposal...  The idea is to avoid looking at new pages
>> more than necessary after a "goto restart".
>>
>> Basically, you alternate between two phases:
>>
>> - look for pages to be zapped, group them together
>>
>> - zap the pages
>>
>> Something like:
>>
>>       moved = 0;
>> restart:
>>       zapping = true;
>>       for each page in active_mmu_pages [reverse and safe] {
>>              if (!is_obsolete || invalid) {
>>                  /*
>>                   * Found a new page, stop zapping for now and
>>                   * try to segregate the invalid ones at one end
>>                   * of the list.
>>                   */
>>                  zapping = false;
>>                  continue;
>>              }
>>
>>              if (batch > 10 && ...) {
>>                  cond_resched_lock
>>                  batch = 0;
>>                  goto restart;
>>              }
>>
>>              if (!zapping) {
>>                  /*
>>                   * Segregate pages to one end of the list where
>>                   * new pages don't get in the way.
>>                   */
>>                  list_move_tail(page, active_mmu_pages)
>>                  batch++; /* or maybe not? */
>>                  moved++;
>>              } else {
>>                  batch += prepare_zap_page
>>                  goto restart;
>>              }
>>      }
>>
>>      /* Need another pass to look at segregated pages?  */
>>      if (moved) {
>>          moved = 0;
>>          goto restart;
>>      }
> Not sure what you are trying to achieve with the "moved" tricks. Just
> walking the list from the end and stopping at the first valid sp should
> be enough, since the active_mmu_pages list is a FIFO right now.
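
For reference, a rough sketch of that reverse walk over the FIFO list,
again as zap_invalid_pages() with mmu_lock held; the batch threshold of
10 and committing the pending zaps before cond_resched_lock() are
illustrative assumptions:

        LIST_HEAD(invalid_list);
        struct kvm_mmu_page *sp, *node;
        int batch = 0, zapped;

restart:
        list_for_each_entry_safe_reverse(sp, node,
                                &kvm->arch.active_mmu_pages, link) {
                /*
                 * New sps are added at the head, so the first valid
                 * (non-obsolete) sp seen from the tail means everything
                 * beyond it is new as well: stop here.
                 */
                if (!is_obsolete_sp(kvm, sp))
                        break;

                if (batch >= 10) {
                        /* Flush what we have before the lock may be dropped. */
                        kvm_mmu_commit_zap_page(kvm, &invalid_list);
                        batch = 0;
                        if (cond_resched_lock(&kvm->mmu_lock))
                                goto restart;
                }

                zapped = kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
                batch += zapped;
                if (zapped)
                        /* Other sps may have been unlinked too; re-walk. */
                        goto restart;
        }

        kvm_mmu_commit_zap_page(kvm, &invalid_list);

Committing before the lock can be dropped is only one way to keep the
pending invalid_list consistent across the reschedule; the accounting
interaction with prepare_zap_oldest_mmu_page() raised above still applies.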

Right, I missed that "sp->role.invalid = 1" ensures anyway that pages
are visited at most twice.

Paolo
