Date:	Thu, 20 Dec 2012 14:02:32 +0900
From:	Takuya Yoshikawa <>
To:	Alex Williamson <>
Cc:	Takuya Yoshikawa <>,,,,
Subject: Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start
 dirty logging

On Wed, 19 Dec 2012 08:42:57 -0700
Alex Williamson <> wrote:

> Please let me know if you can identify one of these as the culprit.
> They're all very simple, but there's always a chance I've missed a hard
> coding of slot numbers somewhere.  Thanks,

I identified the culprit:
  commit b7f69c555ca430129b6cde81e9f0927531420c5c
  KVM: Minor memory slot optimization

IIUC, the problem is that the patch did not take into account the
generation of the slots, which is updated by update_memslots():

  Your patch reused the old memory slots array that was there before
  the first update, the one that invalidates the slot, and, badly, we
  flush shadow pages after that, before the second update that finally
  installs the new slot.  As a result, the generation did not change
  from that of the invalidated slots, even though the ghc (gfn-to-hva
  cache) might be stale.
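
  To illustrate, a heavily simplified sketch of the delete/move path in
  __kvm_set_memory_region() around that time, from memory and not the
  exact kernel code, showing where the reverted optimization reused the
  old array:

slots = kmemdup(kvm->memslots, sizeof(*slots), GFP_KERNEL);
slot = id_to_memslot(slots, mem->slot);
slot->flags |= KVM_MEMSLOT_INVALID;

/* First update: publish the invalidated slot; generation is bumped. */
old_memslots = install_new_memslots(kvm, slots, NULL);

/* Shadow pages are flushed here, after the first update. */
kvm_arch_flush_shadow_memslot(kvm, slot);

/* The reverted optimization reused old_memslots for the second update
 * instead of kmemdup()ing kvm->memslots again.  old_memslots still
 * carries the pre-invalidation generation, so after the bump in
 * update_memslots() the final generation does not move past that of
 * the invalidated array. */
slots = old_memslots;

/* Second update: install the new slot. */
old_memslots = install_new_memslots(kvm, slots, &new);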

  After that, kvm_write_guest_cached() decided whether the ghc needed
  to be re-initialized by comparing the ghc's generation with that
  stale one, with the result that mark_page_dirty_in_slot() was called
  with invalid cache contents.
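
  For reference, the generation check in kvm_write_guest_cached() looks
  roughly like this (simplified from the code of that time, not
  verbatim):

int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
			   void *data, unsigned long len)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	int r;

	/* The cache is only re-initialized if the slots' generation has
	 * moved on.  If the generation wrongly looks unchanged, the stale
	 * memslot cached in ghc keeps being used below. */
	if (slots->generation != ghc->generation)
		kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len);

	if (kvm_is_error_hva(ghc->hva))
		return -EFAULT;

	r = __copy_to_user((void __user *)ghc->hva, data, len);
	if (r)
		return -EFAULT;

	/* With a stale ghc this marks the wrong (invalidated) slot dirty. */
	mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);

	return 0;
}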

Although we could fix up the generation by itself, I do not think such
a trick is worth it because this is not a hot path.  Let's just revert
the patch.
