Message-ID: <20110622165529.GY20843@redhat.com>
Date:	Wed, 22 Jun 2011 18:55:29 +0200
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Izik Eidus <izik.eidus@...ellosystems.com>,
	Avi Kivity <avi@...hat.com>, nai.xia@...il.com,
	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	Chris Wright <chrisw@...s-sol.org>,
	linux-mm <linux-mm@...ck.org>,
	Johannes Weiner <hannes@...xchg.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	kvm <kvm@...r.kernel.org>
Subject: Re: [PATCH] mmu_notifier, kvm: Introduce dirty bit tracking in spte
 and mmu notifier to help KSM dirty bit tracking

On Wed, Jun 22, 2011 at 11:39:40AM -0400, Rik van Riel wrote:
> On 06/22/2011 07:19 AM, Izik Eidus wrote:
> 
> > So what we are saying here is: it is better to have a little junk in the
> > unstable tree that gets flushed eventually anyway, than to make the guest
> > slower....
> > this race is not something that hurts the accuracy of ksm anyway, due to
> > the full memcmp that we will eventually perform...
> 
> With 2MB pages, I am not convinced they will get "flushed eventually",
> because there is a good chance at least one of the 4kB pages inside
> a 2MB page is in active use at all times.
> 
> I worry that the proposed changes may end up effectively preventing
> KSM from scanning inside 2MB pages, when even one 4kB page inside
> is in active use.  This could mean increased swapping on systems
> that run low on memory, which can be a much larger performance penalty
> than ksmd CPU use.
> 
> We need to scan inside 2MB pages when memory runs low, regardless
> of the accessed or dirty bits.

I guess we could fall back to the cksum when a THP is encountered
(repeating the test_and_clear_dirty wouldn't give the expected result
either if it's repeated on the same hugepmd for the next 4k virtual
address candidate for unstable tree insertion, so it would need
special handling during the virtual walk anyway).
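
Something along these lines, as a rough untested sketch against
cmp_and_merge_page() in mm/ksm.c -- ksm_test_and_clear_pte_dirty() is a
made-up name standing in for the pte/spte dirty bit test proposed in
this thread:

	/*
	 * Sketch only: pick the volatility check for this unstable
	 * tree candidate.
	 */
	if (PageTransCompound(page)) {
		/*
		 * THP: test_and_clear_dirty on the hugepmd would be
		 * cleared by the first 4k subpage we look at and then
		 * lie for the remaining 511 candidates, so fall back
		 * to the existing checksum check.
		 */
		checksum = calc_checksum(page);
		if (rmap_item->oldchecksum != checksum) {
			rmap_item->oldchecksum = checksum;
			return;
		}
	} else if (ksm_test_and_clear_pte_dirty(rmap_item->mm,
						rmap_item->address)) {
		/* recently written: treat as volatile, skip this round */
		return;
	}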

So it's getting a little hairy: skip on THP, skip on EPT... then I
wonder what common case would be left actually using it.

Or we could evaluate with statistics how many fewer pages get inserted
into the unstable tree when using the 2m dirty bit, but clearly it
would be less reliable: the algorithm is really meant to track the
volatility of what is later merged, not of a bigger chunk with
unrelated data in it.
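
For instance, something as simple as two debug counters next to the
existing KSM sysfs counters would already show how aggressively each
heuristic filters unstable tree candidates (illustrative only, the
names are made up and they'd still need adding to ksm_attrs[]):

	/* bumped wherever the scan decides a candidate is too volatile */
	static unsigned long ksm_skipped_dirty_bit;
	static unsigned long ksm_skipped_checksum;

	static ssize_t skipped_dirty_bit_show(struct kobject *kobj,
					      struct kobj_attribute *attr,
					      char *buf)
	{
		return sprintf(buf, "%lu\n", ksm_skipped_dirty_bit);
	}
	KSM_ATTR_RO(skipped_dirty_bit);

	static ssize_t skipped_checksum_show(struct kobject *kobj,
					     struct kobj_attribute *attr,
					     char *buf)
	{
		return sprintf(buf, "%lu\n", ksm_skipped_checksum);
	}
	KSM_ATTR_RO(skipped_checksum);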

On a side note, khugepaged should also be changed to preserve the dirty
bit if at least one of the ptes is dirty (currently the hugepmd is
always created dirty; it can never happen today for a hugepmd to be
clean, which is why the bit wasn't preserved in khugepaged so far).
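
Roughly, in collapse_huge_page(), carry a flag out of the existing pte
walk and only mkdirty the new pmd when it's set -- untested sketch,
"any_dirty" and the loop bounds are just illustrative:

	bool any_dirty = false;

	/* during the existing walk over the 512 ptes being collapsed */
	for (; address < end; pte++, address += PAGE_SIZE) {
		pte_t pteval = *pte;

		if (pte_dirty(pteval))
			any_dirty = true;
		/* ... existing isolate/copy work ... */
	}

	_pmd = mk_pmd(new_page, vma->vm_page_prot);
	if (any_dirty)
		_pmd = pmd_mkdirty(_pmd);
	_pmd = maybe_pmd_mkwrite(_pmd, vma);
	_pmd = pmd_mkhuge(_pmd);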
