Message-ID: <53160932.6060200@sr71.net>
Date: Tue, 04 Mar 2014 09:11:14 -0800
From: Dave Hansen <dave@...1.net>
To: Pradeep Sawlani <pradeep.sawlani@...il.com>,
Hugh Dickins <hughd@...gle.com>,
Izik Eidus <izik.eidus@...ellosystems.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Chris Wright <chrisw@...s-sol.org>
CC: LKML <linux-kernel@...r.kernel.org>,
MEMORY MANAGEMENT <linux-mm@...ck.org>,
Arjan van de Ven <arjan@...ux.intel.com>,
Suri Maddhula <surim@...zon.com>, Matt Wilson <msw@...zon.com>,
Anthony Liguori <aliguori@...zon.com>,
Pradeep Sawlani <sawlani@...zon.com>
Subject: Re: [PATCH RFC 0/1] ksm: check and skip page, if it is already scanned

On 03/03/2014 06:48 PM, Pradeep Sawlani wrote:
> The patch uses two bits to detect whether a page has already been scanned:
> one bit for the odd cycle and the other for the even cycle. This adds one
> more bit to the page flags and overloads an existing bit (PG_owner_priv_1).
> The changes are based off the 3.4.79 kernel, since that is what I used for
> verification.
> A detailed discussion can be found at https://lkml.org/lkml/2014/2/13/624
> Suggestions are welcome for an alternative solution that avoids taking one
> more bit in the page flags.

Allocate a big bitmap (sized according to how many pages you are scanning).
Hash the page's pfn and index into the bitmap.  If the bit is already set,
don't scan the page; if it is clear, set it and scan.  Vary the hash for
each scanning pass so the same collisions don't keep repeating, and clear
the bitmap before each scan.

You'll get plenty of collisions, especially for a small table, but who
cares?
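
Roughly along these lines -- purely an illustration, not code from any
tree; the names, the bitmap size, and the hash are all made up:

	#include <linux/mm.h>
	#include <linux/hash.h>
	#include <linux/bitmap.h>

	/* ~128KB of bitmap (1M bits); size it to the scan set.  The
	 * bitmap itself would be vzalloc'd once at KSM init. */
	#define KSM_SEEN_BITS	(1UL << 20)

	static unsigned long *ksm_seen_bitmap;
	static unsigned long ksm_scan_seed;	/* varied every full pass */

	/* Returns true if this page's slot was already set, i.e. skip it. */
	static bool ksm_page_seen(struct page *page)
	{
		unsigned long key = page_to_pfn(page) ^ page->index ^
				    ksm_scan_seed;
		unsigned long bit = hash_long(key, ilog2(KSM_SEEN_BITS));

		return test_and_set_bit(bit, ksm_seen_bitmap);
	}

	/* Call at the start of each pass: a new seed so the same pages
	 * don't keep colliding, and a clean bitmap. */
	static void ksm_seen_new_pass(void)
	{
		ksm_scan_seed++;
		bitmap_zero(ksm_seen_bitmap, KSM_SEEN_BITS);
	}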

The other option is to bloat anon_vma instead, and only do one scan for
each anon_vma that shares the same root.  That's a bit more invasive though.
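
A very rough sketch of that direction, assuming a new ksm_scanned_pass
field were added to struct anon_vma (no such field exists today) and
ignoring anon_vma locking:

	#include <linux/mm.h>
	#include <linux/rmap.h>

	/* Skip a VMA whose anon_vma root was already covered this pass. */
	static bool ksm_vma_already_covered(struct vm_area_struct *vma,
					    unsigned long cur_pass)
	{
		struct anon_vma *root;

		if (!vma->anon_vma)
			return false;		/* nothing shared; scan normally */
		root = vma->anon_vma->root;
		if (root->ksm_scanned_pass == cur_pass)
			return true;		/* a sibling already walked this family */
		root->ksm_scanned_pass = cur_pass;	/* claim it for this pass */
		return false;
	}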