Date:	Mon, 22 Jan 2007 23:34:27 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
CC:	balbir@...ibm.com, Andrea Arcangeli <andrea@...e.de>,
	Niki Hammler <mailinglists@...aq.net>,
	linux-kernel@...r.kernel.org,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
Subject: Re: Why active list and inactive list?

Nick Piggin wrote:

> The other nice thing about it was that it didn't have a hard
> cutoff that the current reclaim_mapped toggle does -- you could
> opt to scan the mapped list at a lower ratio than the unmapped
> one. Of course, it also has some downsides too, and would
> require retuning...

Here's a simple idea for tuning.

For each list we keep track of:
1) the size of the list
2) the rate at which we scan the list
3) the fraction of (non-new) pages that get referenced

That way we can determine which list has the largest
fraction of "idle" pages sitting around and consequently
which list should be scanned more aggressively.

For each list we can calculate how frequently the pages
in the list are being used:

pressure = referenced percentage * scan rate / list size

The VM can equalize the pressure by scanning the list with
lower usage more heavily than the other list.  This way the
VM can give the right amount of memory to each type of page.
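As a rough illustration of the formula above, here is a sketch in C.
The struct and function names are hypothetical, chosen just for this
example; the real kernel statistics are kept differently.  Integer
arithmetic is used, as floating point is unavailable in the kernel:

```c
/* Hypothetical per-list statistics, sampled over some interval. */
struct list_stats {
	unsigned long size;		/* pages currently on the list */
	unsigned long scanned;		/* pages scanned this interval */
	unsigned long referenced;	/* scanned pages found referenced */
};

/*
 * pressure = referenced percentage * scan rate / list size
 *
 * A low result means the list is full of idle pages and can be
 * scanned more aggressively; a high result means its pages are
 * in active use.
 */
static unsigned long list_pressure(const struct list_stats *s)
{
	unsigned long ref_pct;

	if (!s->size || !s->scanned)
		return 0;
	ref_pct = s->referenced * 100 / s->scanned;	/* referenced % */
	return ref_pct * s->scanned / s->size;		/* * rate / size */
}
```

With equal scan rates and list sizes, the list whose pages are
referenced more often reports higher pressure, so the VM would shift
scanning toward the other list to bring the two values together.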

Of course, each list needs to be divided into inactive and
active like the current VM, in order to make sure that the
pages which are used once cannot push the real working set
of that list out of memory.

There is a more subtle problem when the list's working set
is larger than the amount of memory the list has.  In that
situation the VM will be faulting pages back in just after
they got evicted.  Something like my /proc/refaults code
can detect that and adjust the size of the undersized list
accordingly.
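The refault-driven adjustment could look something like the sketch
below.  This is only a toy model of the idea, not the actual
/proc/refaults code; the struct, field, and threshold values are all
made up for illustration:

```c
/* Hypothetical per-list working-set statistics. */
struct list_target {
	unsigned long pages;		/* memory share given to the list */
	unsigned long evictions;	/* pages evicted recently */
	unsigned long refaults;		/* evicted pages faulted right back */
};

/*
 * If a large fraction of recently evicted pages are faulted back in,
 * the list's working set is larger than its share of memory, so grow
 * the share.  The 50% threshold is arbitrary and would need tuning.
 */
static void adjust_target(struct list_target *t, unsigned long step)
{
	if (t->evictions && t->refaults * 100 / t->evictions > 50)
		t->pages += step;
}
```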

Of course, once we properly distinguish between the more
frequently and less frequently accessed pages within each
of the page sets (mapped/anonymous vs. unmapped) and have
the pressure between the lists equalized, why do we need
to keep them separate again?

-- 
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.