Date:	Fri, 12 Apr 2013 10:34:20 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Ric Mason <ric.masonn@...il.com>
Cc:	Will Huck <will.huckk@...il.com>, Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Linux-MM <linux-mm@...ck.org>, Jiri Slaby <jslaby@...e.cz>,
	Valdis Kletnieks <Valdis.Kletnieks@...edu>,
	Zlatko Calusic <zcalusic@...sync.net>,
	dormando <dormando@...ia.net>,
	Satoru Moriya <satoru.moriya@....com>,
	Michal Hocko <mhocko@...e.cz>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 01/10] mm: vmscan: Limit the number of pages kswapd
 reclaims at each priority

On Fri, Apr 12, 2013 at 01:46:22PM +0800, Ric Mason wrote:
> Ping Rik, I also want to know the answer. ;-)

This question, like a *lot* of list traffic recently, is a "how long is a
piece of string" with hints that it is an important question but really
is just going to waste a developer's time because the question lacks any
relevant meaning. The Inter-Reference Distance (IRD) is mentioned as a
problem but no context is given as to why it is perceived as a problem.
IRD is the distance in time or events between two references to the same
page and is a function of the workload and an arbitrary page, not of the
page reclaim algorithm. A page reclaim algorithm may take IRD into account,
but IRD itself is not and cannot be a "problem", so the framing of the
question is already confusing.
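
For what it is worth, IRD is straightforward to measure from a page
reference trace. A minimal sketch in Python (the trace and helper name
below are made up for illustration, this is not kernel code):

    # Compute the inter-reference distances, in events, for each page
    # appearing in a reference trace such as [1, 2, 1, 3, 2, 1].
    def inter_reference_distances(trace):
        last_seen = {}    # page -> index of its previous reference
        distances = {}    # page -> list of observed IRDs
        for i, page in enumerate(trace):
            if page in last_seen:
                distances.setdefault(page, []).append(i - last_seen[page])
            last_seen[page] = i
        return distances

    # Page 1 is re-referenced after 2 events and again after 3 events.
    print(inter_reference_distances([1, 2, 1, 3, 2, 1]))
    # -> {1: [2, 3], 2: [3]}

Whether any such number matters depends entirely on the workload that
generated the trace, which is the point above.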

Furthermore, the upsides and downsides of any given page reclaim algorithm
are complex but in most cases are discussed in the academic papers
describing them. People who are interested need to research and read those
papers and then see how they might apply to the algorithm implemented in
Linux, or alternatively investigate what important workloads Linux treats
badly and address the problem. The result of such research (and patches)
is then a relevant discussion.

This question asks what the "downside" is versus anonymous pages.  To me
the question lacks any meaning because how can a page reclaim algorithm
have a "downside" against anonymous pages? As the question lacks meaning,
answering it is impossible, and it effectively asks a developer to write a
small paper to discover the meaning of the question before then answering it.

I do not speak for Rik but I at least am ignoring most of these questions
because there is not enough time in the day already. Pings are not
likely to change my opinion.

-- 
Mel Gorman
SUSE Labs
