Date:	Mon, 24 Mar 2014 13:33:39 -0400
From:	Rik van Riel <riel@...hat.com>
To:	John Stultz <john.stultz@...aro.org>,
	LKML <linux-kernel@...r.kernel.org>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Android Kernel Team <kernel-team@...roid.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Robert Love <rlove@...gle.com>, Mel Gorman <mel@....ul.ie>,
	Hugh Dickins <hughd@...gle.com>, Dave Hansen <dave@...1.net>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	Neil Brown <neilb@...e.de>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mike Hommey <mh@...ndium.org>, Taras Glek <tglek@...illa.com>,
	Jan Kara <jack@...e.cz>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Michel Lespinasse <walken@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH 5/5] vmscan: Age anonymous memory even when swap is off.

On 03/21/2014 05:17 PM, John Stultz wrote:
> Currently we don't shrink/scan the anonymous lrus when swap is off.
> This is problematic for volatile range purging on swapless systems.
>
> This patch naively changes the vmscan code to continue scanning
> and shrinking the lrus even when there is no swap.
>
> It obviously has performance issues.
>
> Thoughts on how best to implement this would be appreciated.
>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Android Kernel Team <kernel-team@...roid.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Robert Love <rlove@...gle.com>
> Cc: Mel Gorman <mel@....ul.ie>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Dave Hansen <dave@...1.net>
> Cc: Rik van Riel <riel@...hat.com>
> Cc: Dmitry Adamushko <dmitry.adamushko@...il.com>
> Cc: Neil Brown <neilb@...e.de>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Cc: Mike Hommey <mh@...ndium.org>
> Cc: Taras Glek <tglek@...illa.com>
> Cc: Jan Kara <jack@...e.cz>
> Cc: KOSAKI Motohiro <kosaki.motohiro@...il.com>
> Cc: Michel Lespinasse <walken@...gle.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: linux-mm@...ck.org <linux-mm@...ck.org>
> Signed-off-by: John Stultz <john.stultz@...aro.org>
> ---
>   mm/vmscan.c | 26 ++++----------------------
>   1 file changed, 4 insertions(+), 22 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 34f159a..07b0a8c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -155,9 +155,8 @@ static unsigned long zone_reclaimable_pages(struct zone *zone)
>   	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
>   	     zone_page_state(zone, NR_INACTIVE_FILE);
>
> -	if (get_nr_swap_pages() > 0)
> -		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
> -		      zone_page_state(zone, NR_INACTIVE_ANON);
> +	nr += zone_page_state(zone, NR_ACTIVE_ANON) +
> +	      zone_page_state(zone, NR_INACTIVE_ANON);
>
>   	return nr;

Not all of the anonymous pages will be reclaimable.

Is there some counter that keeps track of how many
volatile range pages there are in each zone?
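
Something along these lines is what I have in mind -- NR_VOLATILE
here is purely hypothetical (nothing like it exists in the tree
today), but it would let us count only the anon pages we can
actually drop without swap:

	static unsigned long zone_reclaimable_pages(struct zone *zone)
	{
		unsigned long nr;

		nr = zone_page_state(zone, NR_ACTIVE_FILE) +
		     zone_page_state(zone, NR_INACTIVE_FILE);

		if (get_nr_swap_pages() > 0)
			nr += zone_page_state(zone, NR_ACTIVE_ANON) +
			      zone_page_state(zone, NR_INACTIVE_ANON);
		else
			/* without swap, only volatile anon pages go away */
			nr += zone_page_state(zone, NR_VOLATILE);

		return nr;
	}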


> @@ -1764,13 +1763,6 @@ static int inactive_anon_is_low_global(struct zone *zone)
>    */
>   static int inactive_anon_is_low(struct lruvec *lruvec)
>   {
> -	/*
> -	 * If we don't have swap space, anonymous page deactivation
> -	 * is pointless.
> -	 */
> -	if (!total_swap_pages)
> -		return 0;
> -
>   	if (!mem_cgroup_disabled())
>   		return mem_cgroup_inactive_anon_is_low(lruvec);

This part is correct, and needed.

> @@ -1880,12 +1872,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>   	if (!global_reclaim(sc))
>   		force_scan = true;
>
> -	/* If we have no swap space, do not bother scanning anon pages. */
> -	if (!sc->may_swap || (get_nr_swap_pages() <= 0)) {
> -		scan_balance = SCAN_FILE;
> -		goto out;
> -	}
> -
>   	/*

This part is too.

> @@ -2181,8 +2166,8 @@ static inline bool should_continue_reclaim(struct zone *zone,
>   	 */
>   	pages_for_compaction = (2UL << sc->order);
>   	inactive_lru_pages = zone_page_state(zone, NR_INACTIVE_FILE);
> -	if (get_nr_swap_pages() > 0)
> -		inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
> +	inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
> +
>   	if (sc->nr_reclaimed < pages_for_compaction &&
>   			inactive_lru_pages > pages_for_compaction)

Not sure this is a good idea, since the pages may not actually
be reclaimable, and the inactive list will continue to be
refilled indefinitely...

If there was a counter of the number of volatile range pages
in a zone, this would be easier.

Of course, the overhead of keeping such a counter might be
too high for what volatile ranges are designed for...
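
To make that cost concrete: the bookkeeping would have to touch
every resident page at mark/unmark/purge time, something like
(hypothetical NR_VOLATILE again, assuming the volatile range code
can walk the pages it covers):

	/* page becomes part of a volatile range */
	inc_zone_page_state(page, NR_VOLATILE);

	/* range unmarked, or page purged */
	dec_zone_page_state(page, NR_VOLATILE);

That is O(pages in the range) work on what is supposed to be a
cheap madvise-style operation.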

>   		return true;
> @@ -2726,9 +2711,6 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc)
>   {
>   	struct mem_cgroup *memcg;
>
> -	if (!total_swap_pages)
> -		return;
> -

This bit is correct and needed.

