Message-ID: <20110713110246.GF7529@suse.de>
Date:	Wed, 13 Jul 2011 12:02:46 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] mm: page allocator: Initialise ZLC for first zone
 eligible for zone_reclaim

On Wed, Jul 13, 2011 at 10:15:15AM +0900, KOSAKI Motohiro wrote:
> (2011/07/11 22:01), Mel Gorman wrote:
> > The zonelist cache (ZLC) is used among other things to record if
> > zone_reclaim() failed for a particular zone recently. The intention
> > is to avoid a high cost scanning extremely long zonelists or scanning
> > within the zone uselessly.
> > 
> > Currently the zonelist cache is setup only after the first zone has
> > been considered and zone_reclaim() has been called. The objective was
> > to avoid a costly setup but zone_reclaim is itself quite expensive. If
> > it is failing regularly such as the first eligible zone having mostly
> > mapped pages, the cost in scanning and allocation stalls is far higher
> > than the ZLC initialisation step.
> > 
> > This patch initialises ZLC before the first eligible zone calls
> > zone_reclaim(). Once initialised, it is checked whether the zone
> > failed zone_reclaim recently. If it has, the zone is skipped. As the
> > first zone is now being checked, additional care has to be taken about
> > zones marked full. A zone can be marked "full" because it does not
> > have enough unmapped pages for zone_reclaim, but this is excessive as
> > direct reclaim or kswapd may succeed where zone_reclaim fails. Only
> > mark zones "full" after zone_reclaim fails if it failed to reclaim
> > enough pages after scanning.
> > 
> > Signed-off-by: Mel Gorman <mgorman@...e.de>
> 
> If I understand correctly, this patch's pros/cons are:
> 
> pros.
>  1) faster when zone reclaim doesn't work effectively
> 

Yes.

> cons.
>  2) slower when zone reclaim is off

How is it slower with zone_reclaim off?

Before

	if (zone_reclaim_mode == 0)
		goto this_zone_full;
	...
	this_zone_full:
	if (NUMA_BUILD)
		zlc_mark_zone_full(zonelist, z);
	if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
		...
	}

After

	if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
		...
	}
	if (zone_reclaim_mode == 0)
		goto this_zone_full;
	this_zone_full:
	if (NUMA_BUILD)
		zlc_mark_zone_full(zonelist, z);

Bear in mind that if the watermarks are met on the first zone, the zlc
setup does not occur.
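
To make that ordering concrete, below is a heavily simplified userspace
sketch of the post-patch flow. It is not the real mm/page_alloc.c code:
the zone loop, the watermark and reclaim helpers, and the persistent
bitmap standing in for the ZLC fullzones bitmap are all invented for
illustration only.

/*
 * Simplified sketch of the post-patch allocation ordering. Every helper
 * and data structure here is an illustrative stand-in, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_ZONES 4

static int zone_reclaim_mode = 1;	/* models the sysctl */
static unsigned long fullzones;		/* stand-in for zlc->fullzones */
static bool did_zlc_setup;

static bool watermark_ok(int zone) { return zone == NR_ZONES - 1; }
static bool zone_reclaim_enough(int zone) { (void)zone; return false; }

static int get_page_from_zonelist(void)
{
	for (int zone = 0; zone < NR_ZONES; zone++) {
		/* Post-patch: the ZLC is consulted for the first zone too */
		if (did_zlc_setup && (fullzones & (1UL << zone)))
			continue;

		if (watermark_ok(zone))
			return zone;	/* watermarks met: no ZLC setup at all */

		/* Post-patch: setup happens before the zone_reclaim decision */
		did_zlc_setup = true;

		if (zone_reclaim_mode == 0 || !zone_reclaim_enough(zone)) {
			fullzones |= 1UL << zone;	/* this_zone_full */
			continue;
		}

		return zone;
	}
	return -1;
}

int main(void)
{
	printf("allocated from zone index %d\n", get_page_from_zonelist());
	return 0;
}

On a second call to get_page_from_zonelist(), the recently-failed zones
are skipped up front, which is the behaviour the patch is after.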

>  3) slower when zone reclaim works effectively
> 

Marginally slower. It's now calling zlc_setup(), so roughly once a
second it zeroes a bitmap, and it calls zlc_zone_worth_trying() on the
first zone, which tests a single bit on a cache-hot structure.
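
For a feel of what that costs, here is a toy userspace sketch of the
two operations: a zlc_setup()-style once-a-second bitmap reset and a
zlc_zone_worth_trying()-style single bit test. The names, the array
sizing and the 1-second interval (in the spirit of ZLC_RESET_INTERVAL)
are illustrative only, not the kernel implementation.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_ZONES	64
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define RESET_INTERVAL	1	/* seconds */

static unsigned long fullzones[MAX_ZONES / BITS_PER_LONG];
static time_t last_full_zap;

static void zlc_setup_sketch(void)
{
	time_t now = time(NULL);

	/* Roughly once a second, forget which zones were marked "full" */
	if (now - last_full_zap >= RESET_INTERVAL) {
		memset(fullzones, 0, sizeof(fullzones));
		last_full_zap = now;
	}
}

static bool zone_worth_trying_sketch(int i)
{
	/* A single bit test on a small, likely cache-hot structure */
	return !(fullzones[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)));
}

int main(void)
{
	zlc_setup_sketch();
	printf("zone 0 worth trying: %d\n", zone_worth_trying_sketch(0));
	return 0;
}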

As the ineffective case can be triggered by a simple cp, I think the
cost is justified. Can you think of a better way of doing this?

> (2) and (3) happen more frequently than (1), correct?

Yes. I'd still expect zone_reclaim to be off on the majority of
machines, and even when it is enabled, I think it's relatively rare
that we hit the case where the workload regularly spills over to the
other node, except when the machine is a file server. Still, a cp is
common enough that the kernel should not slow to a crawl as a result.

> At least, I think we need to ensure there is zero impact when zone reclaim mode is off.
> 

I agree with this but I'm missing where we are taking the big hit with
zone_reclaim==0.

-- 
Mel Gorman
SUSE Labs
