Message-ID: <20141008153329.GF4592@dhcp22.suse.cz>
Date:	Wed, 8 Oct 2014 17:33:29 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Thelen <gthelen@...gle.com>,
	Vladimir Davydov <vdavydov@...allels.com>,
	Dave Hansen <dave@...1.net>, linux-mm@...ck.org,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 3/3] mm: memcontrol: fix transparent huge page
 allocations under pressure

[I do not have time to go over all the points here and will be offline
until Monday - I will get back to the rest then]

On Tue 07-10-14 21:11:06, Johannes Weiner wrote:
> On Tue, Oct 07, 2014 at 03:59:50PM +0200, Michal Hocko wrote:
[...]
> > I do not see any notes about potential excessive swapouts or longer
> > reclaim stalls, which are a natural side effect of direct reclaim with
> > a larger target (or is this something we do not agree on?).
> 
> Yes, we disagree here.  Why is reclaiming 2MB once worse than entering
> reclaim 16 times to reclaim SWAP_CLUSTER_MAX?

You can enter DEF_PRIORITY reclaim 16 times and still reclaim your
target, but you need at least 512<<DEF_PRIORITY pages on your LRUs to do
it in a single run at that priority. So small groups in particular will
pay more and would be subject to the problems mentioned above (e.g.
over-reclaim).
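
To put rough numbers on that (illustrative arithmetic only, not code from
the patch): get_scan_count() hands each LRU a scan window of roughly
lru_size >> priority, so covering a THP worth of pages (HPAGE_PMD_NR = 512
with 4K pages) in a single DEF_PRIORITY pass needs an LRU on the order of
512<<DEF_PRIORITY pages:

        /* Illustrative userspace arithmetic only -- it mirrors the
         * lru_size >> priority scan window, not the vmscan code itself. */
        #include <stdio.h>

        #define DEF_PRIORITY    12
        #define HPAGE_PMD_NR    512UL   /* pages in a 2MB THP with 4K pages */

        int main(void)
        {
                unsigned long lru_needed = HPAGE_PMD_NR << DEF_PRIORITY;

                /* 2^21 pages of 4K each, i.e. ~8GB worth of LRU pages */
                printf("pages needed for one DEF_PRIORITY pass: %lu (~%lu GB)\n",
                       lru_needed, lru_needed >> (30 - 12));
                return 0;
        }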

> There is no inherent difference in reclaiming a big chunk and
> reclaiming many small chunks that add up to the same size.
 
[...]

> > Another part that matters is the size. Memcgs might be really small and
> > that changes the math. A large reclaim target will fall back to low
> > priority reclaim and thus to excessive reclaim.
> 
> I already addressed page size vs. memcg size before.
> 
> However, low priority reclaim does not result in excessive reclaim.
> The reclaim goal is checked every time SWAP_CLUSTER_MAX pages have been
> scanned, and reclaim exits once the goal has been met.  See
> shrink_lruvec(), shrink_zone() etc.

Now I am confused. shrink_zone() will bail out, but shrink_lruvec() will
loop over nr[...] until the counts are exhausted, and it only adjusts the
numbers to be roughly proportional once:

                if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
                        continue;

                /*
                 * For kswapd and memcg, reclaim at least the number of pages
                 * requested. Ensure that the anon and file LRUs are scanned
                 * proportionally what was requested by get_scan_count(). We
                 * stop reclaiming one LRU and reduce the amount scanning
                 * proportional to the original scan target.
                 */
                [...]
                scan_adjusted = true;

Or do you rely on
                /*
                 * It's just vindictive to attack the larger once the smaller
                 * has gone to zero.  And given the way we stop scanning the
                 * smaller below, this makes sure that we only make one nudge
                 * towards proportionality once we've got nr_to_reclaim.
                 */
                if (!nr_file || !nr_anon)
                        break;

and SCAN_FILE because !inactive_file_is_low?
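
For reference, here is the loop structure I am referring to, condensed
(my paraphrase of shrink_lruvec(), not the verbatim source):

        while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
               nr[LRU_INACTIVE_FILE]) {
                /* [...] scan each evictable LRU in SWAP_CLUSTER_MAX sized
                 * batches via shrink_list(), decrementing nr[lru] */

                if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
                        continue;

                /* target met: stop once either the anon or the file side
                 * has already been exhausted ... */
                if (!nr_file || !nr_anon)
                        break;

                /* ... otherwise scale the remaining nr[] down so that only
                 * one more proportional nudge is made */
                scan_adjusted = true;
        }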

[...]
-- 
Michal Hocko
SUSE Labs
