Date:	Thu, 18 Aug 2016 11:48:45 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Vlastimil Babka <vbabka@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	David Rientjes <rientjes@...gle.com>,
	Rik van Riel <riel@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 06/11] mm, compaction: more reliably increase direct
 compaction priority

On Thu 18-08-16 11:44:00, Vlastimil Babka wrote:
> On 08/18/2016 11:10 AM, Michal Hocko wrote:
> > On Wed 10-08-16 11:12:21, Vlastimil Babka wrote:
> > > During the reclaim/compaction loop, compaction priority can be increased by the
> > > should_compact_retry() function, but the current code is not optimal. Priority
> > > is only increased when compaction_failed() is true, which means that compaction
> > > has scanned the whole zone. This may not happen even after multiple attempts
> > > with a lower priority due to parallel activity, so we might needlessly
> > > struggle on the lower priorities and possibly run out of compaction retry
> > > attempts in the process.
> > > 
> > > After this patch we are guaranteed at least one attempt at the highest
> > > compaction priority even if we exhaust all retries at the lower priorities.
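
(To make the described behaviour concrete: a minimal user-space sketch, not
the kernel code -- the priority names mirror the kernel's enum
compact_priority, where a lower value means a higher priority, while the
loop, the compaction_failed() stub and the small retry budget are
illustrative stand-ins.)

#include <stdbool.h>
#include <stdio.h>

enum compact_priority {
	COMPACT_PRIO_SYNC_FULL,		/* highest priority */
	COMPACT_PRIO_SYNC_LIGHT,
	COMPACT_PRIO_ASYNC,		/* lowest priority, used first */
};
#define MIN_COMPACT_PRIORITY	COMPACT_PRIO_SYNC_FULL
#define INIT_COMPACT_PRIORITY	COMPACT_PRIO_ASYNC
#define MAX_COMPACT_RETRIES	3	/* kept small for the demo */

/* Stub: parallel activity keeps compaction from scanning the whole zone. */
static bool compaction_failed(void)
{
	return false;
}

int main(void)
{
	enum compact_priority prio = INIT_COMPACT_PRIORITY;
	int retries = 0;

	for (;;) {
		printf("compaction attempt: priority=%d retries=%d\n",
		       prio, retries);

		/*
		 * Old behaviour: retry at the same priority and bump it
		 * only when compaction_failed() is true -- which may never
		 * happen, so the retry budget could run out on a low
		 * priority.
		 */
		if (!compaction_failed() && ++retries <= MAX_COMPACT_RETRIES)
			continue;

		/*
		 * New behaviour: once the budget at this priority is
		 * spent, raise the priority anyway, guaranteeing at least
		 * one attempt at the highest one.
		 */
		if (prio > MIN_COMPACT_PRIORITY) {
			prio--;		/* lower value == higher priority */
			retries = 0;
			continue;
		}
		break;			/* highest priority exhausted too */
	}
	return 0;
}

(Running it shows the budget being spent at priorities 2 and 1 before the
guaranteed attempts at priority 0.)
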
> > 
> > I expect we will tend to do some special handling at the highest
> > priority, so guaranteeing at least one run with that prio seems
> > sensible to me. The only question is whether we really want to
> > enforce the highest priority for costly orders as well. I think we
> > want to reserve the highest (maybe add one more) prio for !costly
> > orders, as those invoke the OOM killer and their failures are quite
> > disruptive.
> 
> Costly orders are already ruled out of reaching the highest priority
> unless they have __GFP_REPEAT, so I assumed that allocations with
> __GFP_REPEAT really would like to succeed and let them use the highest
> priority.
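
(A standalone sketch of that gating -- PAGE_ALLOC_COSTLY_ORDER and
__GFP_REPEAT are the kernel's names, but the bit value and the
may_reach_max_priority() helper are hypothetical, for illustration only.)

#include <assert.h>
#include <stdbool.h>

#define PAGE_ALLOC_COSTLY_ORDER	3
#define __GFP_REPEAT		(1u << 0)	/* illustrative bit value */

static bool may_reach_max_priority(unsigned int order, unsigned int gfp_mask)
{
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;			/* !costly: always eligible */
	return gfp_mask & __GFP_REPEAT;		/* costly: opt-in only */
}

int main(void)
{
	assert(may_reach_max_priority(2, 0));		  /* order-2: !costly */
	assert(!may_reach_max_priority(4, 0));		  /* costly, no opt-in */
	assert(may_reach_max_priority(4, __GFP_REPEAT));  /* costly + repeat */
	return 0;
}
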

But even when __GFP_REPEAT is set we do not want to be too aggressive.
E.g. hugetlb page allocations are better off failing than causing
excessive reclaim or long-term fragmentation issues, which might be a
result of the skipped heuristics. Costly orders are IMHO simply second
class citizens even when they ask to try harder with __GFP_REPEAT.
-- 
Michal Hocko
SUSE Labs
