Message-ID: <CAAmzW4M7ZT7+vUsW3SrTRSv6Q80B2NdAS+OX7PrnpdrV+=R19A@mail.gmail.com>
Date:	Wed, 4 May 2016 23:32:31 +0900
From:	Joonsoo Kim <js1304@...il.com>
To:	Michal Hocko <mhocko@...nel.org>
Cc:	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Mel Gorman <mgorman@...e.de>,
	David Rientjes <rientjes@...gle.com>,
	Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
	Hillf Danton <hillf.zj@...baba-inc.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Linux Memory Management List <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/14] oom detection rework v6

2016-05-04 17:47 GMT+09:00 Michal Hocko <mhocko@...nel.org>:
> On Wed 04-05-16 14:45:02, Joonsoo Kim wrote:
>> On Wed, Apr 20, 2016 at 03:47:13PM -0400, Michal Hocko wrote:
>> > Hi,
>> >
>> > This is v6 of the series. The previous version was posted [1]. The
>> > code hasn't changed much since then. I have found one long-standing
>> > bug (patch 1) which just got much more severe and visible with this
>> > series. Other than that I have reorganized the series and put the
>> > compaction feedback abstraction to the front just in case we find out
>> > that parts of the series would have to be reverted later on for some
>> > reason. The premature oom killer invocation reported by Hugh [2] seems
>> > to be addressed.
>> >
>> > We have discussed this series at the LSF/MM summit in Raleigh and there
>> > didn't seem to be any concerns/objections to go on with the patch set
>> > and target it for the next merge window.
>>
>> I still don't agree with the parts of this patchset that deal with
>> !costly orders. As you know, there were two regression reports, from
>> Hugh and from Aaron, and you fixed them by ensuring that compaction
>> is triggered. I think these show the problem with this patchset.
>> Previous kernels didn't need to ensure that compaction is triggered
>> and just worked fine in any case. Your series makes compaction
>> necessary for everyone. OOM handling is an essential part of MM, but
>> compaction isn't. OOM handling should not depend on compaction. I
>> tested my own benchmark without CONFIG_COMPACTION and found that
>> premature OOM happens.
>
> High order allocations without compaction are basically a lost game. You

I don't think that order-1 or order-2 allocations are in big trouble
without compaction. They can be satisfied by the buddy algorithm, which
keeps high-order free pages intact as long as possible.
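
To make the point concrete, here is a toy userspace model of the
splitting behavior (a sketch of the idea only, not the actual
mm/page_alloc.c code): freelists are reduced to per-order counters, and
an order-n request takes the smallest free block of order >= n and
splits it, returning one buddy half to each lower-order freelist. So an
order-1 or order-2 request succeeds whenever any block of sufficient
order is still free, without compaction:

#include <stdio.h>

#define MAX_ORDER	11

static unsigned long free_count[MAX_ORDER];	/* free blocks per order */

static int alloc_order(unsigned int order)
{
	unsigned int cur;

	for (cur = order; cur < MAX_ORDER; cur++) {
		if (!free_count[cur])
			continue;
		free_count[cur]--;		/* take the whole block */
		while (cur > order) {		/* split it back down */
			cur--;
			free_count[cur]++;	/* free the unused buddy half */
		}
		return 0;
	}
	return -1;	/* nothing big enough: this is where compaction helps */
}

int main(void)
{
	free_count[4] = 1;	/* a single free order-4 block (16 pages) */

	/* served by splitting the order-4 block down to order 2 */
	printf("first order-2 request:  %s\n", alloc_order(2) ? "fail" : "ok");
	/* the split left an order-2 and an order-3 buddy on the freelists */
	printf("second order-2 request: %s\n", alloc_order(2) ? "fail" : "ok");
	return 0;
}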

> can wait an unbounded amount of time and still have no guarantee of any

I know that there is no guarantee. But that doesn't mean it's better to
give up early. Since OOM can cause serious problems, if there is
reclaimable memory, we need to reclaim all of it at least once, hoping
for a high-order page, before triggering OOM. Optimizing this situation
by incomplete guessing is a dangerous idea.

> progress. What is the usual reason to disable compaction in the first
> place?

I don't disable it. But who knows who disables compaction? It has *not*
been a long time since CONFIG_COMPACTION was enabled by default. Maybe
3 years?

> Anyway if this is _really_ a big issue then we can do something like the
> following to emulate the previous behavior. We are losing the
> determinism but if you really think that the !COMPACTION workloads
> are already reconciled to it, I can live with that.
> ---
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2e7e26c5d3ba..f48b9e9b1869 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3319,6 +3319,24 @@ should_compact_retry(struct alloc_context *ac, unsigned int order, int alloc_flags,
>                      enum migrate_mode *migrate_mode,
>                      int compaction_retries)
>  {
> +       struct zone *zone;
> +       struct zoneref *z;
> +
> +       if (order > PAGE_ALLOC_COSTLY_ORDER)
> +               return false;
> +
> +       /*
> +        * There are setups with compaction disabled which would prefer to loop
> +        * inside the allocator rather than hit the oom killer prematurely. Let's
> +        * give them some hope and keep retrying while the order-0 watermarks
> +        * are OK.
> +        */
> +       for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
> +                                       ac->nodemask) {
> +               if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
> +                                       ac->high_zoneidx, alloc_flags))
> +                       return true;
> +       }
>         return false;

I hope that this kind of logic is added to should_reclaim_retry() so
that it applies in any setup. should_compact_retry() should not become
a fundamental criterion for determining OOM. What compaction does can
change in the future, and it's undesirable for such a change to greatly
affect the OOM condition.
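
Something along these lines, say (only a sketch with a hypothetical
helper name, reusing the watermark check from your diff; the exact
placement inside should_reclaim_retry() would need care):

/*
 * Hypothetical sketch, not a tested patch: the same order-0 watermark
 * check, factored out so that should_reclaim_retry() can call it,
 * making the retry behavior independent of CONFIG_COMPACTION.
 */
static inline bool want_high_order_retry(struct alloc_context *ac,
					 unsigned int order, int alloc_flags)
{
	struct zone *zone;
	struct zoneref *z;

	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return false;

	/* keep retrying while the order-0 watermarks are still OK */
	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->high_zoneidx, ac->nodemask) {
		if (zone_watermark_ok(zone, 0, min_wmark_pages(zone),
				      ac->high_zoneidx, alloc_flags))
			return true;
	}
	return false;
}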

Thanks.
