Message-Id: <201504300227.JCJ81217.FHOLSQVOFFJtMO@I-love.SAKURA.ne.jp>
Date:	Thu, 30 Apr 2015 02:27:44 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	mhocko@...e.cz, hannes@...xchg.org
Cc:	akpm@...ux-foundation.org, aarcange@...hat.com,
	david@...morbit.com, rientjes@...gle.com, vbabka@...e.cz,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/9] mm: improve OOM mechanism v2

Michal Hocko wrote:
> On Wed 29-04-15 08:55:06, Johannes Weiner wrote:
> > What we can do to mitigate this is tie the timeout to the setting of
> > TIF_MEMDIE so that the wait is not 5s from the point of calling
> > out_of_memory() but from the point where TIF_MEMDIE was set.
> > Subsequent allocations will then go straight to the reserves.
> 
> That would deplete the reserves very easily. Shouldn't we rather
> go the other way around? Allow OOM killer context to dive into memory
> reserves some more (ALLOC_OOM on top of current ALLOC flags and
> __zone_watermark_ok would allow an additional 1/4 of the reserves) and
> start waiting for the victim after that reserve is depleted. We would
> still have some room for TIF_MEMDIE to allocate, the reserves consumption
> would be throttled somehow and the holders of resources would have some
> chance to release them and allow the victim to die.
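
For illustration, a rough user-space model of the timing Johannes describes
above: the wait is anchored at the moment TIF_MEMDIE is set on the victim
rather than at the current call into out_of_memory(), and once that window
has expired the caller falls back to the memory reserves instead of waiting
another full period. All names below are invented for this sketch and are
not taken from the patch set.

/* Sketch only: models "time the wait from when TIF_MEMDIE was set". */
#include <stdbool.h>
#include <time.h>

#define OOM_WAIT_SECONDS 5

struct victim {
	bool   tif_memdie;	/* models the TIF_MEMDIE thread flag */
	time_t memdie_set_at;	/* recorded once, when the flag is set */
};

/* Called by the OOM killer when it selects the victim. */
static void mark_oom_victim(struct victim *v)
{
	if (!v->tif_memdie) {
		v->tif_memdie = true;
		v->memdie_set_at = time(NULL);	/* the timeout starts here */
	}
}

/*
 * Called from the allocation slowpath: has the victim already had its
 * 5 seconds?  If so, waiting longer is pointless and the allocation
 * proceeds against the reserves.
 */
static bool oom_wait_expired(const struct victim *v)
{
	return v->tif_memdie &&
	       time(NULL) - v->memdie_set_at >= OOM_WAIT_SECONDS;
}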

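Similarly, a rough model of the ALLOC_OOM arithmetic: __zone_watermark_ok()
already lowers the watermark by half for ALLOC_HIGH and by a further quarter
for ALLOC_HARDER, and the proposal above would add one more step, roughly an
extra quarter of the reserves, for OOM killer context. ALLOC_OOM does not
exist in the current tree; the flag values and fractions below are only an
illustration of the idea, not the eventual implementation.

/* Sketch only: ALLOC_OOM is hypothetical, flag values are placeholders. */
#include <stdbool.h>

#define ALLOC_HARDER	0x10
#define ALLOC_HIGH	0x20
#define ALLOC_OOM	0x100	/* proposed: allocation from OOM killer context */

static bool zone_watermark_ok_model(unsigned long free_pages,
				    unsigned long mark, int alloc_flags)
{
	unsigned long min = mark;

	if (alloc_flags & ALLOC_HIGH)
		min -= min / 2;		/* existing: unlock half of the reserves */
	if (alloc_flags & ALLOC_HARDER)
		min -= min / 4;		/* existing: unlock a further quarter */
	if (alloc_flags & ALLOC_OOM)
		min -= mark / 4;	/* proposed: one more quarter for OOM context */

	return free_pages >= min;
}

Only when an allocation in OOM killer context fails even with ALLOC_OOM would
we start waiting for the victim, so TIF_MEMDIE still has some room of its own
and consumption of the reserves is throttled.
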
Does "OOM killer context" mean memory allocations which can call out_of_memory()?
If yes, there is no guarantee that such a memory reserve would be used by the
threads the OOM victim is waiting for, because they might be doing only
!__GFP_FS allocations. Likewise, there is a possibility that such a memory
reserve would be consumed by threads the OOM victim is not waiting for, because
malloc() + memset() causes __GFP_FS allocations.
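
The asymmetry here comes from the gate in front of the OOM killer: broadly,
only allocations that carry __GFP_FS are allowed to reach out_of_memory(), so
a reserve granted to "OOM killer context" would be unreachable for !__GFP_FS
allocators even if the victim is blocked waiting on them, while any task
faulting in anonymous memory (malloc() + memset() allocates with __GFP_FS set)
can pass the gate and drain it. A minimal sketch of that gate, with a made-up
helper name:

/* Sketch only: not kernel code, helper name invented for this mail. */
#include <stdbool.h>

#define __GFP_FS	0x80u	/* stand-in for the real gfp flag */

static bool may_enter_oom_killer_context(unsigned int gfp_mask)
{
	/* The OOM killer does not compensate for IO-less reclaim. */
	if (!(gfp_mask & __GFP_FS))
		return false;	/* e.g. a filesystem allocation holding locks the victim needs */

	/*
	 * Anonymous page faults (malloc() + memset()) allocate with
	 * GFP_HIGHUSER_MOVABLE, which includes __GFP_FS, so unrelated
	 * tasks pass this check and may consume the extra reserve.
	 */
	return true;
}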