Message-ID: <20170726114638.GL2981@dhcp22.suse.cz>
Date:   Wed, 26 Jul 2017 13:46:39 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc:     linux-mm@...ck.org, hannes@...xchg.org, rientjes@...gle.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] oom_reaper: close race without using oom_lock

On Wed 26-07-17 20:33:21, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Sun 23-07-17 09:41:50, Tetsuo Handa wrote:
> > > So, how can we verify that the above race is a real problem?
> > 
> > Try to simulate a _real_ workload and see whether we kill more tasks
> > than necessary. 
> 
> Whether the workload is _real_ or not is not an answer to that.
> 
> If somebody is trying to allocate hundreds or thousands of pages after an
> OOM victim's memory has been reaped, avoiding this race window makes no
> sense; the next OOM victim will be selected anyway. But if somebody is
> trying to allocate only one page and is then planning to release a lot of
> memory, avoiding this race window can save that somebody from being
> OOM-killed needlessly. Whether this race window matters depends on what
> the threads are about to do, not on whether the workload is natural or
> artificial.

And with a desperate lack of a crystal ball we cannot do much about that,
really.
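
To make the scenario above concrete, here is a minimal sketch of the kind of
caller Tetsuo describes. Everything in it is made up for illustration
(example_flush_buffer() and its huge_buffer are hypothetical); only
__get_free_page()/free_page()/vfree() and GFP_KERNEL are real kernel
interfaces.

#include <linux/gfp.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical caller: it needs a single page of scratch space before
 * it releases a large buffer it already owns.  If the one GFP_KERNEL
 * allocation below hits the OOM path right after MMF_OOM_SKIP was set
 * on the previous victim, another victim can be selected even though
 * this task was about to free far more memory than it asked for.
 */
static int example_flush_buffer(void *huge_buffer)
{
	unsigned long scratch = __get_free_page(GFP_KERNEL);

	if (!scratch)
		return -ENOMEM;

	/* ... write out metadata using the scratch page ... */

	vfree(huge_buffer);	/* releases a lot of memory */
	free_page(scratch);
	return 0;
}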

> My question is, how can users tell whether somebody was OOM-killed
> needlessly because MMF_OOM_SKIP was allowed to race?

Is it really important to know that the race is due to MMF_OOM_SKIP?
Isn't it sufficient to see that we kill too many tasks and then debug it
further once somebody actually hits that?

[...]
> Is it guaranteed that __node_reclaim() never (even indirectly) waits for
> a __GFP_DIRECT_RECLAIM && !__GFP_NORETRY memory allocation?

This is direct reclaim, which can go down into the slab shrinkers with all
the usual fun...
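
To make the "usual fun" a bit more concrete, a sketch (hypothetical demo_*
names, not an existing shrinker) of how a slab shrinker can make direct
reclaim wait, indirectly, on somebody else's __GFP_DIRECT_RECLAIM &&
!__GFP_NORETRY allocation simply by sharing a lock with an allocating path:

#include <linux/shrinker.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(demo_lock);

static unsigned long demo_count(struct shrinker *s, struct shrink_control *sc)
{
	return 128;	/* pretend there is something reclaimable */
}

static unsigned long demo_scan(struct shrinker *s, struct shrink_control *sc)
{
	/*
	 * Called from the direct reclaim path.  If another task holds
	 * demo_lock while it sits in the GFP_KERNEL allocation below,
	 * the reclaimer now waits, indirectly, on that allocation.
	 */
	mutex_lock(&demo_lock);
	/* ... drop some cached objects ... */
	mutex_unlock(&demo_lock);
	return 0;	/* nothing actually freed in this sketch */
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count,
	.scan_objects	= demo_scan,
	.seeks		= DEFAULT_SEEKS,
};
/* register_shrinker(&demo_shrinker) would be called from module init */

static void demo_insert_object(void)
{
	mutex_lock(&demo_lock);
	/* GFP_KERNEL implies __GFP_DIRECT_RECLAIM and no __GFP_NORETRY */
	kfree(kmalloc(128, GFP_KERNEL));
	mutex_unlock(&demo_lock);
}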

> > Such races are unfortunate but unavoidable unless we synchronize the oom
> > kill with any memory freeing, which smells like a no-go to me. We can try
> > a last allocation attempt right before we go and kill something (which
> > still wouldn't be race-free), but that might cause other issues - e.g.
> > prolonged thrashing without ever killing something - but I haven't
> > evaluated those, to be honest.
> 
> Yes, postponing the last get_page_from_freelist() attempt until
> oom_kill_process() would be the best we could afford.

As I've said, this would have to be evaluated very carefully and a strong
use case would have to be shown.
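
For the record, a very rough sketch of what such a last allocation attempt
could look like. oom_final_alloc_attempt() and oom_kill_or_bail() are
hypothetical helpers, not the posted patch; the former would redo the
ALLOC_WMARK_HIGH get_page_from_freelist() check that __alloc_pages_may_oom()
currently performs, only at the latest possible moment.

/* Hypothetical fragment, assuming it lives in mm/oom_kill.c. */
static bool oom_final_alloc_attempt(struct oom_control *oc)
{
	/*
	 * Would retry get_page_from_freelist() with ALLOC_WMARK_HIGH,
	 * i.e. the check __alloc_pages_may_oom() does before calling
	 * out_of_memory(), just moved to the last possible moment.
	 */
	return false;	/* placeholder */
}

static void oom_kill_or_bail(struct oom_control *oc)
{
	/* memory may have been freed (e.g. by the oom_reaper) meanwhile */
	if (oom_final_alloc_attempt(oc))
		return;		/* enough memory showed up, no victim needed */

	oom_kill_process(oc, "Out of memory");
}
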
-- 
Michal Hocko
SUSE Labs
