Message-ID: <alpine.DEB.2.21.2003101556270.177273@chino.kir.corp.google.com>
Date:   Tue, 10 Mar 2020 16:02:23 -0700 (PDT)
From:   David Rientjes <rientjes@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems

On Tue, 10 Mar 2020, Michal Hocko wrote:

> > When a process is oom killed as a result of memcg limits and the victim
> > is waiting to exit, nothing ends up actually yielding the processor back
> > to the victim on UP systems with preemption disabled.  Instead, the
> > charging process simply loops in memcg reclaim and eventually soft
> > locks up.
> > 
> > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > ...
> > Call Trace:
> >  shrink_node+0x40d/0x7d0
> >  do_try_to_free_pages+0x13f/0x470
> >  try_to_free_mem_cgroup_pages+0x16d/0x230
> >  try_charge+0x247/0xac0
> >  mem_cgroup_try_charge+0x10a/0x220
> >  mem_cgroup_try_charge_delay+0x1e/0x40
> >  handle_mm_fault+0xdf2/0x15f0
> >  do_user_addr_fault+0x21f/0x420
> >  page_fault+0x2f/0x40
> > 
> > Make sure that something ends up actually yielding the processor back to
> > the victim to allow for memory freeing.  The most appropriate place
> > appears to be shrink_node_memcgs(), where the iteration over all
> > descendant memcgs can be particularly lengthy.
> 
> There is a cond_resched() in shrink_lruvec() and another one in
> shrink_page_list(). Why doesn't either of them hit? Is it because there
> are no pages on the LRU list? The rss data suggests there should be
> enough pages to take that path. Or maybe it is the shrink_slab path
> that takes too long?
> 

I think it can be a number of things, most notably the 
mem_cgroup_protected() checks: a protected descendant makes the loop 
continue straight to the next memcg without ever reaching the reclaim 
paths that would have rescheduled, which is why the cond_resched() is 
added above those checks.  Rather than adding cond_resched() only for 
MEMCG_PROT_MIN and for certain MEMCG_PROT_LOW cases, it is added above 
the switch statement because the iteration itself may be very lengthy.
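
To make that concrete, here is roughly what the loop looks like with the 
patch applied (a sketch of 5.6-era shrink_node_memcgs() with the 
accounting details elided): if every descendant hits the MEMCG_PROT_MIN 
continue, the existing reschedule points in shrink_lruvec() and 
shrink_slab() are never reached.

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		cond_resched();	/* the fix: yield once per descendant memcg */

		switch (mem_cgroup_protected(target_memcg, memcg)) {
		case MEMCG_PROT_MIN:
			/* hard protection: skip this memcg entirely */
			continue;
		case MEMCG_PROT_LOW:
			/*
			 * Soft protection: also skipped unless low
			 * reclaim is explicitly allowed.
			 */
			if (!sc->memcg_low_reclaim) {
				sc->memcg_low_skipped = 1;
				continue;
			}
			break;
		case MEMCG_PROT_NONE:
			break;
		}

		/* per-memcg scanned/reclaimed accounting elided */
		shrink_lruvec(lruvec, sc);	/* has its own cond_resched() */
		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
			    sc->priority);	/* also has resched points */
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}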

We could also do it in shrink_zones() or the priority-based 
do_try_to_free_pages() loop, but I'd be nervous about the lengthy memcg 
iteration in shrink_node_memcgs() independent of this.
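
For comparison, the priority loop there looks roughly like this (a 
heavily abbreviated sketch of the 5.6-era do_try_to_free_pages()); a 
cond_resched() at this level would only run once per priority, not once 
per descendant memcg:

	do {
		sc->nr_scanned = 0;
		shrink_zones(zonelist, sc);	/* -> shrink_node() -> shrink_node_memcgs() */

		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;
	} while (--sc->priority >= 0);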

Any other ideas on how to ensure we actually try to resched for the 
benefit of an oom victim to prevent this soft lockup?
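
For anyone who wants to poke at this, a reproducer along these lines 
triggers the lockup here.  This is an illustrative sketch only, not the 
original repro: the cgroup v1 mount point, the 32MB limit, and the 
write_file() helper are all assumptions.  It needs root and a UP kernel 
without preemption.

/* repro.c: two processes fault in more anon memory than the memcg
 * limit allows; one becomes the oom victim while the other keeps
 * looping in memcg reclaim. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	const size_t size = 64UL << 20;	/* 64MB, twice the memcg limit */
	char pid[16];
	char *buf;
	size_t i;

	/* Create a memcg with a 32MB hard limit and join it; the child
	 * forked below inherits the membership. */
	mkdir("/sys/fs/cgroup/memory/repro", 0755);
	write_file("/sys/fs/cgroup/memory/repro/memory.limit_in_bytes",
		   "33554432");
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/sys/fs/cgroup/memory/repro/cgroup.procs", pid);

	fork();

	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	/* Each fault charges the memcg; once over the limit the charge
	 * path loops in try_charge() -> try_to_free_mem_cgroup_pages(),
	 * matching the call trace above. */
	for (;;)
		for (i = 0; i < size; i += 4096)
			buf[i] = 1;
}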

> The patch itself makes sense to me but I would like to see a better
> explanation of how this happens.
> 
> Thanks.
> 
> > Cc: Vlastimil Babka <vbabka@...e.cz>
> > Cc: Michal Hocko <mhocko@...nel.org>
> > Cc: stable@...r.kernel.org
> > Signed-off-by: David Rientjes <rientjes@...gle.com>
> > ---
> >  mm/vmscan.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2637,6 +2637,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >  		unsigned long reclaimed;
> >  		unsigned long scanned;
> >  
> > +		cond_resched();
> > +
> >  		switch (mem_cgroup_protected(target_memcg, memcg)) {
> >  		case MEMCG_PROT_MIN:
> >  			/*
> 
> -- 
> Michal Hocko
> SUSE Labs
> 
