Message-ID: <20200721133835.GL4061@dhcp22.suse.cz>
Date:   Tue, 21 Jul 2020 15:38:35 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Qian Cai <cai@....pw>
Cc:     linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [RFC PATCH] mm: silence soft lockups from unlock_page

On Tue 21-07-20 09:23:44, Qian Cai wrote:
> On Tue, Jul 21, 2020 at 02:17:52PM +0200, Michal Hocko wrote:
> > On Tue 21-07-20 07:44:07, Qian Cai wrote:
> > > 
> > > 
> > > > On Jul 21, 2020, at 7:25 AM, Michal Hocko <mhocko@...nel.org> wrote:
> > > > 
> > > > Are these really important? I believe I can dig that out from the
> > > > bug report, but I didn't really consider it important enough.
> > > 
> > > Please dig them out. We have also been running those things on
> > > “large” powerpc systems and never saw such soft lockups. Those
> > > details may give us some clues about the actual problem.
> > 
> > I strongly suspect this is not really relevant, but just FYI this is
> > a 16-node, 11.9TB system with 1536 CPUs.
> 
> Okay, we are now talking about the HPC special case. Just brainstorming
> some ideas here.
> 
> 
> 1) What about increasing the soft-lockup threshold early at boot and
> restoring it afterwards? As far as I can tell, those soft lockups are
> just a few bursts of things and then cure themselves once booting
> finishes.

Is this really a better option than silencing the soft lockup from the
code itself? What if the same access pattern happens later on?
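
For completeness, the knob in question is kernel.watchdog_thresh; the
soft lockup warning fires at 2*watchdog_thresh, i.e. 20s with the
default of 10. A boot-time bump could be as small as this completely
untested sketch (wiring it into the init sequence is left out):

#include <stdio.h>

/* Untested sketch: equivalent to `sysctl -w kernel.watchdog_thresh=N`.
 * Raise the threshold before the heavy early-boot activity and restore
 * the default once it has settled. */
static int set_watchdog_thresh(int secs)
{
	FILE *f = fopen("/proc/sys/kernel/watchdog_thresh", "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", secs);
	return fclose(f);
}

int main(void)
{
	if (set_watchdog_thresh(60))		/* while booting */
		return 1;
	/* ... wait for boot to settle ... */
	return set_watchdog_thresh(10) ? 1 : 0;	/* back to the default */
}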

> 2) Reading through the comments above page_waitqueue(), it says rare
> hash collisions can happen, so it sounds like this HPC case hits those
> hash collisions rather easily. Thus, do we need to deal with that
> instead?

As all of those seem to be the same class of process, I suspect it is
more likely that many processes are hitting the page fault on the same
file page, e.g. a code/library page.
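
For anybody following along: page_waitqueue() hashes pages into a small
shared table, so all of those CPUs faulting on one hot page serialize
on a single waitqueue even without any hash collision. Roughly, from
mm/filemap.c of this era:

#define PAGE_WAIT_TABLE_BITS 8
#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

static wait_queue_head_t *page_waitqueue(struct page *page)
{
	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
}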

> 3) The commit 62906027091f ("mm: add PageWaiters indicating tasks are waiting
> for a page bit") mentioned that,
> 
> "Putting two bits in the same word opens the opportunity to remove the memory
> barrier between clearing the lock bit and testing the waiters bit, after some
> work on the arch primitives (e.g., ensuring memory operand widths match and
> cover both bits)."
> 
> Do you happen to know if this only happens on powerpc?

I have only seen this single instance on that machine. I do not think
this is very much HW specific, but the ppc platform is likely more prone
to it. Just think of the memory itself: each memory block is notified
via udev, and ppc has very small memblocks (16M to 256M), whereas x86
will use 2G blocks on large machines.
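
(If you want to check a given machine, the hotplug block size is
exported in sysfs; a trivial reader, assuming nothing beyond that
standard file:)

#include <stdio.h>

/* Print the memory hotplug block size; the sysfs file holds a hex
 * number without a 0x prefix. */
int main(void)
{
	unsigned long long size;
	FILE *f = fopen("/sys/devices/system/memory/block_size_bytes", "r");

	if (!f || fscanf(f, "%llx", &size) != 1)
		return 1;
	printf("memory block size: %llu MB\n", size >> 20);
	return 0;
}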

> Also, we probably need to
> dig out whether those memory barriers are still there and could be
> removed to speed things up.

I would be really surprised if memory barriers mattered much. It sounds
much more likely that there is the same underlying problem as in commit
11a19c7b099f: there are just too many waiters on the page. That commit
prevents only the hard lockup part of the problem, by dropping the lock
and continuing after the bookmark. But, as mentioned in its changelog,
cond_resched() is not really an option because this path is called from
atomic context as well, so !PREEMPT kernels are still in the same boat.
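
To make the bookmark part concrete for anybody who has not read
11a19c7b099f: the waker wakes a bounded batch, parks a bookmark entry
in the queue, drops the queue lock so other lockers can get in, and
then resumes from the bookmark. A userspace toy of that pattern
follows; the names are invented for the sketch, not the kernel API:

#include <pthread.h>
#include <stdio.h>

#define BREAK_CNT 64	/* the kernel's WAITQUEUE_WALK_BREAK_CNT is 64 */

/* Toy waiter queue; names are invented for the sketch. */
struct waiter {
	struct waiter *prev, *next;
	int is_bookmark;
	int id;
};

static void insert_after(struct waiter *pos, struct waiter *n)
{
	n->prev = pos;
	n->next = pos->next;
	if (pos->next)
		pos->next->prev = n;
	pos->next = n;
}

static void remove_entry(struct waiter *n)
{
	if (n->prev)
		n->prev->next = n->next;
	if (n->next)
		n->next->prev = n->prev;
}

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void wake_all(struct waiter *head)
{
	struct waiter bm = { .is_bookmark = 1 };
	struct waiter *curr;

	pthread_mutex_lock(&queue_lock);
	curr = head->next;
	while (curr) {
		int cnt = 0, parked = 0;

		for (; curr; curr = curr->next) {
			/* in the kernel, concurrent wakers may see each
			 * other's bookmarks, so skip them */
			if (curr->is_bookmark)
				continue;
			printf("waking waiter %d\n", curr->id);
			if (++cnt >= BREAK_CNT && curr->next) {
				/* Park the bookmark so we can resume here
				 * even if the list changes while unlocked. */
				insert_after(curr, &bm);
				parked = 1;
				break;
			}
		}
		if (!parked)
			break;

		pthread_mutex_unlock(&queue_lock);
		/* Other wakers/waiters can take the lock here. This is
		 * also where a cond_resched() would have to live, which
		 * is exactly what atomic context forbids. */
		pthread_mutex_lock(&queue_lock);

		curr = bm.next;
		remove_entry(&bm);
	}
	pthread_mutex_unlock(&queue_lock);
}

int main(void)
{
	struct waiter head = { 0 }, w[200];
	struct waiter *tail = &head;

	for (int i = 0; i < 200; i++) {
		w[i] = (struct waiter){ .id = i };
		insert_after(tail, &w[i]);
		tail = &w[i];
	}
	wake_all(&head);
	return 0;
}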

I might have misunderstood something, of course, and would like to hear
where my thinking is wrong.
-- 
Michal Hocko
SUSE Labs
