Date:   Thu, 15 Nov 2018 15:32:04 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Baoquan He <bhe@...hat.com>
Cc:     David Hildenbrand <david@...hat.com>, linux-mm@...ck.org,
        pifang@...hat.com, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, aarcange@...hat.com
Subject: Re: Memory hotplug softlock issue

On Thu 15-11-18 21:38:40, Baoquan He wrote:
> On 11/15/18 at 02:19pm, Michal Hocko wrote:
> > On Thu 15-11-18 21:12:11, Baoquan He wrote:
> > > On 11/15/18 at 09:30am, Michal Hocko wrote:
> > [...]
> > > > It would also be good to find out whether this is fs specific. E.g., does
> > > > it make any difference if you use a different filesystem for your stress
> > > > testing?
> > > 
> > > I created a ramdisk and put the stress binary there, then ran stress -m 200.
> > > Now it seems to be stuck migrating libc-2.28.so, which is still on xfs, so
> > > xfs is a big suspect. At the bottom I paste the numactl output; you can see
> > > that it's the last 4G.
> > > 
> > > It seems to be trying to migrate libc-2.28.so, but the stress program keeps
> > > accessing and re-activating it.
> > 
> > Is this still with fault-around disabled? I have seen exactly the same
> > pattern in the bug I am working on. It was ext4 though.
> 
> After struggling for a long time, the second-to-last block, where
> libc-2.28.so is located, was finally reclaimed. Now it has come to the last
> memory block, which holds pages of the stress program itself. A swap
> migration entry has been made and it is trying to unmap, but it keeps
> looping there; see the dmesg excerpt below and the sketch that follows it.
> 
> [  +0.004445] migrating pfn 190ff2bb0 failed 
> [  +0.000013] page:ffffea643fcaec00 count:203 mapcount:201 mapping:ffff888dfb268f48 index:0x0
> [  +0.012809] shmem_aops 
> [  +0.000011] name:"stress" 
> [  +0.002550] flags: 0x1dfffffc008004e(referenced|uptodate|dirty|workingset|swapbacked)
> [  +0.010715] raw: 01dfffffc008004e ffffea643fcaec48 ffffea643fc714c8 ffff888dfb268f48
> [  +0.007828] raw: 0000000000000000 0000000000000000 000000cb000000c8 ffff888e72e92000
> [  +0.007810] page->mem_cgroup:ffff888e72e92000
[...]
> [  +0.004455] migrating pfn 190ff2bb0 failed 
> [  +0.000018] page:ffffea643fcaec00 count:203 mapcount:201 mapping:ffff888dfb268f48 index:0x0
> [  +0.014392] shmem_aops 
> [  +0.000010] name:"stress" 
> [  +0.002565] flags: 0x1dfffffc008004e(referenced|uptodate|dirty|workingset|swapbacked)
> [  +0.010675] raw: 01dfffffc008004e ffffea643fcaec48 ffffea643fc714c8 ffff888dfb268f48
> [  +0.007819] raw: 0000000000000000 0000000000000000 000000cb000000c8 ffff888e72e92000
> [  +0.007808] page->mem_cgroup:ffff888e72e92000
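As a rough illustration of why one persistently busy page shows up as a soft
lockup: the offlining path keeps retrying the migration of whatever is left in
the block until the block is empty. The sketch below is a hypothetical
user-space model of that retry structure, not the actual mm/memory_hotplug.c
or mm/migrate.c code; offline_block(), migrate_one_page() and page_is_busy()
are invented names for illustration.

#include <stdbool.h>
#include <stdio.h>

#define EAGAIN 11

/* Model of the one page in the last block that cannot be migrated. */
static bool page_is_busy(void)
{
	/* In the report above, something keeps re-referencing the page. */
	return true;
}

/* Stand-in for one migration attempt: fails while the page is busy. */
static int migrate_one_page(void)
{
	return page_is_busy() ? -EAGAIN : 0;
}

/* Stand-in for the offlining loop: retry until the block is emptied. */
static void offline_block(void)
{
	unsigned long attempts;

	for (attempts = 0; attempts < 5000000UL; attempts++) {
		if (migrate_one_page() == 0)
			return;	/* block emptied, offlining can finish */
	}
	/* No forward progress: this is the pattern behind the soft lockup. */
	printf("still no progress after %lu attempts\n", attempts);
}

int main(void)
{
	offline_block();
	return 0;
}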

OK, so this is the tmpfs-backed code of your stress test, which tells us
that the problem is not fs specific. The reference count is 2 more than the
map count, which is the expected state, so the reference that made the
migration fail must have been transient: it was elevated at the time the
migration was attempted and had already been dropped by the time the page
was dumped. Shmem supports fault-around, so this might still be possible
(assuming it is enabled). If not, we really need to dig deeper. I will
think of a debugging patch.
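To make the reference-count reasoning concrete, here is a small hypothetical
user-space model, not the actual mm/migrate.c code: migration only proceeds
when the page's reference count equals the value expected from its map count
(mapcount + 2, matching the count:203 / mapcount:201 dump above), and a
transient extra reference, such as one held while fault-around maps the page,
makes the attempt fail with -EAGAIN. The names fake_page, expected_refs() and
try_migrate() are invented for this sketch.

#include <stdio.h>

#define EAGAIN 11

struct fake_page {
	int refcount;	/* page_count() in the dump above: 203 */
	int mapcount;	/* mapcount in the dump above: 201 */
};

/* Stand-in for the expected-reference calculation used by migration. */
static int expected_refs(const struct fake_page *p)
{
	/* map count plus the two extra references described above as expected */
	return p->mapcount + 2;
}

/* Stand-in for the "is anyone else using this page?" check. */
static int try_migrate(const struct fake_page *p)
{
	if (p->refcount != expected_refs(p))
		return -EAGAIN;	/* unexpected reference: fail and retry */
	return 0;
}

int main(void)
{
	struct fake_page p = { .refcount = 203, .mapcount = 201 };

	/* Quiescent state, as in the dump: the attempt would succeed. */
	printf("quiescent: %d\n", try_migrate(&p));

	/* A transient reference held during the attempt makes it fail. */
	p.refcount++;
	printf("with transient ref: %d\n", try_migrate(&p));
	p.refcount--;

	return 0;
}

Because the extra reference is transient, a later dump of the page (as in the
log above) again shows the expected mapcount + 2 state, which matches the
failure pattern seen here.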
-- 
Michal Hocko
SUSE Labs
