Message-ID: <20171123143759.ja2qmsqbjxh4u36e@armageddon.cambridge.arm.com>
Date:   Thu, 23 Nov 2017 14:38:00 +0000
From:   Catalin Marinas <catalin.marinas@....com>
To:     Yisheng Xie <xieyisheng1@...wei.com>
Cc:     akpm@...ux-foundation.org, mhocko@...nel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] kmemleak: add scheduling point to kmemleak_scan

On Thu, Nov 23, 2017 at 08:23:08PM +0800, Yisheng Xie wrote:
> kmemleak_scan() scans struct page for each node, which can be really
> large and result in a soft lockup. We have seen a soft lockup when
> running a scan while compiling a kernel:
> 
>  [  220.561051] watchdog: BUG: soft lockup - CPU#53 stuck for 22s! [bash:10287]
>  [...]
>  [  220.753837] Call Trace:
>  [  220.756296]  kmemleak_scan+0x21a/0x4c0
>  [  220.760034]  kmemleak_write+0x312/0x350
>  [  220.763866]  ? do_wp_page+0x147/0x4c0
>  [  220.767521]  full_proxy_write+0x5a/0xa0
>  [  220.771351]  __vfs_write+0x33/0x150
>  [  220.774833]  ? __inode_security_revalidate+0x4c/0x60
>  [  220.779782]  ? selinux_file_permission+0xda/0x130
>  [  220.784479]  ? _cond_resched+0x15/0x30
>  [  220.788221]  vfs_write+0xad/0x1a0
>  [  220.791529]  SyS_write+0x52/0xc0
>  [  220.794758]  do_syscall_64+0x61/0x1a0
>  [  220.798411]  entry_SYSCALL64_slow_path+0x25/0x25
> 
> Fix this by adding a cond_resched() every MAX_SCAN_SIZE.
> 
> Suggested-by: Catalin Marinas <catalin.marinas@....com>
> Signed-off-by: Yisheng Xie <xieyisheng1@...wei.com>

Acked-by: Catalin Marinas <catalin.marinas@....com>
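
For context (the patch body is not quoted above), the fix amounts to a
periodic scheduling point in the per-pfn loop of kmemleak_scan() in
mm/kmemleak.c. A minimal sketch of that loop with the added call (the
surrounding loop is paraphrased from mainline kmemleak, and the exact
stride and hunk placement in v2 are assumptions):

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page;

		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);
		/* only scan pages that are in use */
		if (page_count(page) == 0)
			continue;
		scan_block(page, page + 1, NULL);
		/*
		 * new: yield periodically so the soft-lockup watchdog
		 * does not fire on nodes with huge pfn ranges; the
		 * power-of-two stride here is illustrative
		 */
		if (!(pfn & 63))
			cond_resched();
	}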
