Date:	Wed, 27 Apr 2011 21:34:31 +0200
From:	Bruno Prémont <bonbons@...ux-vserver.org>
To:	Pádraig Brady <P@...igBrady.com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	paulmck@...ux.vnet.ibm.com, Mike Frysinger <vapier.adi@...il.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org,
	"Paul E. McKenney" <paul.mckenney@...aro.org>,
	Pekka Enberg <penberg@...nel.org>
Subject: Re: 2.6.39-rc4+: Kernel leaking memory during FS scanning,
 regression?

On Wed, 27 April 2011 Pádraig Brady wrote:
> On 27/04/11 19:41, Bruno Prémont wrote:
> > On Wed, 27 April 2011 Bruno Prémont wrote:
> >> On Wed, 27 Apr 2011 00:28:37 +0200 (CEST) Thomas Gleixner wrote:
> >>> On Tue, 26 Apr 2011, Linus Torvalds wrote:
> >>>> On Tue, Apr 26, 2011 at 10:09 AM, Bruno Prémont wrote:
> >>>>> Just in case, /proc/$(pidof rcu_kthread)/status shows ~20k voluntary
> >>>>> context switches and exactly one non-voluntary one.
> >>>>>
> >>>>> In addition when rcu_kthread has stopped doing its work
> >>>>> `swapoff $(swapdevice)` seems to block forever (at least normal shutdown
> >>>>> blocks on disabling swap device).
> > 
> > Apparently it's not swapoff but `umount -a -t tmpfs` that's getting
> > stuck here. Manual swapoff worked.
> 
> Anything to do with this?
> http://thread.gmane.org/gmane.linux.kernel.mm/60953/

I don't think so; if it is related, it's only loosely so.

From the trace you trimmed from the quote, it's visible that umount gets
hit by the non-operating RCU kthread.
Maybe the presence of the RCU barrier in this trace has some relation to
the above thread, but I don't see it at first glance.

[ 1714.960735] umount          D 5a000040  5668 20331  20324 0x00000000
[ 1714.960735]  c3c99e5c 00000086 dd407900 5a000040 dd25a1a8 dd407900 dd25a120 c3c99e0c
[ 1714.960735]  c3c99e24 c10c1be2 c14d9f20 c3c99e5c c3c8c680 c3c8c680 000000bb c3c99e24
[ 1714.960735]  c10c0b88 dd25a120 dd407900 ddfd4b40 c3c99e4c ddfc9d20 dd402380 5a000010
[ 1714.960735] Call Trace:
[ 1714.960735]  [<c10c1be2>] ? check_object+0x92/0x210
[ 1714.960735]  [<c10c0b88>] ? init_object+0x38/0x70
[ 1714.960735]  [<c10c1be2>] ? check_object+0x92/0x210
[ 1714.960735]  [<c13cb37d>] schedule_timeout+0x16d/0x280
[ 1714.960735]  [<c10c0b88>] ? init_object+0x38/0x70
[ 1714.960735]  [<c10c2122>] ? free_debug_processing+0x112/0x1f0
[ 1714.960735]  [<c10a3791>] ? shmem_put_super+0x11/0x20
[ 1714.960735]  [<c13cae9c>] wait_for_common+0x9c/0x150
[ 1714.960735]  [<c102c890>] ? try_to_wake_up+0x170/0x170
[ 1714.960735]  [<c13caff2>] wait_for_completion+0x12/0x20
[ 1714.960735]  [<c1075ad7>] rcu_barrier_sched+0x47/0x50
                             ^^^^^^^^^^^^^^^^^
[ 1714.960735]  [<c104d3c0>] ? alloc_pid+0x370/0x370
[ 1714.960735]  [<c10ce74a>] deactivate_locked_super+0x3a/0x60
[ 1714.960735]  [<c10ce948>] deactivate_super+0x48/0x70
[ 1714.960735]  [<c10e7427>] mntput_no_expire+0x87/0xe0
[ 1714.960735]  [<c10e7800>] sys_umount+0x60/0x320
[ 1714.960735]  [<c10b231a>] ? remove_vma+0x3a/0x50
[ 1714.960735]  [<c10b3b22>] ? do_munmap+0x212/0x2f0
[ 1714.960735]  [<c10e7ad9>] sys_oldumount+0x19/0x20
[ 1714.960735]  [<c13cce10>] sysenter_do_call+0x12/0x26
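
For reference, rcu_barrier_sched() essentially queues a callback behind all
pending call_rcu_sched() callbacks and sleeps until the last of them has run.
The following is only a rough, simplified sketch of that shape (not the actual
2.6.39 implementation; the function and variable names here are made up), just
to show why umount sleeps forever once the kthread that invokes callbacks
stops running:

#include <linux/rcupdate.h>
#include <linux/completion.h>
#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

static atomic_t barrier_cbs_pending;           /* callbacks not yet invoked */
static struct completion barrier_done;         /* signalled by the last callback */
static DEFINE_PER_CPU(struct rcu_head, barrier_head);

static void barrier_callback(struct rcu_head *unused)
{
	/* Runs only when the RCU callback machinery (rcu_kthread on this
	 * config) actually processes callbacks; the last one wakes the waiter. */
	if (atomic_dec_and_test(&barrier_cbs_pending))
		complete(&barrier_done);
}

static void sketch_rcu_barrier_sched(void)
{
	int cpu;

	init_completion(&barrier_done);
	atomic_set(&barrier_cbs_pending, 1);

	/* Queue one callback per CPU behind everything already queued. */
	for_each_online_cpu(cpu) {
		atomic_inc(&barrier_cbs_pending);
		call_rcu_sched(&per_cpu(barrier_head, cpu), barrier_callback);
	}

	/* Drop the initial reference taken above... */
	if (atomic_dec_and_test(&barrier_cbs_pending))
		complete(&barrier_done);

	/* ...and sleep until every callback has run.  This is the
	 * wait_for_completion() frame in the umount backtrace: if
	 * rcu_kthread never runs again, we sleep here forever. */
	wait_for_completion(&barrier_done);
}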

Bruno
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
