Date:	Wed, 28 Oct 2015 20:57:28 +0900
From:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To:	cl@...ux.com, htejun@...il.com
Cc:	akpm@...ux-foundation.org, mhocko@...nel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	hannes@...xchg.org, mgorman@...e.de
Subject: Re: [patch 3/3] vmstat: Create our own workqueue

Christoph Lameter wrote:
> On Wed, 28 Oct 2015, Tejun Heo wrote:
> 
> > The only thing necessary here is WQ_MEM_RECLAIM.  I don't see how
> > WQ_SYSFS and WQ_FREEZABLE make sense here.
> 
I can still trigger a silent livelock with this patchset applied.

----------
[  272.283217] MemAlloc-Info: 9 stalling task, 0 dying task, 0 victim task.
[  272.285089] MemAlloc: a.out(11325) gfp=0x24280ca order=0 delay=19164
[  272.286817] MemAlloc: a.out(11326) gfp=0x242014a order=0 delay=19104
[  272.288512] MemAlloc: vmtoolsd(1897) gfp=0x242014a order=0 delay=19072
[  272.290280] MemAlloc: kworker/1:3(11286) gfp=0x2400000 order=0 delay=19056
[  272.292114] MemAlloc: sshd(11202) gfp=0x242014a order=0 delay=18927
[  272.293908] MemAlloc: tuned(2073) gfp=0x242014a order=0 delay=18799
[  272.297360] MemAlloc: nmbd(4752) gfp=0x242014a order=0 delay=16532
[  272.299115] MemAlloc: auditd(529) gfp=0x242014a order=0 delay=13073
[  272.302248] MemAlloc: irqbalance(1696) gfp=0x242014a order=0 delay=10529
(...snipped...)
[  272.851035] Showing busy workqueues and worker pools:
[  272.852583] workqueue events: flags=0x0
[  272.853942]   pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256
[  272.855781]     pending: vmw_fb_dirty_flush [vmwgfx]
[  272.857500]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[  272.859359]     pending: vmpressure_work_fn
[  272.860840] workqueue events_freezable_power_: flags=0x84
[  272.862461]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=2/256
[  272.864479]     in-flight: 11286:disk_events_workfn
[  272.866065]     pending: disk_events_workfn
[  272.867587] workqueue vmstat: flags=0x8
[  272.868942]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[  272.870785]     pending: vmstat_update
[  272.872248] pool 2: cpus=1 node=0 flags=0x0 nice=0 workers=4 idle: 14 218 43
----------
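
Side note: a minimal sketch of the kind of dedicated queue being
discussed, assuming it is created with only the WQ_MEM_RECLAIM flag
Tejun refers to above (function and variable names here are
illustrative, not taken from the patch):

----------
/*
 * Sketch only, not the actual patch: a dedicated workqueue carrying
 * just WQ_MEM_RECLAIM.  That flag gives the queue a rescuer thread,
 * so its work items can still run when new worker threads cannot be
 * created under memory pressure.  (flags=0x8 for the vmstat workqueue
 * in the dump above corresponds to WQ_MEM_RECLAIM.)
 */
static struct workqueue_struct *vmstat_wq;

static int __init vmstat_wq_init(void)
{
	vmstat_wq = alloc_workqueue("vmstat", WQ_MEM_RECLAIM, 0);
	return vmstat_wq ? 0 : -ENOMEM;
}
subsys_initcall(vmstat_wq_init);
----------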

> 2. Create a separate workqueue so that the vmstat updater
>    is not blocked by other work requests. This creates a
>    new kernel thread <sigh> and avoids the issue of
>    differentials not folded in a timely fashion.

Did you really mean "the vmstat updater is not blocked by other
work requests"?
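
For reference, a rough sketch of how the updater would be pointed at
such a dedicated queue instead of the shared system workqueue; this is
only an illustration (the helper name vmstat_requeue() is hypothetical,
while vmstat_wq, vmstat_work and sysctl_stat_interval are assumed to be
the ones used by mm/vmstat.c):

----------
/*
 * Sketch only: requeue the per-cpu vmstat update on the dedicated
 * vmstat_wq, intended to keep it from waiting behind unrelated work
 * items that other subsystems put on the shared system workqueue.
 */
static void vmstat_requeue(int cpu)
{
	queue_delayed_work_on(cpu, vmstat_wq,
			      &per_cpu(vmstat_work, cpu),
			      round_jiffies_relative(sysctl_stat_interval));
}
----------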
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
