Date:	Fri, 23 Dec 2011 10:17:25 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Tejun Heo <tj@...nel.org>, avi@...hat.com, nate@...nel.net,
	cl@...ux-foundation.org, oleg@...hat.com, axboe@...nel.dk,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCHSET] block, mempool, percpu: implement percpu mempool and
 fix blkcg percpu alloc deadlock

On Thu, Dec 22, 2011 at 05:14:32PM -0800, Andrew Morton wrote:

[..]
> Separately...
> 
> Mixing mempools and percpu_alloc() in the proposed fashion seems a
> pretty poor fit.  mempools are for high-frequency low-level allocations
> which have key characteristics: there are typically a finite number of
> elements in flight and we *know* that elements are being freed in a
> timely manner.
> 
> This doesn't fit with percpu_alloc(), which is a very heavyweight
> operation requiring GFP_KERNEL and it doesn't fit with
> blkio_group_stats_cpu because blkio_group_stats_cpu does not have the
> "freed in a timely manner" behaviour.
> 
> To resolve these things you've added the workqueue to keep the pool
> populated, which turns percpu_mempool into a quite different concept
> which happens to borrow some mempool code (not necessarily a bad thing).
> This will result in some memory wastage, keeping that pool full.

Tejun seems to have created one global pool for all devices, with a
minimum of 8 objects. Keeping just 8 objects aside for the whole system
is not significant memory wastage, IMHO. It is much better than
preallocating for every possible (cgroup, device) pair irrespective of
whether IO is actually happening on that pair.
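As a back-of-envelope comparison of the two sizing strategies (the
cgroup and device counts below are made up for illustration; only the
reserve of 8 comes from the discussion):

```c
#define POOL_MIN 8   /* global reserve the worker keeps filled */

/* Objects required if every (cgroup, device) pair were instead
 * preallocated up front, whether or not IO ever hits that pair.
 * The counts passed in are hypothetical, not from the patchset. */
static int prealloc_all_pairs(int nr_cgroups, int nr_devices)
{
    return nr_cgroups * nr_devices;
}
```

With, say, 100 cgroups and 16 devices, up-front preallocation would pin
1600 objects versus a fixed reserve of 8.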

> 
> More significantly, it's pretty unreliable: if the allocations outpace
> the kernel thread's ability to refill the pool, all we can do is to
> wait for the kernel thread to do some work.  But we're holding
> low-level locks while doing that wait, which will block the kernel
> thread.  Deadlock.

Sorry, I did not understand this point. In the IO path we do not wait
for the kernel thread to do any work. If there are no elements in the
pool, we simply return without creating the group, and the IO is
accounted to the root group. When the next IO comes in, we try again,
and if by that time the worker thread has succeeded in refilling the
pool, we allocate the group.

So under tight memory conditions the IO is accounted to the root cgroup
rather than the actual cgroup. Once the memory situation eases, IO is
accounted to the right cgroup.
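The non-blocking fallback described above can be sketched roughly as
follows (a simplified stand-in, not the actual patchset code; the
struct name, function names, and the flag replacing queue_work() are
all hypothetical):

```c
#include <stddef.h>

#define POOL_MIN 8

struct blkg_stats { long sectors; };            /* hypothetical payload */

static struct blkg_stats backing[POOL_MIN];     /* stands in for worker allocations */
static struct blkg_stats *pool[POOL_MIN];
static int pool_count;
static int refill_requested;

/* Fast path, called under low-level locks: must never sleep.  On an
 * empty pool it only flags a refill (stand-in for queue_work()) and
 * returns NULL; the caller then accounts the IO to the root group. */
static struct blkg_stats *stats_alloc_nowait(void)
{
    if (pool_count == 0) {
        refill_requested = 1;
        return NULL;
    }
    return pool[--pool_count];
}

/* Worker context: may block (GFP_KERNEL in the real patchset).  Here
 * it just hands back slots from the static backing store. */
static void stats_pool_refill(void)
{
    while (pool_count < POOL_MIN) {
        pool[pool_count] = &backing[pool_count];
        pool_count++;
    }
    refill_requested = 0;
}
```

Because stats_alloc_nowait() never waits on the worker, a failed
allocation degrades accounting (root group instead of the actual
cgroup) rather than blocking the IO path.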

I can't see any deadlock happening. Did I miss something?

Thanks
Vivek
