Date:	Thu, 22 Dec 2011 15:54:55 -0800
From:	Tejun Heo <tj@...nel.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	avi@...hat.com, nate@...nel.net, cl@...ux-foundation.org,
	oleg@...hat.com, axboe@...nel.dk, vgoyal@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCHSET] block, mempool, percpu: implement percpu mempool
 and fix blkcg percpu alloc deadlock

Hello,

On Thu, Dec 22, 2011 at 03:41:38PM -0800, Andrew Morton wrote:
> All the code I'm looking at assumes that blkio_group.stats_cpu is
> non-zero.  Won't the kernel just go splat if that allocation failed?
> 
> If the code *does* correctly handle ->stats_cpu == NULL then we have
> options.

I think it's supposed to just skip creating the whole blkio_group if the
percpu allocation fails, so ->stats_cpu of existing groups is guaranteed
to be non-NULL.
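
In other words, the creation path is roughly the following shape (a
minimal sketch of that behavior, not the actual blk-cgroup code;
blkio_group_create() is just an illustrative name, the struct/field
names follow the ones discussed in this thread):

static struct blkio_group *blkio_group_create(gfp_t gfp)
{
	struct blkio_group *blkg;

	blkg = kzalloc(sizeof(*blkg), gfp);
	if (!blkg)
		return NULL;

	/*
	 * If the percpu stats allocation fails, don't create the
	 * group at all; that way ->stats_cpu of every live group
	 * is guaranteed to be non-NULL.
	 */
	blkg->stats_cpu = alloc_percpu(struct blkio_group_stats_cpu);
	if (!blkg->stats_cpu) {
		kfree(blkg);
		return NULL;
	}

	return blkg;
}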

> a) Give userspace a new procfs/debugfs file to start stats gathering
>    on a particular cgroup/request_queue pair.  Allocate the stats
>    memory in that.
> 
> b) Or allocate stats_cpu on the first call to blkio_read_stat_cpu()
>    and return zeroes for this first call.

Hmmm... IIRC, the stats aren't exported per cgroup-request_queue pair;
reads are issued per cgroup, so we can't tell which request_queues
userland is actually interested in.
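
For what it's worth, (b) would presumably look something like the sketch
below (hypothetical shape only; locking and error handling are omitted
and the summing helper is made up).  The problem above is that the read
is issued per cgroup, so this would end up allocating stats for every
request_queue the cgroup has touched, not just the ones userland cares
about:

static u64 blkio_read_stat_cpu(struct blkio_group *blkg, int type)
{
	/*
	 * Option (b): allocate ->stats_cpu lazily on the first read
	 * and report zero for that read.
	 */
	if (!blkg->stats_cpu) {
		blkg->stats_cpu = alloc_percpu(struct blkio_group_stats_cpu);
		return 0;	/* nothing accumulated yet */
	}

	/* hypothetical helper summing the per-CPU counters */
	return blkio_sum_stat_cpu(blkg, type);
}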

> c) Or change the low-level code to do
>    blkio_group.want_stats_cpu=true, then test that at the top level
>    after we've determined that blkio_group.stats_cpu is NULL.

Not following.  Where's the "top level"?

> d) Or, worse, punt the allocation into a workqueue thread.

I would much prefer using a mempool to this.  The two are essentially
the same approach.
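
That is, both (d) and the mempool punt the sleeping percpu allocation to
a context that is allowed to block; the mempool variant just keeps a
pre-filled reserve so the atomic caller can usually be satisfied on the
spot.  Very roughly, and with made-up function names (the real interface
is whatever the patchset defines), the creation path would become:

	/*
	 * Draw a percpu area from a pre-allocated reserve; the pool
	 * refills itself from process context, so this is safe to
	 * call from the IO path with GFP_NOWAIT.
	 */
	blkg->stats_cpu = percpu_mempool_alloc(blkio_stats_pool, GFP_NOWAIT);
	if (!blkg->stats_cpu) {
		/* reserve exhausted -- skip creating the group */
		kfree(blkg);
		return NULL;
	}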

Thanks.

-- 
tejun
