Date:	Thu, 22 Dec 2011 15:54:55 -0800
From:	Tejun Heo <>
To:	Andrew Morton <>
Subject: Re: [PATCHSET] block, mempool, percpu: implement percpu mempool
 and fix blkcg percpu alloc deadlock


On Thu, Dec 22, 2011 at 03:41:38PM -0800, Andrew Morton wrote:
> All the code I'm looking at assumes that blkio_group.stats_cpu is
> non-zero.  Won't the kernel just go splat if that allocation failed?
> If the code *does* correctly handle ->stats_cpu == NULL then we have
> options.

I think it's supposed to just skip creating the whole blkio_group if
the percpu allocation fails, so ->stats_cpu is guaranteed to be
non-NULL for every existing group.
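
The invariant described above (fail group creation entirely rather than let a half-initialized group escape) can be sketched in plain userspace C; the names `stats_group` and `stats_group_create` below are illustrative stand-ins, not the kernel's actual identifiers:

```c
#include <stdlib.h>

/* Illustrative stand-in for blkio_group: a struct whose per-CPU
 * stats area must exist whenever the group itself exists. */
struct stats_group {
	long *stats_cpu;	/* stand-in for the percpu stats allocation */
};

/* Create the group only if every allocation succeeds; on any
 * failure, free the partial state and return NULL.  Callers may
 * then rely on g->stats_cpu being non-NULL for every live group. */
struct stats_group *stats_group_create(size_t ncpus)
{
	struct stats_group *g = malloc(sizeof(*g));
	if (!g)
		return NULL;
	g->stats_cpu = calloc(ncpus, sizeof(*g->stats_cpu));
	if (!g->stats_cpu) {
		free(g);	/* no half-built group escapes */
		return NULL;
	}
	return g;
}

void stats_group_destroy(struct stats_group *g)
{
	if (g) {
		free(g->stats_cpu);
		free(g);
	}
}
```

With this shape, every consumer that holds a live group pointer can dereference ->stats_cpu without a NULL check, which is the assumption the stats code makes.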

> a) Give userspace a new procfs/debugfs file to start stats gathering
>    on a particular cgroup/request_queue pair.  Allocate the stats
>    memory in that.
> b) Or allocate stats_cpu on the first call to blkio_read_stat_cpu()
>    and return zeroes for this first call.

Hmmm... IIRC, the stats aren't exported per cgroup-request_queue pair,
so reads are issued per cgroup.  We can't tell which request_queues
userland is actually interested in.

> c) Or change the low-level code to do
>    blkio_group.want_stats_cpu=true, then test that at the top level
>    after we've determined that blkio_group.stats_cpu is NULL.

Not following.  Where's the "top level"?

> d) Or, worse, punt the allocation into a workqueue thread.

I would much prefer using a mempool to this; the two are essentially
the same approach.
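
The mempool idea is to keep a small reserve of pre-allocated elements so an allocation can still be satisfied when the normal allocator fails.  A minimal userspace sketch of that shape follows; it is not the kernel's mempool_t API, and `POOL_RESERVE` and the `pool_*` names are made up for illustration:

```c
#include <stdlib.h>

#define POOL_RESERVE 4	/* illustrative reserve size */

/* A minimal reserve pool: up to POOL_RESERVE pre-allocated
 * elements held back for use when the normal allocator fails. */
struct pool {
	size_t elem_size;
	void  *reserve[POOL_RESERVE];
	int    nr_free;		/* how many reserve slots are filled */
};

struct pool *pool_create(size_t elem_size)
{
	struct pool *p = malloc(sizeof(*p));
	if (!p)
		return NULL;
	p->elem_size = elem_size;
	p->nr_free = 0;
	for (int i = 0; i < POOL_RESERVE; i++) {
		void *e = malloc(elem_size);
		if (!e)
			break;	/* keep whatever reserve we managed */
		p->reserve[p->nr_free++] = e;
	}
	return p;
}

/* Try the normal allocator first; fall back to the reserve. */
void *pool_alloc(struct pool *p)
{
	void *e = malloc(p->elem_size);
	if (e)
		return e;
	return p->nr_free > 0 ? p->reserve[--p->nr_free] : NULL;
}

/* Refill the reserve before handing memory back to the system. */
void pool_free(struct pool *p, void *e)
{
	if (p->nr_free < POOL_RESERVE)
		p->reserve[p->nr_free++] = e;
	else
		free(e);
}
```

The point of contrast with a workqueue punt: the reserve is filled up front at pool-creation time, so the allocation path itself never has to wait on another context.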

