Message-ID: <7b8d68c6-9a1c-dc19-e430-e044e4c4f210@linux.alibaba.com>
Date:   Fri, 10 Sep 2021 10:12:32 +0800
From:   "taoyi.ty" <escape@...ux.alibaba.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     gregkh@...uxfoundation.org, lizefan.x@...edance.com,
        hannes@...xchg.org, mcgrof@...nel.org, keescook@...omium.org,
        yzaikin@...gle.com, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        shanpeic@...ux.alibaba.com
Subject: Re: [RFC PATCH 0/2] support cgroup pool in v1

I am glad to receive your reply.

The cgroup pool is a relatively simple solution that I think can
solve the problem.

I have tried making the locking more granular, but in the end found
it too difficult: cgroup_mutex protects almost all operations related
to cgroups. Without cgroup_mutex, I have no idea how to design a
locking mechanism that takes both concurrent performance and the
existing interfaces into account. Do you have any good advice?
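
Roughly, the fast path we have in mind looks like this (a sketch only,
not the actual patch; the pool_node member, create_hidden_cgroup() and
rename_into_place() are illustrative names):

/*
 * Pooled cgroups are created in advance (slow path, may take
 * cgroup_mutex); mkdir then claims one under a separate spinlock
 * and renames it into place, keeping cgroup_mutex off the fast path.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct cgroup_pool {
	spinlock_t lock;		/* protects the free list only */
	struct list_head free_list;	/* pre-created, clean cgroups */
	unsigned int nr_free;
};

/* Refill in advance or from a worker: may sleep, may take cgroup_mutex. */
static void cgroup_pool_refill(struct cgroup_pool *pool, unsigned int n)
{
	while (n--) {
		struct cgroup *cgrp = create_hidden_cgroup();

		spin_lock(&pool->lock);
		list_add_tail(&cgrp->pool_node, &pool->free_list);
		pool->nr_free++;
		spin_unlock(&pool->lock);
	}
}

/* Fast path for mkdir: pop a pooled cgroup instead of creating one. */
static struct cgroup *cgroup_pool_get(struct cgroup_pool *pool,
				      const char *name)
{
	struct cgroup *cgrp = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->free_list)) {
		cgrp = list_first_entry(&pool->free_list,
					struct cgroup, pool_node);
		list_del(&cgrp->pool_node);
		pool->nr_free--;
	}
	spin_unlock(&pool->lock);

	if (cgrp)
		rename_into_place(cgrp, name);	/* outside the spinlock */
	return cgrp;	/* NULL: fall back to the normal create path */
}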


thanks,
Yi Tao


On 2021/9/9 12:35 AM, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 08, 2021 at 08:15:11PM +0800, Yi Tao wrote:
>> In order to solve this long-tail delay problem, we designed a cgroup
>> pool. The pool creates a certain number of cgroups in advance, so that
>> when a user creates a cgroup through the mkdir system call, a clean
>> cgroup can be obtained from the pool quickly. The cgroup pool draws on
>> the idea of cgroup rename: by creating and renaming cgroups in advance,
>> it shrinks the critical section of cgroup creation and uses a spinlock
>> distinct from cgroup_mutex, which reduces scheduling overhead on the
>> one hand and eases contention with attaching processes on the other.
> I'm not sure this is the right way to go about it. There are more
> conventional ways to improve scalability - making locking more granular
> and hunting down specific operations which take a long time. I don't
> think cgroup management operations need the level of scalability which
> requires front caching.
>
> Thanks.
>
