Date:   Tue, 6 Sep 2022 14:33:33 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Zhongkun He <hezhongkun.hzk@...edance.com>
Cc:     hannes@...xchg.org, roman.gushchin@...ux.dev,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org, lizefan.x@...edance.com,
        wuyun.abel@...edance.com
Subject: Re: [External] Re: [PATCH] cgroup/cpuset: Add a new isolated
 mems.policy type.

On Tue 06-09-22 18:37:40, Zhongkun He wrote:
> > On Mon 05-09-22 18:30:55, Zhongkun He wrote:
> > > Hi Michal, thanks for your reply.
> > > 
> > > The current 'mempolicy' is hierarchically independent. The child's default
> > > value is inherited from the parent. Modifications of the child's policy are
> > > not restricted by the parent.
> > 
> > This breaks the fundamental cgroup property of hierarchical enforcement of
> > each property. As such it is a no go.
> > 
> > > Of course, there are other options, such as requiring the child's policy
> > > mode to be the same as the parent's. The child's nodes could be a subset of
> > > the parent's, but the interleave type would be complicated; that's why
> > > hierarchical independence is used. It would be great if you have other
> > > suggestions.
> > 
> > Honestly, I am not really sure cgroup cpusets is a great fit for this
> > usecase. It would probably be better to elaborate some more on what the
> > existing shortcomings are and what you would like to achieve. Just stating
> > that the syscall is a hard to use interface is not quite clear on its own.
> > 
> > Btw. have you noticed this question?
> > 
> > > > What is the hierarchical behavior of the policy? Say parent has a
> > > > stronger requirement (say bind) than a child (prefer)?
> > > > > How to use the mempolicy interface:
> > > > > 	echo prefer:2 > /sys/fs/cgroup/zz/cpuset.mems.policy
> > > > > 	echo bind:1-3 > /sys/fs/cgroup/zz/cpuset.mems.policy
> > > > > 	echo interleave:0,1,2,3 > /sys/fs/cgroup/zz/cpuset.mems.policy
> > > > 
> > > > Am I just confused or did you really mean to combine all these
> > > > together?
> > 
> 
> Hi Michal, thanks for your reply.
> 
> >> Say parent has a stronger requirement (say bind) than a child (prefer)?
> 
> Yes, combine all these together.

What is the semantic of the resulting policy?

> The parent's tasks will use 'bind' and the child's will use 'prefer'. This is
> the current implementation, and we can discuss and modify it together if there
> are other suggestions.
>
> 1: Existing shortcomings
> 
> In our use case, the application and the control plane are two separate
> systems. When the application is created, it doesn't know how to use memory,
> and it doesn't care. The control plane decides the memory usage policy based
> on different factors (the attributes of the application itself, its priority,
> the remaining resources of the system). Currently, numactl is used to set the
> policy at program startup, and the child processes will inherit the
> mempolicy.

Yes this is common practice I have seen so far.
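Just to spell out that pattern (roughly what numactl ends up doing is a
set_mempolicy(2) call before exec'ing the workload; the node numbers and
the "./app" binary below are made up):

#include <numaif.h>	/* set_mempolicy(), MPOL_* -- link with -lnuma */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* restrict all future allocations of this task to nodes 0-1 */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask) + 1))
		perror("set_mempolicy");

	/*
	 * exec the real workload; the task policy is inherited across
	 * fork()/execve() but cannot be changed from the outside later
	 */
	execlp("./app", "./app", (char *)NULL);
	return 1;
}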

> But we can't dynamically adjust the memory policy; without a restart, the
> memory policy will not change.

Do you really need to change the policy itself or only the effective
nodemask? I mean, what is your usecase for going from, say, mbind to the
preferred policy? Do you need any other policy than bind and preferred?
 
> 2: Our goals
> 
> For the above reasons, we want to create a mempolicy at the cgroup level.
> Usually processes under a cgroup have the same priority and attributes, and
> we can dynamically adjust the memory allocation strategy according to the
> remaining resources of the system. For example, a low-priority cgroup uses
> the 'bind:2-3' policy, and a high-priority cgroup uses 'bind:0-1'. When
> resources are insufficient, the control plane changes them to 'bind:3' and
> 'bind:0-2', etc. Furthermore, more mempolicies can be added, such as
> allocating memory according to node weight.

Yes, I do understand that you want to change the node affinity and that
is already possible with the cpuset cgroup. The existing constraint is
that the policy is hardcoded to mbind IIRC. So you cannot really
implement a dynamic preferred policy, which would make some sense to me.
The question is how to implement that with a sensible semantic. It is
hard to partition the system into several cgroups if a subset is allowed
to spill over to others. Say something like the following:
	root (nodes=0-3)
       /    \
A (0, 1)     B (2, 3)

If both are MBIND then this makes sense because they are kind of isolated
(at least for user allocations), but if B is PREFERRED and therefore
allowed to use nodes 0 and 1, then it can deplete the memory from A and
the isolation doesn't work at all.

I can imagine that all cgroups would use the PREFERRED policy and then
nobody can expect anything and the configuration is mostly best effort.
But it feels like this is an abuse of the cgroup interface and a proper
syscall interface is likely due. Would it make more sense to add
pidfd_set_mempolicy and allow a sufficiently privileged process to
manipulate the default memory policy of a remote process?
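Nothing like that exists today, but I would imagine a prototype roughly
mirroring set_mempolicy(2) with a pidfd target, e.g.:

/* hypothetical prototype only -- set_mempolicy(2) plus a pidfd target */
long pidfd_set_mempolicy(int pidfd, int mode,
			 const unsigned long *nodemask,
			 unsigned long maxnode,
			 unsigned int flags);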
-- 
Michal Hocko
SUSE Labs
