Date:   Mon, 15 Mar 2021 16:40:12 -0700
From:   Jacob Pan <jacob.jun.pan@...el.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Vipin Sharma <vipinsh@...gle.com>, mkoutny@...e.com,
        rdunlap@...radead.org, thomas.lendacky@....com,
        brijesh.singh@....com, jon.grimm@....com, eric.vantassell@....com,
        pbonzini@...hat.com, hannes@...xchg.org, frankja@...ux.ibm.com,
        borntraeger@...ibm.com, corbet@....net, seanjc@...gle.com,
        vkuznets@...hat.com, wanpengli@...cent.com, jmattson@...gle.com,
        joro@...tes.org, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, hpa@...or.com, gingell@...gle.com,
        rientjes@...gle.com, dionnaglaze@...gle.com, kvm@...r.kernel.org,
        x86@...nel.org, cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, "Tian, Kevin" <kevin.tian@...el.com>,
        "Liu, Yi L" <yi.l.liu@...el.com>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Jason Gunthorpe <jgg@...dia.com>,
        Jacob Pan <jacob.jun.pan@...ux.intel.com>,
        "jean-philippe@...aro.org" <jean-philippe@...aro.org>,
        jacob.jun.pan@...el.com
Subject: Re: [RFC v2 2/2] cgroup: sev: Miscellaneous cgroup documentation.

Hi Tejun,

On Mon, 15 Mar 2021 18:19:35 -0400, Tejun Heo <tj@...nel.org> wrote:

> Hello,
> 
> On Mon, Mar 15, 2021 at 03:11:55PM -0700, Jacob Pan wrote:
> > > Migration itself doesn't have restrictions but all resources are
> > > distributed on the same hierarchy, so the controllers are supposed to
> > > follow the same conventions that can be implemented by all
> > > controllers. 
> > Got it, I guess that is the behavior required by the unified hierarchy.
> > Would cgroup v1 be OK? But I am guessing we are not extending v1?
> 
> A new cgroup1-only controller is unlikely to be accepted.
> 
> > The IOASIDs are programmed into devices to generate DMA requests tagged
> > with them. The IOMMU has a per-device IOASID table where each entry has
> > two pointers:
> >  - the PGD of the guest process
> >  - the PGD of the host process
> > 
> > The result of this two-stage/nested translation is that we can share
> > virtual addresses (SVA) between the guest process and DMA. The host
> > process needs to allocate multiple IOASIDs since one IOASID is needed
> > for each guest process that wants SVA.
> > 
> > The DMA binding among device, IOMMU, and process is set up via a series
> > of user APIs (e.g. via VFIO).
> > 
> > If a process calls fork(), the children do not inherit the IOASIDs and
> > their bindings. Children who wish to use SVA have to call those APIs to
> > establish the bindings for themselves.
> > 
> > Therefore, if a host process allocates 10 IOASIDs and then does a
> > fork()/clone(), it cannot charge those 10 IOASIDs to the new cgroup,
> > i.e. the 10 IOASIDs stay with the process wherever it goes.
> > 
> > I feel this fits the domain model, true?
> 
> I still don't get where migration is coming into the picture. Who's
> migrating where?
> 
Sorry, perhaps I can explain by an example.

There are two cgroups, cg_A and cg_B, with the limit set to 20 for both.
Process1 is in cg_A. The initial state is:
cg_A/ioasid.current=0, cg_A/ioasid.max=20
cg_B/ioasid.current=0, cg_B/ioasid.max=20
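
For concreteness, a rough sketch of what this setup could look like through
the cgroup v2 filesystem (the ioasid.current/ioasid.max names follow the
example above; the mount point and paths are just for illustration):

  # mkdir /sys/fs/cgroup/cg_A /sys/fs/cgroup/cg_B
  # echo 20 > /sys/fs/cgroup/cg_A/ioasid.max
  # echo 20 > /sys/fs/cgroup/cg_B/ioasid.max
  # cat /sys/fs/cgroup/cg_A/ioasid.current
  0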

Now, consider the following steps:

1. Process1 allocates 10 IOASIDs:
cg_A/ioasid.current=10,
cg_B/ioasid.current=0

2. Then we want to move/migrate Process1 to cg_B, so we need to uncharge 10
from cg_A and charge 10 to cg_B.

3. After the migration, I expect
cg_A/ioasid.current=0,
cg_B/ioasid.current=10

We don't enforce the limit during this organizational change since we can't
force-free IOASIDs, but any new allocations will be subject to the limit set
in ioasid.max.
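
In cgroup v2 terms, step 2 is just a write to cgroup.procs in the destination
group, and the charge is expected to follow the process. A sketch continuing
the setup above (the pid 1234 for Process1 is made up):

  # echo 1234 > /sys/fs/cgroup/cg_B/cgroup.procs
  # cat /sys/fs/cgroup/cg_A/ioasid.current
  0
  # cat /sys/fs/cgroup/cg_B/ioasid.current
  10

So even if cg_B were already at its ioasid.max, the move itself would go
through; only the next IOASID allocation by Process1 would be rejected
against cg_B/ioasid.max.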

> Thanks.
> 


Thanks,

Jacob
