Message-ID: <alpine.DEB.2.10.1508041209360.921@vshiva-Udesk>
Date:	Tue, 4 Aug 2015 19:21:52 -0700 (PDT)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Tejun Heo <tj@...nel.org>
cc:	Vikas Shivappa <vikas.shivappa@...el.com>,
	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	linux-kernel@...r.kernel.org, x86@...nel.org, hpa@...or.com,
	tglx@...utronix.de, mingo@...nel.org, peterz@...radead.org,
	Matt Fleming <matt.fleming@...el.com>,
	"Auld, Will" <will.auld@...el.com>,
	"Williamson, Glenn P" <glenn.p.williamson@...el.com>,
	"Juvva, Kanaka D" <kanaka.d.juvva@...el.com>
Subject: Re: [PATCH 5/9] x86/intel_rdt: Add new cgroup and Class of service
 management



On Tue, 4 Aug 2015, Tejun Heo wrote:

> Hello, Vikas.
>
> On Tue, Aug 04, 2015 at 11:50:16AM -0700, Vikas Shivappa wrote:
>> I will make this clearer in the documentation - we intend this cgroup
>> interface to be used by root or a superuser - more like a system
>> administrator who controls the allocation of cache to threads, the one
>> who knows the usage and is able to decide.
>
> I get that this would be an easier "bolt-on" solution but isn't a good
> solution by itself in the long term.  As I wrote multiple times
> before, this is a really bad programmable interface.  Unless you're
> sure that this doesn't have to be programmable for threads of an
> individual application,

Yes, this doesn't have to be a programmable interface for threads. It may not be
a good idea to let threads decide their own cache allocation through this direct
interface. We are transferring the decision-making responsibility to the system
administrator.

- This interface, like you said, can easily bolt on: basically an easy-to-use
interface that does not require worrying about the architectural details.
- But it still does the job: the root user can allocate exclusive or overlapping
cache lines to threads or groups of threads.
- There are no major roadblocks for usage, as we can make the allocations as
mentioned above while still keeping the hierarchy etc. and using it when needed.
- An important factor is that it can easily co-exist with other interfaces like
#2 and #3 below for the same feature, so I do not see a reason why we should not
use this. This is not meant to be a programmable interface, but that does not
prevent co-existence.
- If the root user also has to set the affinity of the threads he is allocating
cache for, he can do so using other cgroups like cpuset, or set the masks
separately using taskset. This lets him configure the cache allocation on a
per-socket basis (a rough sketch of the flow is below).
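
To make that flow concrete, here is a minimal sketch of what the root user
would do, assuming the controller shows up as a cgroup subsystem named
"intel_rdt" and exposes an L3 cache bitmask file; the file name
(intel_rdt.l3_cbm here) and the bitmask/cpu values are illustrative and depend
on the patch revision and the machine:

  # mount the (assumed) intel_rdt cgroup controller and create a group
  mkdir -p /sys/fs/cgroup/intel_rdt
  mount -t cgroup -o intel_rdt intel_rdt /sys/fs/cgroup/intel_rdt
  mkdir /sys/fs/cgroup/intel_rdt/high_prio

  # give the group a slice of L3 (file name and mask are illustrative)
  echo 0xf0 > /sys/fs/cgroup/intel_rdt/high_prio/intel_rdt.l3_cbm

  # move the threads of interest into the group
  PID=1234
  echo $PID > /sys/fs/cgroup/intel_rdt/high_prio/tasks

  # pin them to the cpus of the socket whose cache was configured
  taskset -cp 0-7 $PID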

> this is a pretty bad interface by itself.
>
>> There is already a lot of such usage among different enterprise users at
>> Intel/Google/Cisco etc. who have been testing the patches posted to lkml, and
>> academically there is plenty of usage as well.
>
> I mean, that's the tool you gave them.  Of course they'd be using it
> but I suspect most of them would do fine with a programmable interface
> too.  Again, please think of cpu affinity.

Whichever methodology is used to support the feature, it needs an
arbitrator/agent to decide the allocation. I see three options:

1. Let the root user or system administrator be the one who decides the
allocation based on current usage. We assume this to be someone with
administrative privileges, who could use the cgroup interface to perform the
task. One way to handle cpu affinity as well is to mount the cpuset and rdt
cgroups together (a sketch of such a co-mount follows after this list).

2. The kernel automatically assigns the cache based on the priority of the apps,
etc. This could be designed to co-exist with #1 above, much like the cpuset
cgroup co-exists with the kernel assigning cpus to tasks (each task could carry
a cache capacity mask, just like the cpu affinity mask).

3. A user-programmable interface, where say a resource management program (and
hence the apps) could link against a library which supports cache
allocation/monitoring and then try to control and monitor the resources. The
arbitrator could be the resource management program itself, or the kernel could
decide.
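
As a rough illustration of the co-mount mentioned in #1, assuming a cgroup-v1
style mount where cpuset and the intel_rdt controller share one hierarchy, so
the same group carries both the cpu placement and the cache allocation; again
the rdt file name and the values are illustrative:

  # co-mount cpuset and the assumed intel_rdt controller on one hierarchy
  mkdir -p /sys/fs/cgroup/cpuset_rdt
  mount -t cgroup -o cpuset,intel_rdt none /sys/fs/cgroup/cpuset_rdt
  mkdir /sys/fs/cgroup/cpuset_rdt/db

  # cpuset side: restrict the group to the cpus/memory node of socket 0
  echo 0-7 > /sys/fs/cgroup/cpuset_rdt/db/cpuset.cpus
  echo 0   > /sys/fs/cgroup/cpuset_rdt/db/cpuset.mems

  # rdt side: cache bitmask for the same group (file name is an assumption)
  echo 0x0f > /sys/fs/cgroup/cpuset_rdt/db/intel_rdt.l3_cbm

  # tasks added here get both the cpu affinity and the cache allocation
  PID=1234
  echo $PID > /sys/fs/cgroup/cpuset_rdt/db/tasks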

If users go with the programmable interface in #3, we need to make sure that
apps cannot simply allocate resources on their own without some interfacing
agent (in which case they could interface with #2?).

Do you see any issues with the user-programmable interface co-existing with the
cgroup interface?

Thanks,
Vikas

>
> Thanks.
>
> -- 
> tejun
>
