Date:	Wed, 18 Nov 2015 14:38:22 -0500
From:	Luiz Capitulino <lcapitulino@...hat.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Vikas Shivappa <vikas.shivappa@...el.com>,
	Tejun Heo <tj@...nel.org>, Yu Fenghua <fenghua.yu@...el.com>,
	will.auld@...el.com, donald.d.dugger@...el.com, riel@...hat.com
Subject: Re: [RFD] CAT user space interface revisited

On Wed, 18 Nov 2015 19:25:03 +0100 (CET)
Thomas Gleixner <tglx@...utronix.de> wrote:

> We really need to make this as configurable as possible from userspace
> without imposing random restrictions to it. I played around with it on
> my new Intel toy, and the restriction to 16 COS ids (that's 8 with CDP
> enabled) makes it really useless if we force the ids to have the same
> meaning on all sockets and restrict it to per-task partitioning.
> 
> Even if next-generation systems have more COS ids available,
> there are not going to be enough to have a system-wide consistent
> view unless we have COS ids > nr_cpus.
> 
> Aside from that, I don't think that a system-wide consistent view is
> useful at all.

This is a great writeup! I agree with everything you said.

> So now to the interface part. Unfortunately we need to expose this
> very close to the hardware implementation as there are really no
> abstractions which allow us to express the various bitmap
> combinations. Any abstraction I tried to come up with renders that
> thing completely useless.
> 
> I was not able to identify any existing infrastructure where this
> really fits in. I chose a directory/file-based representation. We
> certainly could do the same with a syscall, but that's just an
> implementation detail.
> 
> At top level:
> 
>    xxxxxxx/cat/max_cosids		<- Assume that all CPUs are the same
>    xxxxxxx/cat/max_maskbits		<- Assume that all CPUs are the same
>    xxxxxxx/cat/cdp_enable		<- Depends on CDP availability
> 
> Per socket data:
> 
>    xxxxxxx/cat/socket-0/
>    ...
>    xxxxxxx/cat/socket-N/l3_size
>    xxxxxxx/cat/socket-N/hwsharedbits
> 
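Just to check that I understand how a tool would consume the read-only
files above, here's a rough sketch (CAT_ROOT below is only my placeholder
for the "xxxxxxx" prefix, which is left open above; the value format is
a guess as well):

/* A rough sketch of how a tool might read the global and per-socket
 * info files above.  CAT_ROOT is only a placeholder for the "xxxxxxx"
 * prefix, which the proposal leaves open.
 */
#include <stdio.h>

#define CAT_ROOT "/sys/devices/system/cat"	/* placeholder, not decided */

/* Read a single integer from a file below CAT_ROOT. */
static int cat_read_long(const char *relpath, long *val)
{
        char path[256];
        FILE *f;
        int ok;

        snprintf(path, sizeof(path), "%s/%s", CAT_ROOT, relpath);
        f = fopen(path, "r");
        if (!f)
                return -1;
        ok = (fscanf(f, "%ld", val) == 1);
        fclose(f);
        return ok ? 0 : -1;
}

int main(void)
{
        long cosids, maskbits, cdp, l3_size;
        char relpath[64];
        int socket;

        if (cat_read_long("max_cosids", &cosids) ||
            cat_read_long("max_maskbits", &maskbits) ||
            cat_read_long("cdp_enable", &cdp))
                return 1;

        printf("cosids=%ld maskbits=%ld cdp=%ld\n", cosids, maskbits, cdp);

        /* Walk socket-0, socket-1, ... until a directory is missing. */
        for (socket = 0; ; socket++) {
                snprintf(relpath, sizeof(relpath), "socket-%d/l3_size", socket);
                if (cat_read_long(relpath, &l3_size))
                        break;
                printf("socket-%d: l3_size=%ld\n", socket, l3_size);
        }
        return 0;
}
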
> Per socket mask data:
> 
>    xxxxxxx/cat/socket-N/cos-id-0/
>    ...
>    xxxxxxx/cat/socket-N/cos-id-N/inuse
> 				/cat_mask	
> 				/cdp_mask	<- Data mask if CDP enabled
> 
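And a similarly hedged sketch of how claiming a COS id and programming
its mask could look. Both the paths and the value formats ("1", a hex
mask) are my guesses; the proposal only names the files:

/* Sketch of claiming a COS id on socket 0 and programming its bit mask. */
#include <stdio.h>

#define CAT_ROOT "/sys/devices/system/cat"	/* placeholder, not decided */

static int cat_write(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f) ? -1 : 0;
}

int main(void)
{
        char path[256];
        int socket = 0, cosid = 3;

        /* Mark the COS id as in use, then set its capacity bit mask. */
        snprintf(path, sizeof(path), "%s/socket-%d/cos-id-%d/inuse",
                 CAT_ROOT, socket, cosid);
        if (cat_write(path, "1"))
                return 1;

        snprintf(path, sizeof(path), "%s/socket-%d/cos-id-%d/cat_mask",
                 CAT_ROOT, socket, cosid);
        return cat_write(path, "0x0f") ? 1 : 0;
}
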
> Per cpu default cos id for the cpus on that socket:
> 
>    xxxxxxx/cat/socket-N/cpu-x/default_cosid
>    ...
>    xxxxxxx/cat/socket-N/cpu-N/default_cosid
> 
> The above allows a simple cpu based partitioning. All tasks which do
> not have a cache partition assigned on a particular socket use the
> default one of the cpu they are running on.
> 
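For the cpu based partitioning that would give something like the sketch
below, again with the mount point and value format assumed rather than
specified:

/* Sketch of the cpu based partitioning above: point the CPUs of socket 0
 * at an already configured COS id, so tasks without an explicit partition
 * pick it up.  Paths and value format are assumed, as before.
 */
#include <stdio.h>

#define CAT_ROOT "/sys/devices/system/cat"	/* placeholder, not decided */

int main(void)
{
        char path[256];
        int cpu;

        /* Give CPUs 0-3 on socket 0 COS id 3 as their default. */
        for (cpu = 0; cpu <= 3; cpu++) {
                FILE *f;

                snprintf(path, sizeof(path),
                         "%s/socket-0/cpu-%d/default_cosid", CAT_ROOT, cpu);
                f = fopen(path, "w");
                if (!f)
                        return 1;
                fprintf(f, "3");
                if (fclose(f))
                        return 1;
        }
        return 0;
}
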
> Now for the task(s) partitioning:
> 
>    xxxxxxx/cat/partitions/
> 
> Under that directory one can create partitions
> 
>    xxxxxxx/cat/partitions/p1/tasks
> 			    /socket-0/cosid
> 			    ...
> 			    /socket-n/cosid
> 
>    The default value for the per socket cosid is COSID_DEFAULT, which
>    causes the task(s) to use the per cpu default id.
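
For the task partitioning flow the usage could look like the sketch
below. Whether partitions are really created with a plain mkdir() is my
assumption; the layout above only shows the end result:

/* Sketch of the task partitioning flow: create a partition, attach the
 * current task to it and bind it to a COS id on socket 0.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define CAT_ROOT "/sys/devices/system/cat"	/* placeholder, not decided */

static int cat_write(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f) ? -1 : 0;
}

int main(void)
{
        char path[256], pid[32];

        snprintf(path, sizeof(path), "%s/partitions/p1", CAT_ROOT);
        if (mkdir(path, 0755) && errno != EEXIST)
                return 1;

        /* Move the current task into the partition. */
        snprintf(path, sizeof(path), "%s/partitions/p1/tasks", CAT_ROOT);
        snprintf(pid, sizeof(pid), "%d\n", (int)getpid());
        if (cat_write(path, pid))
                return 1;

        /* Bind the partition to COS id 3 on socket 0; the other sockets
         * stay at COSID_DEFAULT and use the per-cpu default. */
        snprintf(path, sizeof(path), "%s/partitions/p1/socket-0/cosid",
                 CAT_ROOT);
        return cat_write(path, "3") ? 1 : 0;
}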

I hope I've got all the details right, but this proposal looks awesome.
There are more people who seem to agree with something like this.

Btw, I think it should be possible to implement this with cgroups. But
I too don't care that much about cgroups vs. syscalls.
