Date:	Fri, 26 Feb 2016 22:24:50 -0800
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Russell King <linux@....linux.org.uk>,
	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	linux-api <linux-api@...r.kernel.org>,
	Paul Turner <pjt@...gle.com>, Andrew Hunter <ahh@...gle.com>,
	Andy Lutomirski <luto@...capital.net>,
	Andi Kleen <andi@...stfloor.org>,
	Dave Watson <davejwatson@...com>, Chris Lameter <cl@...ux.com>,
	Ben Maurer <bmaurer@...com>, rostedt <rostedt@...dmis.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Josh Triplett <josh@...htriplett.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will.deacon@....com>,
	Michael Kerrisk <mtk.manpages@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v4 1/5] getcpu_cache system call: cache CPU number of
 running thread

On 02/26/16 16:40, Mathieu Desnoyers wrote:
>>
>> I think it would be a good idea to make this a general pointer for the kernel to
>> be able to write per thread state to user space, which obviously can't be done
>> with the vDSO.
>>
>> This means the libc per thread startup should query the kernel for the size of
>> this structure and allocate thread local data accordingly.  We can then grow
>> this structure if needed without making the ABI even more complex.
>>
>> This is more than a system call: this is an entirely new way for userspace to
>> interact with the kernel.  Therefore we should make it a general facility.
>
> I'm really glad to see I'm not the only one seeing potential for
> genericity here. :-) This is exactly what I had in mind
> last year when proposing the thread_local_abi() system call:
> a generic way to register an extensible per-thread data structure
> so the kernel can communicate with user-space and vice-versa.
>
> Rather than having the libc query the kernel for size of the structure,
> I would recommend that libc tells the kernel the size of the thread-local
> ABI structure it supports. The idea here is that both the kernel and libc
> need to know about the fields in that structure to allow a two-way
> interaction. Fields known only by either the kernel or userspace
> are useless for a given thread anyway. This way, libc could statically
> define the structure.

Big fat NOPE there.  Why?  Because it means that EVERY interaction with 
this memory, no matter how critical, needs to be conditionalized. 
Furthermore, userspace != libc.  Applications or higher-layer libraries 
might have more information than the running libc about additional 
fields, but with your proposal libc would gate them.

As far as the kernel providing the size in the structure (alone) -- I 
*really* hope you can see what is wrong with that!!  That doesn't mean 
we can't provide it in the structure as well, and that too might avoid 
the skipped libc problem.
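[For concreteness, a minimal sketch of "provide it in the structure as well": the kernel writes the size it actually maintains into the structure itself, so an application newer than its libc can still detect fields the libc never knew about. All field and helper names below are hypothetical, not a real ABI.]

```c
#include <stddef.h>
#include <stdint.h>

struct thread_local_abi {
	uint32_t klen;    /* kernel-written: bytes the kernel maintains */
	uint32_t cpu_id;  /* e.g. cached CPU number of the running thread */
	/* Future fields are appended here; an older kernel simply
	 * reports a smaller klen. */
};

/* A field is usable only if it lies entirely within klen,
 * regardless of what size the registering libc knew about. */
static inline int tla_field_available(const struct thread_local_abi *t,
				      size_t offset, size_t size)
{
	return offset + size <= t->klen;
}
```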

> I would be tempted to also add "features" flags, so both user-space
> and the kernel could tell each other what they support: user-space
> would announce the set of features it supports, and it could also
> query the kernel for the set of supported features. One simple approach
> would be to use a uint64_t as type for those feature flags, and
> reserve the last bit for extending to future flags if we ever have
> more than 64.
>
> Thoughts ?

It doesn't seem like it would hurt, although the size of the flags field 
could end up being an issue.
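[For concreteness, the quoted flags proposal could look like the sketch below: each side advertises a uint64_t of feature bits, with the top bit reserved to signal that further flag words follow if we ever exceed 64 features. The names and bit assignments are purely illustrative.]

```c
#include <stdint.h>

/* Hypothetical feature bits; only bit 63's meaning is fixed. */
#define TLA_FEATURE_CPU_ID   (UINT64_C(1) << 0)  /* cpu_id field supported */
#define TLA_FEATURE_EXTENDED (UINT64_C(1) << 63) /* more flag words follow */

/* A feature is active for a thread only if both the kernel and
 * userspace advertise it, matching the two-way negotiation in
 * the proposal above. */
static inline uint64_t tla_common_features(uint64_t kernel_feat,
					   uint64_t user_feat)
{
	return kernel_feat & user_feat;
}
```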

	-hpa
