Message-ID: <977bf7ee-deb3-0c9f-def5-7bb04d2e78ae@linux.intel.com>
Date:   Thu, 8 Nov 2018 22:03:06 +0800
From:   "Li, Aubrey" <aubrey.li@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>
Cc:     Aubrey Li <aubrey.li@...el.com>, tglx@...utronix.de,
        mingo@...hat.com, hpa@...or.com, ak@...ux.intel.com,
        tim.c.chen@...ux.intel.com, arjan@...ux.intel.com,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v1 2/2] proc: add /proc/<pid>/thread_state

On 2018/11/8 18:17, Peter Zijlstra wrote:
> On Thu, Nov 08, 2018 at 07:32:46AM +0100, Ingo Molnar wrote:
>>
>> * Aubrey Li <aubrey.li@...el.com> wrote:
>>
>>> Expose the per-task cpu specific thread state value, it's helpful
>>> for userland to classify and schedule the tasks by different policies
>>
>> That's pretty vague - what exactly would use this information? I'm sure 
>> you have a usecase in mind - could you please describe it?
> 
> Yeah, "thread_state" is a pretty terrible name for this.

task_struct has a CPU-specific element, "thread", so I borrowed that name
here to create a generic interface that exposes CPU-specific state of the
task. Like /proc/<pid>/stat, I plan to use "thread_state" to host any
CPU-specific state: AVX state for now, and other states may come soon. So
this interface may be extended in the future like the following:

# cat /proc/<pid>/thread_state
1 0 0
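
For example, userspace could consume the file like this (just a sketch
against the proposed three-field format; everything beyond the first
field is a placeholder for future state):

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	FILE *fp;
	int avx = 0, f1 = 0, f2 = 0;

	if (argc < 2)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/thread_state", argv[1]);
	fp = fopen(path, "r");
	if (!fp) {
		perror("fopen");
		return 1;
	}
	/* First field is the AVX state; the other two are reserved. */
	if (fscanf(fp, "%d %d %d", &avx, &f1, &f2) < 1)
		avx = 0;
	fclose(fp);
	printf("pid %s: avx=%d\n", argv[1], avx);
	return 0;
}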


> 
> The use-case is
> detecting which tasks use AVX3 such that a userspace component (think
> job scheduler) can cluster them together.
> 
> The 'problem' is that running AVX2+ code drops the max clock, because
> you light up the massive wide (and thus large area) paths.

Thanks for explaining this.
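
To make the use-case concrete: a job scheduler could pin tasks that
report AVX use onto a dedicated set of CPUs, so the clock drop stays
contained there. A rough sketch (the 0-3 vs. 4-7 CPU split is a made-up
policy, not part of this patch):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

static void cluster_task(pid_t pid)
{
	char path[64];
	cpu_set_t set;
	FILE *fp;
	int avx = 0, cpu;

	snprintf(path, sizeof(path), "/proc/%d/thread_state", (int)pid);
	fp = fopen(path, "r");
	if (!fp)
		return;
	if (fscanf(fp, "%d", &avx) != 1)	/* first field: AVX state */
		avx = 0;
	fclose(fp);

	CPU_ZERO(&set);
	/* AVX tasks share CPUs 0-3, everything else runs on 4-7. */
	for (cpu = avx ? 0 : 4; cpu < (avx ? 4 : 8); cpu++)
		CPU_SET(cpu, &set);
	sched_setaffinity(pid, sizeof(set), &set);
}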

> 
> So maybe something like "simd_active" is a better name, dunno.

As above, maybe the specific name can be hidden behind the generic
thread_state interface?

> 
> Or maybe "simd_class" and we can write out 0,1,2,3 depending on the AVX
> class being used, dunno. It might make sense to look at what other arch
> SIMD stuff looks like to form this interface.
> 
A task may use classes 1, 2, and 3 simultaneously. As a scheduler hint
it only needs to say "cluster or not", so a 0/1 flag may be good enough.
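
In other words, even if the kernel reported per-class usage, the hint
would collapse to one bit on the consumer side, e.g.:

/* Any wide-SIMD class in use -> cluster the task. Illustrative only. */
static inline int needs_avx_cluster(int avx1, int avx2, int avx3)
{
	return avx1 || avx2 || avx3;
}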

Thanks,
-Aubrey
