Message-ID: <alpine.DEB.2.21.1902231916540.1666@nanos.tec.linutronix.de>
Date: Sat, 23 Feb 2019 19:17:39 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Aubrey Li <aubrey.li@...ux.intel.com>
cc: mingo@...hat.com, peterz@...radead.org, hpa@...or.com,
ak@...ux.intel.com, tim.c.chen@...ux.intel.com,
dave.hansen@...el.com, arjan@...ux.intel.com, aubrey.li@...el.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v12 3/3] Documentation/filesystems/proc.txt: add
AVX512_elapsed_ms
On Sat, 23 Feb 2019, Thomas Gleixner wrote:
> On Thu, 21 Feb 2019, Aubrey Li wrote:
> Something like this instead of this conglomerate of useful, irrelevant and
> misleading information:
>
> The AVX512_elapsed_ms entry shows the milliseconds elapsed since the last
> time AVX512 usage was recorded. The recording happens on a best effort
> basis when a task is scheduled out. This means that the value depends on
> two factors:
>
> 1) The time which the task spent on the CPU without being scheduled
> out. With CPU isolation and a single runnable task this can take
> several seconds.
>
> 2) The time since the task was scheduled out last. Depending on the
> reason for being scheduled out (time slice exhausted, syscall ...)
> this can be an arbitrarily long time.
>
> As a consequence the value cannot be considered precise and authoritative
> information. The application which uses this information has to be aware
> of the overall scenario on the system in order to determine whether a
> task is a real AVX512 user or not.
>
> See? No jiffies, no code snippets, no absolute numbers and no magic
> recommendation which might be correct for your test scenario, but
> completely bogus for some other scenario.
>
> Instead it contains the things which an application programmer who wants to
> use that value needs to know. He then has to map it to his scenario and
> build the crystal ball logic which makes it perhaps useful.
And of course the special value -1 needs to be documented as well....
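Not something that belongs in the proc.txt text itself, but to make the
caveats above concrete, here is a minimal sketch of what such crystal ball
logic on the consumer side could look like. It assumes the field is exported
as "AVX512_elapsed_ms:" in /proc/<pid>/arch_status as in this series, treats
-1 as "AVX512 usage never recorded" (the assumed meaning of the special
value mentioned above), and uses a 100 ms threshold purely as an arbitrary
illustration, not as a recommendation:

/*
 * Sketch of a userspace consumer of AVX512_elapsed_ms.
 *
 * Assumptions (not taken from this mail): the field lives in
 * /proc/<pid>/arch_status, -1 means "never recorded", and 100 ms is
 * just an example threshold. Any real application has to pick its own
 * threshold based on the scheduling caveats described above.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char path[64], line[128];
	long elapsed_ms = -2;	/* sentinel: field not found */
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/arch_status", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}

	/* Scan for the AVX512_elapsed_ms line and parse its value */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "AVX512_elapsed_ms: %ld", &elapsed_ms) == 1)
			break;
	}
	fclose(f);

	if (elapsed_ms == -2)
		printf("no AVX512_elapsed_ms entry found\n");
	else if (elapsed_ms == -1)
		printf("AVX512 usage never recorded for this task\n");
	else if (elapsed_ms < 100)
		printf("task looks like a recent AVX512 user (%ld ms)\n",
		       elapsed_ms);
	else
		printf("no recent AVX512 usage recorded (%ld ms)\n",
		       elapsed_ms);
	return 0;
}

Whatever threshold a real application picks, it still has to account for
tasks that stay on an isolated CPU for seconds without being scheduled out.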
Thanks,
tglx