Message-ID: <alpine.DEB.2.20.1701191855440.5358@nanos>
Date:   Thu, 19 Jan 2017 19:03:04 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Stephane Eranian <eranian@...gle.com>
cc:     Shivappa Vikas <vikas.shivappa@...el.com>,
        Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
        David Carrillo Cisneros <davidcc@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        "Shankar, Ravi V" <ravi.v.shankar@...el.com>,
        "Luck, Tony" <tony.luck@...el.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        "Kleen, Andi" <andi.kleen@...el.com>, h.peter.anvin@...el.com
Subject: Re: [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes

On Wed, 18 Jan 2017, Stephane Eranian wrote:
> On Wed, Jan 18, 2017 at 12:53 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> >

> Your use case is specific to HPC and not the Web workloads we run.  Jobs run
> in cgroups which may span all the CPUs of the machine.  CAT may be used
> to partition the cache. Cgroups would run inside a partition.  There may
> be multiple cgroups running in the same partition. I can understand the
> value of tracking occupancy per CLOSID, however that granularity is not
> enough for our use case.  Inside a partition, we want to know the
> occupancy of each cgroup to be able to assign blame to the top
> consumer. Thus, there needs to be a way to monitor occupancy per
> cgroup. I'd like to understand how your proposal would cover this use
> case.

The point I'm making, as I explained to David, is that we need to start
from the allocation angle. Of course you can monitor different tasks or
task groups inside an allocation.
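
A rough sketch of what that looks like with the intel_cqm PMU as it exists
today (not part of this patch set; the sysfs path and the config=1
llc_occupancy encoding are assumptions based on the current driver):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Read the dynamically assigned PMU type of the intel_cqm event source. */
static int cqm_pmu_type(void)
{
	FILE *f = fopen("/sys/bus/event_source/devices/intel_cqm/type", "r");
	int type = -1;

	if (f) {
		if (fscanf(f, "%d", &type) != 1)
			type = -1;
		fclose(f);
	}
	return type;
}

/* Open an llc_occupancy event on @pid; an RMID is assigned on creation. */
static int open_llc_occupancy_task(pid_t pid)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = cqm_pmu_type();	/* intel_cqm PMU */
	attr.config = 1;		/* llc_occupancy (assumed encoding) */

	/* Task mode: follow @pid on all CPUs. */
	return syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
}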

> Another important aspect is that CQM measures new allocations, thus to
> get total occupancy you need to be able to monitor the thread, CPU,
> CLOSID or cgroup from the beginning of execution. In the case of a
> cgroup, that means from the moment the first thread is scheduled into
> the cgroup. To do this an RMID needs to be assigned to the monitored
> entity from the beginning.  This could be done by creating a CQM event
> just to cause an RMID to be assigned, as discussed earlier in this
> thread. And then if a perf stat is launched later it will get the same
> RMID and report full occupancy. But that requires the first event to
> remain alive, i.e., some process must keep the file descriptor open,
> which means a daemon or a perf stat running in the background.

That's fine, but there must be a less convoluted way to do that. The
currently proposed stuff is simply horrible because it lacks any form of
design and is just hacked into submission.
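
For reference, a sketch of the workaround described above, under the same
assumptions as the previous snippet: a daemon opens a cgroup-mode
llc_occupancy event as soon as the cgroup exists and never closes the fd,
so the RMID stays assigned and a later perf stat reports full occupancy.
Cgroup-mode perf events are per CPU, so full coverage needs one such event
per online CPU; the helper only opens a single one.

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/*
 * @cgroup_fd: open fd on the cgroup directory in the perf_event hierarchy.
 * The returned fd must stay open (held by the daemon) for the RMID to
 * remain assigned to the cgroup.
 */
static int open_llc_occupancy_cgroup(int cgroup_fd, int cpu, int pmu_type)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = pmu_type;		/* intel_cqm PMU type from sysfs */
	attr.config = 1;		/* llc_occupancy (assumed encoding) */

	/* Cgroup mode: pid carries the cgroup fd and a real CPU is required. */
	return syscall(__NR_perf_event_open, &attr, cgroup_fd, cpu, -1,
		       PERF_FLAG_PID_CGROUP);
}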

> There are also use cases where you want CQM without necessarily enabling
> CAT, for instance, if you want to know the cache footprint of a workload
> to estimate whether it could be co-located with others.

That's a subset of the other stuff because it's all bound to CLOSID 0. So
you can again monitor tasks or task groups separately.
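
Reading the occupancy back is the same with or without CAT configured;
without CAT everything sits in CLOSID 0 anyway. A minimal sketch, assuming
an event fd opened as in the snippets above (the raw count is in PMU units;
perf tooling scales it by the llc_occupancy.scale advertised in sysfs):

#include <stdint.h>
#include <unistd.h>

/* Read the current raw occupancy count from an llc_occupancy event fd. */
static int read_llc_occupancy(int event_fd, uint64_t *count)
{
	ssize_t n = read(event_fd, count, sizeof(*count));

	return n == (ssize_t)sizeof(*count) ? 0 : -1;
}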

Thanks,

	tglx
