Message-ID: <20151120141549.GA8797@amt.cnet>
Date: Fri, 20 Nov 2015 12:15:49 -0200
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
Luiz Capitulino <lcapitulino@...hat.com>,
Vikas Shivappa <vikas.shivappa@...el.com>,
Tejun Heo <tj@...nel.org>, Yu Fenghua <fenghua.yu@...el.com>
Subject: Re: [RFD] CAT user space interface revisited
On Thu, Nov 19, 2015 at 09:35:34AM +0100, Thomas Gleixner wrote:
> On Wed, 18 Nov 2015, Marcelo Tosatti wrote:
> > On Wed, Nov 18, 2015 at 08:34:07PM -0200, Marcelo Tosatti wrote:
> > > On Wed, Nov 18, 2015 at 07:25:03PM +0100, Thomas Gleixner wrote:
> > > > Assume that you have isolated a CPU and run your important task on
> > > > it. You give that task a slice of cache. Now that task needs kernel
> > > > services which run in kernel threads on that CPU. We really don't want
> > > > to (and cannot) hunt down random kernel threads (think cpu bound
> > > > worker threads, softirq threads ....) and give them another slice of
> > > > cache. What we really want is:
> > > >
> > > > 1 1 1 1 0 0 0 0 <- Default cache
> > > > 0 0 0 0 1 1 1 0 <- Cache for important task
> > > > 0 0 0 0 0 0 0 1 <- Cache for CPU of important task
> > > >
> > > > It would even be sufficient for particular use cases to just associate
> > > > a piece of cache to a given CPU and do not bother with tasks at all.
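(For concreteness, the three masks above would be programmed into the
L3 CAT mask MSRs roughly as below; a minimal sketch, with wrmsrl() as
in the kernel and MSR numbering per the SDM; the COS numbering is an
assumption for illustration:)

	#define MSR_IA32_L3_QOS_MASK(cos)	(0xC90 + (cos))

	wrmsrl(MSR_IA32_L3_QOS_MASK(0), 0xF0);	/* 11110000: default      */
	wrmsrl(MSR_IA32_L3_QOS_MASK(1), 0x0E);	/* 00001110: important task */
	wrmsrl(MSR_IA32_L3_QOS_MASK(2), 0x01);	/* 00000001: CPU of task   */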
> >
> > Well, any work done on behalf of the important task should have its
> > cache protected as well (for example, irq handling threads).
>
> Right, but that's nothing you can do automatically and certainly not
> from a random application.
>
> > But for certain kernel tasks for which L3 cache is not beneficial
> > (e.g. kernel samepage merging), it might be useful to exclude such
> > tasks from the "important, do not flush" L3 cache portion.
>
> Sure it might be useful, but this needs to be done on a case by case
> basis and there is no way to do this in any automated way.
>
> > > > It's hard. Policies are hard by definition, but this one is harder
> > > > than most other policies due to the inherent limitations.
> >
> > That is exactly why software should be allowed to configure the
> > policies automatically.
>
> There is nothing you can do automatically.
Every cacheline brought into the L3 has a reaccess time (the interval
from when it was first brought in to when it is reaccessed).
Assume you have a single-threaded app, i.e. a sequence of cacheline
accesses.
Now if there are groups of accesses with long reaccess times (meaning
that keeping those lines in L3 is not beneficial), and those groups are
large enough to justify the cost of notifying the OS, the application
can ask the OS to switch it to a constrained COSid (so that its L3
misses reclaim only from that small portion of the L3 cache).
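From the application side this could look roughly like the sketch
below; cache_hint(), its flag values, and the touch_once()/struct item
helpers are all hypothetical, just to illustrate the notification:

	/* Hypothetical interface, for illustration only. */
	#define CACHE_HINT_CONSTRAINED	1	/* long-reaccess phase */
	#define CACHE_HINT_DEFAULT	0	/* back to the normal COSid */

	static void process_stream(struct item *items, size_t n)
	{
		size_t i;

		/* Reaccess times here are long, so L3 residency buys
		 * nothing: run under the constrained COSid. */
		cache_hint(CACHE_HINT_CONSTRAINED);
		for (i = 0; i < n; i++)
			touch_once(&items[i]);
		cache_hint(CACHE_HINT_DEFAULT);
	}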
> If you want to allow
> applications to set the policies themself, then you need to assign a
> portion of the bitmask space and a portion of the cos id space to that
> application and then let it do with that space what it wants.
That's why you should specify the requirements independently of each
other (the requirements in this case being the size and type of the
reservation, which are tied to the application), and let something else
figure out how they all fit together.
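I.e. something like the following per-application descriptor (the
struct and its field names are hypothetical, for illustration):

	/* Hypothetical request: the app states only what it needs,
	 * not which cache ways it gets. */
	struct cache_reservation {
		unsigned long	kbytes;	/* size of L3 reservation wanted */
		int		type;	/* CACHE_RSVT_TYPE_CODE / _DATA */
	};

The arbiter (kernel or a userspace daemon) then packs all outstanding
requests into the available CBM bits and COSids.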
> That's where cgroups come into play. But that does not solve the other
> issues of "global" configuration, i.e. CPU defaults etc.
I don't understand what you mean by issues of global configuration.
CPU defaults: a task is associated with a COSid, and a COSid points to
a set of CBMs (one CBM per socket). What defaults are you talking about?
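To be explicit about the data model I mean (a sketch; the struct and
NR_SOCKETS are made-up names, the MSR details are per the SDM):

	/* One COSid maps to one CBM per socket. */
	struct cos_entry {
		u64	cbm[NR_SOCKETS];	/* capacity bitmask per socket */
	};

	/*
	 * On context switch the scheduler writes the incoming task's
	 * COSid into IA32_PQR_ASSOC (MSR 0xC8F, CLOSID in bits 63:32):
	 */
	wrmsrl(MSR_IA32_PQR_ASSOC, (u64)task_cosid << 32);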
But the interfaces do not exclude each other: the ioctl or syscall
interface and the manual direct-MSR interface can coexist. There is
time pressure to integrate something workable for the present use cases
(none of which are in the class "applications set reservations
themselves").
Peter has some objections to ioctls. So, for something workable, we'll
have to handle the numbered issues pointed out in the other e-mail
(2, 3, 4) in userspace.