Message-ID: <alpine.DEB.2.11.1411232057470.6439@nanos>
Date: Sun, 23 Nov 2014 21:04:50 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Vikas Shivappa <vikas.shivappa@...el.com>
cc: Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...nel.org,
tj@...nel.org, matt.flemming@...el.com, will.auld@...el.com,
peterz@...radead.org
Subject: Re: [PATCH] x86: Intel Cache Allocation Technology support
On Fri, 21 Nov 2014, Vikas Shivappa wrote:
> On Fri, 21 Nov 2014, Thomas Gleixner wrote:
> > On Wed, 19 Nov 2014, Vikas Shivappa wrote:
> > > + rdmsr(IA32_PQR_ASSOC, l, h);
> >
> > Why on earth do we want to read an MSR on every context switch? What's
> > wrong with having
> >
> > DEFINE_PER_CPU(u64, cacheqe_pqr);
> >
> > u64 next_pqr, cur_pqr = this_cpu_read(cacheqe_pqr);
> >
> > cq = task_cacheqe(task);
> > next_pqr = cq ? cq->pqr : 0;
> >
> > if (next_pqr != cur_pqr) {
> > wrmsrl(IA32_PQR_ASSOC, next_pqr);
>
> When cqm (cache monitoring) and cqe are both running together, a blind
> write can corrupt the low 32 bits (the RMID); the rdmsr was a fix for that.
> If there are better alternatives at low cost, they are certainly acceptable.
Sure there are. If cache monitoring and this run together, then they
had better synchronize the cached PQR bits for the current CPU
instead of forcing useless MSR reads into the hot path.
What you provided is certainly unacceptable.
> > > +/* Create a new cacheqe cgroup.*/
> > > +static struct cgroup_subsys_state *
> > > +cqe_css_alloc(struct cgroup_subsys_state *parent_css)
> > > +{
> > > + struct cacheqe *parent = css_cacheqe(parent_css);
> > > + struct cacheqe *cq;
> > > +
> > > + /* This is the call before the feature is detected */
> >
> > Huch? We know at early boot that this is available. So why do you need
> > this? The first call to this should never happen before the whole
> > thing is initialized. If it happens, it's a design failure.
> >
> > > + if (!parent) {
> > > + root_cqe_group.clos = 0;
> > > + return &root_cqe_group.css;
> > > + }
> > > +
> > > + /* To check if cqe is enabled.*/
> > > + if (!cqe_genable)
> > > + return ERR_PTR(-ENODEV);
> >
> > So we first check for !parent and return &root_cqe_group.css and later
> > we return an error pointer? Really consistent behaviour.
>
> this is due to how the cgroup init works.
Sorry, I can't follow that argument. Why does the cgroup init enforce
this inconsistency? Why does cgroup init run BEFORE we detect the feature?
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/