Message-ID: <alpine.DEB.2.10.1708141412490.3984@vshiva-Udesk>
Date: Mon, 14 Aug 2017 14:17:42 -0700 (PDT)
From: Shivappa Vikas <vikas.shivappa@...el.com>
To: Shivappa Vikas <vikas.shivappa@...el.com>
cc: Thomas Gleixner <tglx@...utronix.de>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
x86@...nel.org, linux-kernel@...r.kernel.org, hpa@...or.com,
peterz@...radead.org, ravi.v.shankar@...el.com,
tony.luck@...el.com, fenghua.yu@...el.com, eranian@...gle.com,
davidcc@...gle.com, ak@...ux.intel.com,
sai.praneeth.prakhya@...el.com
Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing
On Mon, 14 Aug 2017, Shivappa Vikas wrote:
>
>
> On Mon, 14 Aug 2017, Thomas Gleixner wrote:
>
>> On Wed, 9 Aug 2017, Vikas Shivappa wrote:
>>
>>> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource
>>> *r, struct rdt_domain *d)
>>> GFP_KERNEL);
>>> if (!d->rmid_busy_llc)
>>> return -ENOMEM;
>>> + INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
>>> + if (has_busy_rmid(r, d))
>>> + cqm_setup_limbo_handler(d);
>>
>> This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
>> would a bit be set here?
>
> If we logically offline all cpus in a package and bring it back, the worker 
> needs to be scheduled on the package if there were busy RMIDs on this 
> package. Otherwise that RMID never gets freed as its rmid->busy stays at 1.
>
> I needed to scan the limbo list and set the bits for all limbo RMIDs after 
> the alloc and before doing the 'has_busy_rmid' check. Will fix.
Tony pointed out that there is no guarantee that a domain will come back up once 
it is down, so the above issue of rmid->busy staying at > 0 can still happen. 
So I will delete this -
if (has_busy_rmid(r, d))
cqm_setup_limbo_handler(d);
and add this when a domain is powered down -
	for each rmid in d->rmid_busy_llc
		if (!--entry->busy)
			free_rmid(rmid);
We have no way to know whether the L3 was actually flushed (or the package 
powered off). This may lead to incorrect counts in rare scenarios, but we can 
document that.
Thanks,
vikas