Date:   Mon, 14 Aug 2017 12:16:18 -0700 (PDT)
From:   Shivappa Vikas <vikas.shivappa@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>
cc:     Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
        vikas.shivappa@...el.com, x86@...nel.org,
        linux-kernel@...r.kernel.org, hpa@...or.com, peterz@...radead.org,
        ravi.v.shankar@...el.com, tony.luck@...el.com,
        fenghua.yu@...el.com, eranian@...gle.com, davidcc@...gle.com,
        ak@...ux.intel.com, sai.praneeth.prakhya@...el.com
Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing



On Mon, 14 Aug 2017, Thomas Gleixner wrote:

> On Wed, 9 Aug 2017, Vikas Shivappa wrote:
>
>> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
>>  					   GFP_KERNEL);
>>  		if (!d->rmid_busy_llc)
>>  			return -ENOMEM;
>> +		INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
>> +		if (has_busy_rmid(r, d))
>> +			cqm_setup_limbo_handler(d);
>
> This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
> would a bit be set here?

If we logically offline all cpus in a package and then bring it back, the worker 
needs to be scheduled on that package if there were busy RMIDs on it. Otherwise 
those RMIDs never get freed, as their rmid->busy stays at 1.

I need to scan the limbo list and set the bits for all limbo RMIDs after the 
alloc and before doing the 'has_busy_rmid' check. Will fix.
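
Something along these lines (rough sketch only - the limbo list walk and the 
entry/list names are illustrative, not final code):

	/*
	 * Sketch: after allocating rmid_busy_llc, mark every RMID that is
	 * still in limbo so the handler on this package can eventually free
	 * it, then kick the limbo worker if anything got set.
	 * (rmid_limbo_lru / struct rmid_entry names are illustrative.)
	 */
	struct rmid_entry *entry;

	list_for_each_entry(entry, &rmid_limbo_lru, list)
		set_bit(entry->rmid, d->rmid_busy_llc);

	INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
	if (has_busy_rmid(r, d))
		cqm_setup_limbo_handler(d);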

>
>>  	}
>>  	if (is_mbm_total_enabled()) {
>>  		tsize = sizeof(*d->mbm_total);
>> @@ -536,11 +539,25 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
>>  		list_del(&d->list);
>>  		if (is_mbm_enabled())
>>  			cancel_delayed_work(&d->mbm_over);
>> +
>> +		if (is_llc_occupancy_enabled() &&
>> +		    has_busy_rmid(r, d))
>
> What is that line break helping here and why can't you just unconditionally
> cancel the work?

Will fix the line break. The has_busy_rmid() check makes sure the worker was 
indeed scheduled, so we only cancel work that was actually queued.

>
>> +			cancel_delayed_work(&d->cqm_limbo);
>> +
>>  		kfree(d);
>> -	} else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
>> -		   cpu == d->mbm_work_cpu && is_mbm_enabled()) {
>> -		cancel_delayed_work(&d->mbm_over);
>> -		mbm_setup_overflow_handler(d);
>> +		return;
>> +	}
>> +
>> +	if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
>> +		if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
>> +			cancel_delayed_work(&d->mbm_over);
>> +			mbm_setup_overflow_handler(d);
>
> I think this is the wrong approach. If the timer is about to fire you
> essentially double the interval. So you better flush the work, which will
> reschedule it if needed.

Ok, will fix. We can flush the work here (set it up and run it immediately) on 
the new cpu.
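
Something like this at the hotplug site (sketch only - it assumes the setup 
helper grows a delay argument so the work can be run immediately on the new 
cpu, which it does not have in this series):

	if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
		/*
		 * Don't cancel + re-arm with the full interval (that would
		 * double the effective period if the timer was about to
		 * fire). Move the worker to another cpu in the domain and
		 * run it right away; it re-arms itself with the normal
		 * interval afterwards.
		 */
		cancel_delayed_work(&d->mbm_over);
		mbm_setup_overflow_handler(d, 0);	/* hypothetical delay arg */
	}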

>
>> +		}
>> +		if (is_llc_occupancy_enabled() && cpu == d->mbm_work_cpu &&
>
> That wants to be d->cbm_work_cpu, right?

Correct - thanks for pointing that out, will fix.

>
>> +		    has_busy_rmid(r, d)) {
>> +			cancel_delayed_work(&d->cqm_limbo);
>> +			cqm_setup_limbo_handler(d);
>
> See above.

For cqm the 1s interval is not a hard requirement, but we can flush the work 
like mbm to keep things uniform.
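
i.e. something along the same lines (same hypothetical delay argument, and the 
cqm work cpu field name is assumed):

	if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
	    has_busy_rmid(r, d)) {
		cancel_delayed_work(&d->cqm_limbo);
		cqm_setup_limbo_handler(d, 0);	/* run on another cpu right away */
	}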

Thanks,
Vikas

>
> Thanks,
>
> 	tglx
>
