Open Source and information security mailing list archives
 
Message-Id: <ded88b97-7bb7-429f-189b-a1047c032fdf@linux.vnet.ibm.com>
Date:   Thu, 8 Jun 2017 23:35:22 +0530
From:   Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Anju T Sudhakar <anju@...ux.vnet.ibm.com>
Cc:     mpe@...erman.id.au, linux-kernel@...r.kernel.org,
        linuxppc-dev@...ts.ozlabs.org, ego@...ux.vnet.ibm.com,
        bsingharora@...il.com, anton@...ba.org, sukadev@...ux.vnet.ibm.com,
        mikey@...ling.org, stewart@...ux.vnet.ibm.com, dja@...ens.net,
        eranian@...gle.com, hemant@...ux.vnet.ibm.com
Subject: Re: [PATCH v9 10/10] powerpc/perf: Thread imc cpuhotplug support



On Tuesday 06 June 2017 03:57 PM, Thomas Gleixner wrote:
> On Mon, 5 Jun 2017, Anju T Sudhakar wrote:
>>   static void thread_imc_mem_alloc(int cpu_id)
>>   {
>> -	u64 ldbar_addr, ldbar_value;
>>   	int phys_id = topology_physical_package_id(cpu_id);
>>   
>>   	per_cpu_add[cpu_id] = (u64)alloc_pages_exact_nid(phys_id,
>>   			(size_t)IMC_THREAD_COUNTER_MEM, GFP_KERNEL | __GFP_ZERO);
> It took me a while to understand that per_cpu_add is an array and not a new
> fangled per cpu function which adds something to a per cpu variable.
>
> So why is that storing the address as u64?

The value we need to load into the LDBAR SPR also has bits
other than the memory address, which is why we stored it as a u64
array. But this should be a per-cpu pointer. Will fix it.


>
> And why is this a NR_CPUS sized array instead of a regular per cpu variable?

Yes, will fix it.
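Rough sketch of the direction we have in mind (untested; `thread_imc_mem`
is a placeholder name, and the allocator now reports failure):

```c
/* Replace the NR_CPUS-sized u64 array with a regular per-cpu pointer
 * holding the allocation; the LDBAR value is computed at the use site. */
static DEFINE_PER_CPU(u64 *, thread_imc_mem);

static int thread_imc_mem_alloc(int cpu_id)
{
	int phys_id = topology_physical_package_id(cpu_id);
	u64 *addr;

	addr = alloc_pages_exact_nid(phys_id, (size_t)IMC_THREAD_COUNTER_MEM,
				     GFP_KERNEL | __GFP_ZERO);
	if (!addr)
		return -ENOMEM;
	per_cpu(thread_imc_mem, cpu_id) = addr;
	return 0;
}
```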

>> +}
>> +
>> +static void thread_imc_update_ldbar(unsigned int cpu_id)
>> +{
>> +	u64 ldbar_addr, ldbar_value;
>> +
>> +	if (per_cpu_add[cpu_id] == 0)
>> +		thread_imc_mem_alloc(cpu_id);
> So if that allocation fails then you happily proceed. Optmistic
> programming, right?

Will add a return value check.
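Something like this in thread_imc_update_ldbar() (sketch, assuming
thread_imc_mem_alloc() is changed to return an int, 0 on success):

```c
	if (per_cpu_add[cpu_id] == 0) {
		/* Bail out without programming LDBAR if the per-cpu
		 * counter memory could not be allocated. */
		if (thread_imc_mem_alloc(cpu_id))
			return;
	}
```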

>
>> +
>>   	ldbar_addr = (u64)virt_to_phys((void *)per_cpu_add[cpu_id]);
> Ah, it's stored as u64 because you have to convert it back to a void
> pointer at the place where it is actually used. Interesting approach.
>
>>   	ldbar_value = (ldbar_addr & (u64)THREAD_IMC_LDBAR_MASK) |
>> -		(u64)THREAD_IMC_ENABLE;
>> +			(u64)THREAD_IMC_ENABLE;
>>   	mtspr(SPRN_LDBAR, ldbar_value);
>>   }
>>   
>> @@ -442,6 +450,26 @@ static void core_imc_change_cpu_context(int old_cpu, int new_cpu)
>>   	perf_pmu_migrate_context(&core_imc_pmu->pmu, old_cpu, new_cpu);
>>   }
>>   
>> +static int ppc_thread_imc_cpu_online(unsigned int cpu)
>> +{
>> +	thread_imc_update_ldbar(cpu);
>> +	return 0;
>> +}
>> +
>> +static int ppc_thread_imc_cpu_offline(unsigned int cpu)
>> +{
>> +	mtspr(SPRN_LDBAR, 0);
>> +	return 0;
>> +}
>> +
>> +void thread_imc_cpu_init(void)
>> +{
>> +	cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE,
>> +			  "perf/powerpc/imc_thread:online",
>> +			  ppc_thread_imc_cpu_online,
>> +			  ppc_thread_imc_cpu_offline);
>> +}
>> +
>>   static int ppc_core_imc_cpu_online(unsigned int cpu)
>>   {
>>   	const struct cpumask *l_cpumask;
>> @@ -953,6 +981,9 @@ int __init init_imc_pmu(struct imc_events *events, int idx,
>>   		if (ret)
>>   			return ret;
>>   		break;
>> +	case IMC_DOMAIN_THREAD:
>> +		thread_imc_cpu_init();
>> +		break;
>>   	default:
>>   		return -1;  /* Unknown domain */
>>   	}
> Just a general observation. If anything in init_imc_pmu() fails, then all
> the hotplug callbacks are staying registered, at least I haven't seen
> anything undoing it.
Yes, we did not add code to unregister the callbacks on failure.
Will fix that in the next version.
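Rough shape of the cleanup we plan to add to init_imc_pmu() (sketch;
the later step that can fail is illustrative, not the actual code):

```c
	/* Any failure after the hotplug state was registered must
	 * tear it down again before propagating the error. */
	ret = some_later_init_step(pmu_ptr);	/* hypothetical */
	if (ret) {
		if (pmu_ptr->domain == IMC_DOMAIN_THREAD)
			cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_THREAD_IMC_ONLINE);
		return ret;
	}
```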

Thanks for the review.
Maddy

> Thanks,
>
> 	tglx
>
