Message-ID: <3d981134-24b0-c079-3b4a-7ffe434324d5@linux.intel.com>
Date:   Mon, 2 Dec 2019 15:44:34 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Andi Kleen <ak@...ux.intel.com>
Cc:     Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
        acme@...nel.org, tglx@...utronix.de, bp@...en8.de,
        linux-kernel@...r.kernel.org, eranian@...gle.com,
        alexey.budankov@...ux.intel.com, vitaly.slobodskoy@...el.com
Subject: Re: [RFC PATCH 3/8] perf: Init/fini PMU specific data



On 12/2/2019 3:25 PM, Andi Kleen wrote:
> 
> Looks reasonable to me.
> 
>> //get current number of threads
>> read_lock(&tasklist_lock);
>> for_each_process_thread(g, p)
>> 	num_thread++;
>> read_unlock(&tasklist_lock);
> 
> I'm sure we have that count somewhere.
>

It looks like we can get the number from the global variable "nr_threads".
I will use it in v2.

>>
>> //allocate the space for them
>> for (i = 0; i < num_thread; i++)
>> 	data[i] = kzalloc(ctx_size, flags);
>> i = 0;
>>
>> /*
>>   * Assign the space to tasks
>>   * There may be some new threads created when we allocate space.
>>   * new_task will track its number.
>>   */
>> raw_spin_lock_irqsave(&task_data_events_lock, flags);
>>
>> if (atomic_inc_return(&nr_task_data_events) > 1)
>> 	goto unlock;
>>
>> for_each_process_thread(g, p) {
>> 	if (i < num_thread)
>> 		p->perf_ctx_data = data[i++];
>> 	else
>> 		new_task++;
>> }
>> raw_spin_unlock_irqrestore(&task_data_events_lock, flags);
> 
> Is that lock taken in the context switch?
> If not, it could be a normal spinlock, thus more RT friendly.
> 

It's not taken in the context switch. I will use a normal spinlock instead.

Thanks,
Kan
