Message-ID: <8612523d-f035-b2aa-28f5-e4122ef59901@linux.intel.com>
Date:   Mon, 2 Dec 2019 15:13:33 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Andi Kleen <ak@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, acme@...nel.org, tglx@...utronix.de,
        bp@...en8.de, linux-kernel@...r.kernel.org, eranian@...gle.com,
        alexey.budankov@...ux.intel.com, vitaly.slobodskoy@...el.com
Subject: Re: [RFC PATCH 3/8] perf: Init/fini PMU specific data



On 12/2/2019 2:15 PM, Andi Kleen wrote:
> On Mon, Dec 02, 2019 at 05:21:52PM +0100, Peter Zijlstra wrote:
>> On Mon, Dec 02, 2019 at 06:59:57AM -0800, Andi Kleen wrote:
>>>>
>>>> This is atrocious crap. Also it is completely broken for -RT.
>>>
>>> Well, can you please suggest how you would implement it instead?
>>
>> I don't think that is on me; at best I get to explain why it is
> 
> Normally code review is expected to be constructive.
> 
>> completely unacceptable to have O(nr_tasks) and allocations under a
>> raw_spinlock_t, but I was thinking you'd already know that.
> 
> Ok, if that's the only problem, then a lock breaker + a retry
> if rescheduling is needed + some limit against livelock
> should be sufficient.
> 
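
(If I read the lock breaker idea right, it would be roughly the sketch
below; the need_resched() check and the MAX_RETRIES limit are my
assumptions, not existing code.)

retry = 0;
again:
raw_spin_lock_irqsave(&task_data_events_lock, flags);
for_each_process_thread(g, p) {
	/* ... per-task work ... */
	if (need_resched() && retry++ < MAX_RETRIES) {
		/* Break the lock; MAX_RETRIES bounds the livelock risk */
		raw_spin_unlock_irqrestore(&task_data_events_lock, flags);
		cond_resched();
		goto again;
	}
}
raw_spin_unlock_irqrestore(&task_data_events_lock, flags);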

OK. I will move the allocations out of the critical section.
Here is some pseudocode.

if (atomic_read(&nr_task_data_events))
	return;

/* Get the current number of threads */
num_thread = 0;
read_lock(&tasklist_lock);
for_each_process_thread(g, p)
	num_thread++;
read_unlock(&tasklist_lock);

/* Allocate the space for them outside of any critical section */
for (i = 0; i < num_thread; i++)
	data[i] = kzalloc(ctx_size, GFP_KERNEL);
i = 0;

/*
 * Assign the space to the tasks.
 * Some new threads may be created while we allocate the space.
 * new_task tracks their number.
 */
raw_spin_lock_irqsave(&task_data_events_lock, flags);

if (atomic_inc_return(&nr_task_data_events) > 1)
	goto unlock;

new_task = 0;
for_each_process_thread(g, p) {
	if (i < num_thread)
		p->perf_ctx_data = data[i++];
	else
		new_task++;
}
raw_spin_unlock_irqrestore(&task_data_events_lock, flags);

/* No thread was missed; free the surplus allocations below. */
if (!new_task)
	goto end;

/*
 * Try again to allocate the space for the tasks created while we
 * allocated the first batch.
 * We don't need to worry about tasks created after
 * atomic_inc_return(). They will be handled in perf_event_fork()
 * (see the sketch below). One retry is enough.
 */
for (i = 0; i < new_task; i++)
	data[i] = kzalloc(ctx_size, GFP_KERNEL);
i = 0;

raw_spin_lock_irqsave(&task_data_events_lock, flags);

for_each_process_thread(g, p) {
	if (p->perf_ctx_data)
		continue;
	if (i < new_task)
		p->perf_ctx_data = data[i++];
	else
		WARN_ON(1);
}
raw_spin_unlock_irqrestore(&task_data_events_lock, flags);
goto end;

unlock:
	raw_spin_unlock_irqrestore(&task_data_events_lock, flags);

end:
	free unused data[]
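
For the tasks created after atomic_inc_return(), the fork side could
be roughly like this (just a sketch, assuming ctx_size is visible
there; the real perf_event_fork() hunk may look different):

void perf_event_fork(struct task_struct *task)
{
	/* ... existing fork handling ... */

	/* A task created after we counted the threads gets its own space */
	if (atomic_read(&nr_task_data_events) && !task->perf_ctx_data)
		task->perf_ctx_data = kzalloc(ctx_size, GFP_KERNEL);
}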

Thanks,
Kan
