Message-ID: <20191202202535.GO84886@tassilo.jf.intel.com>
Date: Mon, 2 Dec 2019 12:25:35 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
acme@...nel.org, tglx@...utronix.de, bp@...en8.de,
linux-kernel@...r.kernel.org, eranian@...gle.com,
alexey.budankov@...ux.intel.com, vitaly.slobodskoy@...el.com
Subject: Re: [RFC PATCH 3/8] perf: Init/fini PMU specific data
Looks reasonable to me.
> //get current number of threads
> read_lock(&tasklist_lock);
> for_each_process_thread(g, p)
>         num_thread++;
> read_unlock(&tasklist_lock);
I'm sure we have that count somewhere.
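FWIW the global nr_threads counter (kernel/fork.c) should be it. A
sketch, assuming a slightly stale read is tolerable here because the
new_task fixup below catches threads created in between:

        /* Reuse the existing global thread count instead of walking
         * the tasklist; nr_threads is declared in
         * include/linux/sched/stat.h and maintained in copy_process()
         * and __unhash_process().
         */
        num_thread = nr_threads;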
>
> //allocate the space for them
> for (i = 0; i < num_thread; i++)
>         data[i] = kzalloc(ctx_size, flags);
> i = 0;
>
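kzalloc() can fail there, by the way. A minimal error-path sketch,
assuming data[] itself came from kcalloc(num_thread, sizeof(*data),
GFP_KERNEL):

        for (i = 0; i < num_thread; i++) {
                data[i] = kzalloc(ctx_size, flags);
                if (!data[i]) {
                        /* Free what was allocated so far, then the
                         * pointer array, and bail out.
                         */
                        while (i--)
                                kfree(data[i]);
                        kfree(data);
                        return -ENOMEM;
                }
        }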
> /*
> * Assign the space to tasks
> * There may be some new threads created when we allocate space.
> * new_task will track its number.
> */
> raw_spin_lock_irqsave(&task_data_events_lock, flags);
>
> if (atomic_inc_return(&nr_task_data_events) > 1)
>         goto unlock;
>
> for_each_process_thread(g, p) {
>         if (i < num_thread)
>                 p->perf_ctx_data = data[i++];
>         else
>                 new_task++;
> }
> raw_spin_unlock_irqrestore(&task_data_events_lock, flags);
Is that lock taken in the context switch?
If not, it could be a normal spinlock, which would be more RT friendly.
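I.e. something like this sketch, assuming the data is only ever touched
from process context (spinlock_t becomes a sleeping lock on PREEMPT_RT,
which is fine outside the context-switch path):

        static DEFINE_SPINLOCK(task_data_events_lock);

        spin_lock_irqsave(&task_data_events_lock, flags);
        /* ... assign perf_ctx_data as in the loop above ... */
        spin_unlock_irqrestore(&task_data_events_lock, flags);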
-Andi