Message-ID: <55A8F4BF.3020902@huawei.com>
Date: Fri, 17 Jul 2015 20:27:43 +0800
From: "Wangnan (F)" <wangnan0@...wei.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: kaixu xia <xiakaixu@...wei.com>, <ast@...mgrid.com>,
<davem@...emloft.net>, <acme@...nel.org>, <mingo@...hat.com>,
<masami.hiramatsu.pt@...achi.com>, <jolsa@...nel.org>,
<linux-kernel@...r.kernel.org>, <pi3orama@....com>,
<hekuang@...wei.com>
Subject: Re: [RFC PATCH 5/6] bpf: Implement function bpf_read_pmu() that gets
the selected hardware PMU counter
On 2015/7/17 20:18, Peter Zijlstra wrote:
> On Fri, Jul 17, 2015 at 08:01:07PM +0800, Wangnan (F) wrote:
>>
>> On 2015/7/17 19:56, Peter Zijlstra wrote:
>>> On Fri, Jul 17, 2015 at 01:55:05PM +0200, Peter Zijlstra wrote:
>>>> On Fri, Jul 17, 2015 at 07:45:02PM +0800, Wangnan (F) wrote:
>>>>
>>>>>> Depends on what all you need, if you need full perf events to work then
>>>>>> yes perf_event_read_value() is your only option.
>>>>>>
>>>>>> But note that that requires scheduling, so you cannot actually use it
>>>>>> for tracing purposes etc..
>>>>> What do you mean by "full perf events"? Even with your code, do some
>>>>> events still not work?
>>>> The code I posted only works for events that do not have inherit set.
>>>> And only works from IRQ/NMI context for events that monitor the current
>>>> task or the current CPU (although that needs a little extra code still).
>>>>
>>>> Anything else and it does not work (correctly).
>>> Scratch that from NMI, for that to work we need more magic still.
>> The scheduling you mentioned is caused by
>>
>> mutex_lock(&event->child_mutex)
>>
>> right?
>>
>> What about replacing it with mutex_trylock() and simply returning an error
>> if the read comes from a BPF program?
> That is vile and unreliable.
>
> I think you really want to put very strict limits on what kind of events
> you accept, or create the events yourself.
>
I think we can enforce that limitation inside the BPF helper. What about
this: the event must be running on the current CPU or must belong to the
current process; if not, bpf_read_pmu() simply returns an error.
With the current design this is easy to implement, and users can still
control which events are readable through the BPF map.
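
To make that concrete, here is a minimal sketch of the check I have in mind
(only an illustration, not the actual patch: it assumes the helper has
already looked the struct perf_event * up from the map, and
bpf_event_is_local() is just a name I made up for this mail):

	/* Hypothetical helper: is this event readable without sleeping? */
	static bool bpf_event_is_local(struct perf_event *event)
	{
		/* counting on the CPU we are currently running on */
		if (event->oncpu == smp_processor_id())
			return true;
		/* or attached to the task that triggered the BPF program */
		if (event->ctx && event->ctx->task == current)
			return true;
		return false;
	}

	static u64 bpf_read_pmu(struct perf_event *event)
	{
		if (!bpf_event_is_local(event))
			return (u64)-EINVAL;

		/* refresh the hardware counter if it is active right here */
		if (event->state == PERF_EVENT_STATE_ACTIVE &&
		    event->oncpu == smp_processor_id())
			event->pmu->read(event);

		return local64_read(&event->count);
	}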
But what if we really want cross-CPU PMU access? Is that impossible?
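As far as I understand, when the perf core itself reads a counter that is
currently active on another CPU, it sends an IPI to that CPU and waits for
the result, roughly like the (heavily simplified) sketch below; that waiting
is exactly what we cannot do from a BPF program running in tracing or NMI
context. read_remote_event() and read_counter_cross_cpu() are illustrative
names, not real kernel functions:

	/* runs via IPI on the CPU that owns the event */
	static void read_remote_event(void *info)
	{
		struct perf_event *event = info;

		event->pmu->read(event);
	}

	static u64 read_counter_cross_cpu(struct perf_event *event)
	{
		if (event->state == PERF_EVENT_STATE_ACTIVE)
			/* waits for the remote CPU: not usable from NMI/tracing context */
			smp_call_function_single(event->oncpu,
						 read_remote_event, event, 1);

		return local64_read(&event->count);
	}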
Thank you.