Message-ID: <776c7bf0-d779-7d27-9e05-b46cd299813b@linux.intel.com>
Date:   Tue, 20 Aug 2019 10:52:57 -0400
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, acme@...nel.org, linux-kernel@...r.kernel.org,
        eranian@...gle.com, ak@...ux.intel.com
Subject: Re: [PATCH] perf/x86: Consider pinned events for group validation



On 8/20/2019 10:10 AM, Peter Zijlstra wrote:
> On Fri, Aug 16, 2019 at 10:49:10AM -0700, kan.liang@...ux.intel.com wrote:
>> From: Kan Liang <kan.liang@...ux.intel.com>
>>
>> perf stat -M metrics relies on weak groups to reject unschedulable
>> groups and run them as non-groups.
>> This uses the group validation code in the kernel. Unfortunately
>> that code doesn't take pinned events, such as the NMI watchdog, into
>> account. So some groups can pass validation, but then still never
>> get scheduled.
> 
> But if you first create the group and then a pinned event it 'works',
> which is inconsistent and makes all this timing dependent.

I don't think so. The pinned event will be validated by 
validate_event(), which doesn't simulate scheduling.
So validation still passes, but the group still never gets scheduled.
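
For reference, here is a condensed sketch of validate_event() as it 
lives in arch/x86/events/core.c (paraphrased, not a verbatim quote). 
It only asks the PMU for the single event's constraint and checks its 
weight; it never calls x86_pmu.schedule_events(), so no scheduling is 
simulated:

/*
 * Condensed sketch of validate_event() (paraphrased from
 * arch/x86/events/core.c). Note: only a per-event constraint
 * check against a fake cpuc; no schedule simulation at all.
 */
static int validate_event(struct perf_event *event)
{
	struct cpu_hw_events *fake_cpuc;
	struct event_constraint *c;
	int ret = 0;

	fake_cpuc = allocate_fake_cpuc();
	if (IS_ERR(fake_cpuc))
		return PTR_ERR(fake_cpuc);

	c = x86_pmu.get_event_constraints(fake_cpuc, -1, event);
	if (!c || !c->weight)
		ret = -EINVAL;

	if (x86_pmu.put_event_constraints)
		x86_pmu.put_event_constraints(fake_cpuc, event);

	free_fake_cpuc(fake_cpuc);
	return ret;
}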

> 
>> @@ -2011,9 +2011,11 @@ static int validate_event(struct perf_event *event)
>>    */
>>   static int validate_group(struct perf_event *event)
>>   {
>> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>>   	struct perf_event *leader = event->group_leader;
>>   	struct cpu_hw_events *fake_cpuc;
>> -	int ret = -EINVAL, n;
>> +	struct perf_event *pinned_event;
>> +	int ret = -EINVAL, n, i;
>>   
>>   	fake_cpuc = allocate_fake_cpuc();
>>   	if (IS_ERR(fake_cpuc))
>> @@ -2033,6 +2035,24 @@ static int validate_group(struct perf_event *event)
>>   	if (n < 0)
>>   		goto out;
>>   
>> +	/*
>> +	 * The new group must be schedulable
>> +	 * together with currently pinned events.
>> +	 * Otherwise, it will never get a chance
>> +	 * to be scheduled later.
> 
> That's wrapped short; also I don't think it is sufficient; what if you
> happen to have a pinned event on CPU1 (and not others) and happen to run
> validation for a new CPU1 event on CPUn ?
>

The patch doesn't handle that case; it is mentioned in the description.
The patch doesn't intend to catch every possible case that cannot be 
scheduled. I think it's impossible to catch them all.
We only want to improve validate_group() a little bit, to catch some 
common cases, e.g. the NMI watchdog interacting with a group.
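
For concreteness, the pinned event in the NMI watchdog case is the 
hard-lockup detector's cycles counter, created with an attribute 
roughly like the following (modeled on kernel/watchdog_hld.c; 
condensed and illustrative, not a verbatim copy):

/*
 * Sketch of the NMI watchdog's event attribute. The .pinned = 1
 * here is exactly what current group validation ignores.
 */
static struct perf_event_attr wd_hw_attr = {
	.type		= PERF_TYPE_HARDWARE,
	.config		= PERF_COUNT_HW_CPU_CYCLES,
	.size		= sizeof(struct perf_event_attr),
	.pinned		= 1,
	.disabled	= 1,
};

Because this event is pinned on each CPU while the watchdog is 
enabled, it permanently occupies a counter that a validated group can 
no longer use.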


> Also; per that same; it is broken, you're accessing the cpu-local cpuc
> without serialization.

Do you mean accessing all cpuc structures serially?
We only check the cpuc on the current CPU here. It isn't intended to 
access any other CPU's cpuc.
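
To make that concrete, here is a hypothetical sketch (not part of the 
patch) of the access pattern in question. Even disabling preemption 
around the walk only keeps us on one CPU; it does not serialize 
against NMIs or cross-CPU IPIs that install/remove events and update 
n_events/event_list underneath us:

/*
 * Hypothetical sketch: walk the local cpuc with preemption off.
 * This prevents migration mid-walk, but it does NOT lock out
 * interrupt-context updates to n_events/event_list.
 */
static int count_local_pinned_events(void)
{
	struct cpu_hw_events *cpuc;
	int i, n = 0;

	preempt_disable();
	cpuc = this_cpu_ptr(&cpu_hw_events);
	for (i = 0; i < cpuc->n_events; i++)
		if (cpuc->event_list[i]->attr.pinned)
			n++;
	preempt_enable();

	return n;
}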


Thanks,
Kan

> 
>> +	 */
>> +	for (i = 0; i < cpuc->n_events; i++) {
>> +		pinned_event = cpuc->event_list[i];
>> +		if (WARN_ON_ONCE(!pinned_event))
>> +			continue;
>> +		if (!pinned_event->attr.pinned)
>> +			continue;
>> +		fake_cpuc->n_events = n;
>> +		n = collect_events(fake_cpuc, pinned_event, false);
>> +		if (n < 0)
>> +			goto out;
>> +	}
>> +
>>   	fake_cpuc->n_events = 0;
>>   	ret = x86_pmu.schedule_events(fake_cpuc, n, NULL);
>>   
>> -- 
>> 2.7.4
>>
