Date:   Mon, 6 Mar 2023 11:33:07 +0000
From:   James Morse <james.morse@....com>
To:     Reinette Chatre <reinette.chatre@...el.com>, x86@...nel.org,
        linux-kernel@...r.kernel.org
Cc:     Fenghua Yu <fenghua.yu@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        H Peter Anvin <hpa@...or.com>,
        Babu Moger <Babu.Moger@....com>,
        shameerali.kolothum.thodi@...wei.com,
        D Scott Phillips OS <scott@...amperecomputing.com>,
        carl@...amperecomputing.com, lcherian@...vell.com,
        bobo.shaobowang@...wei.com, tan.shaopeng@...itsu.com,
        xingxin.hx@...nanolis.org, baolin.wang@...ux.alibaba.com,
        Jamie Iles <quic_jiles@...cinc.com>,
        Xin Hao <xhao@...ux.alibaba.com>, peternewman@...gle.com
Subject: Re: [PATCH v2 08/18] x86/resctrl: Queue mon_event_read() instead of
 sending an IPI

Hi Reinette,

On 02/02/2023 23:47, Reinette Chatre wrote:
> Hi James,
> 
> On 1/13/2023 9:54 AM, James Morse wrote:
>> x86 is blessed with an abundance of monitors, one per RMID, that can be
>> read from any CPU in the domain. MPAM's monitors reside in the MMIO MSC;
>> the number implemented is up to the manufacturer. This means that when there
>> are fewer monitors than needed, they need to be allocated and freed.
>>
>> Worse, the domain may be broken up into slices, and the MMIO accesses
>> for each slice may need performing from different CPUs.
>>
>> These two details mean MPAM's monitor code needs to be able to sleep, and to
>> IPI another CPU in the domain to read from a resource that has been sliced.
>>
>> mon_event_read() already invokes mon_event_count() via IPI, which means
>> this isn't possible.
>>
>> Change mon_event_read() to schedule mon_event_count() on a remote CPU and
>> wait, instead of sending an IPI. This function is only used in response to
>> a user-space filesystem request (not the timing sensitive overflow code).
>>
>> This allows MPAM to hide the slice behaviour from resctrl, and to keep
>> the monitor-allocation in monitor.c.

>> diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> index 1df0e3262bca..4ee3da6dced7 100644
>> --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>> @@ -532,8 +532,11 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
>>  		    struct rdt_domain *d, struct rdtgroup *rdtgrp,
>>  		    int evtid, int first)
>>  {
>> +	/* When picking a cpu from cpu_mask, ensure it can't race with cpuhp */
> 
> Please consistently use CPU instead of cpu.

(done)


>> +	lockdep_assert_held(&rdtgroup_mutex);
>> +
>>  	/*
>> -	 * setup the parameters to send to the IPI to read the data.
>> +	 * setup the parameters to pass to mon_event_count() to read the data.
>>  	 */
>>  	rr->rgrp = rdtgrp;
>>  	rr->evtid = evtid;
>> @@ -542,7 +545,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
>>  	rr->val = 0;
>>  	rr->first = first;
>>  
>> -	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
>> +	smp_call_on_cpu(cpumask_any(&d->cpu_mask), mon_event_count, rr, false);

> This would be problematic for use cases where single tasks are run on
> adaptive-tick CPUs. If an adaptive-tick CPU is chosen to run the function then
> it may never run. Real-time environments are a target use case of resctrl (with
> examples in the documentation).

Interesting. I can't find an IPI wakeup under smp_call_on_cpu() ... I wonder what else
this breaks!

Resctrl doesn't consider nohz-full CPUs when doing any of this work, or when setting up the
limbo or overflow timer work.

I think the right thing to do here is add some cpumask_any_housekeeping() helper to avoid
nohz-full CPUs where possible, and fall back to an IPI if all the CPUs in a domain are
nohz-full.
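
Roughly what I have in mind for the helper (untested, name and placement purely
illustrative):
------------%<------------
#include <linux/cpumask.h>
#include <linux/tick.h>

/*
 * Prefer a housekeeping CPU from @mask so that work queued on it actually
 * gets a chance to run. If every CPU in @mask is nohz_full, return one of
 * them anyway and let the caller fall back to an IPI.
 */
static unsigned int cpumask_any_housekeeping(const struct cpumask *mask)
{
	unsigned int cpu;

	for_each_cpu(cpu, mask) {
		if (!tick_nohz_full_cpu(cpu))
			return cpu;
	}

	return cpumask_any(mask);
}
------------%<------------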

Ideally cpumask_any() would do this, but it isn't possible without allocating memory.
If I can reproduce this problem, I'll propose adding the behaviour to
smp_call_function_any(), probably overloading 'wait' to return an error if all the target
CPUs are nohz-full.


This means those MPAM systems that need this can't use nohz_full, or at least need a
housekeeping CPU that can access each MSC. The driver already has to consider whether
interrupts are masked for this kind of access because of the prototype perf support,
which those machines wouldn't be able to support either.

(I'll look at hooking this up to the limbo/overflow code too)


Thanks,

James

untested and incomplete hunk of code:
------------%<------------
	cpu = get_cpu();
	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
		/* This CPU is in the domain: run the counter read locally. */
		smp_call_function_single(cpu, smp_mon_event_count, rr, true);
		put_cpu();
	} else {
		put_cpu();

		/*
		 * If all the CPUs available use nohz_full, fall back to an IPI.
		 * Some MPAM systems will be unable to use their counters in this case.
		 */
		cpu = cpumask_any_housekeeping(&d->cpu_mask);
		if (tick_nohz_full_cpu(cpu))
			smp_call_function_single(cpu, smp_mon_event_count, rr, true);
		else
			smp_call_on_cpu(cpu, mon_event_count, rr, false);
	}
------------%<------------
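
One loose end in the hunk above: smp_call_on_cpu() takes an int (*)(void *)
callback while smp_call_function_single() takes a void (*)(void *), so one of
the two paths needs a thin adapter around mon_event_count(). Assuming
mon_event_count() keeps its current void signature, the smp_call_on_cpu() leg
could use something like this (again untested, naming illustrative), with the
smp_call_function_single() legs then calling mon_event_count() directly:
------------%<------------
/* Adapter so mon_event_count() can be used with smp_call_on_cpu(). */
static int smp_mon_event_count(void *arg)
{
	mon_event_count(arg);

	return 0;
}
------------%<------------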
