Message-ID: <alpine.DEB.2.20.1707021523450.2296@nanos>
Date:   Sun, 2 Jul 2017 15:37:26 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Vikas Shivappa <vikas.shivappa@...ux.intel.com>
cc:     x86@...nel.org, linux-kernel@...r.kernel.org, hpa@...or.com,
        peterz@...radead.org, ravi.v.shankar@...el.com,
        vikas.shivappa@...el.com, tony.luck@...el.com,
        fenghua.yu@...el.com, andi.kleen@...el.com
Subject: Re: [PATCH 17/21] x86/intel_rdt/cqm: Add sched_in support

On Mon, 26 Jun 2017, Vikas Shivappa wrote:
>  DECLARE_PER_CPU(struct intel_pqr_state, pqr_state);
>  DECLARE_PER_CPU_READ_MOSTLY(int, cpu_closid);
> +DECLARE_PER_CPU_READ_MOSTLY(int, cpu_rmid);
>  DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
> +DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
> +DECLARE_STATIC_KEY_FALSE(rdt_enable_key);

Please make this a two-stage change: add rdt_enable_key first and then the
monitoring stuff. Ideally, introduce rdt_enable_key here and in the control
code in one go, as sketched below.
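
A minimal sketch of what the first stage could look like, assuming the
existing intel_rdt_sched_in() wrapper from the quoted hunk is switched over
to the combined key before any monitoring state is added (the exact split
is an assumption, left to the patch author):

	/*
	 * Hypothetical stage 1: only the combined rdt_enable_key is
	 * introduced and wired into the scheduling hook. The monitoring
	 * key and cpu_rmid come in a follow-up patch.
	 */
	DECLARE_PER_CPU(struct intel_pqr_state, pqr_state);
	DECLARE_PER_CPU_READ_MOSTLY(int, cpu_closid);
	DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
	DECLARE_STATIC_KEY_FALSE(rdt_enable_key);

	static inline void intel_rdt_sched_in(void)
	{
		/* Hot path: one static key check covers alloc and, later, mon. */
		if (static_branch_likely(&rdt_enable_key))
			__intel_rdt_sched_in();
	}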

> +static void __intel_rdt_sched_in(void)
>  {
> -	if (static_branch_likely(&rdt_alloc_enable_key)) {
> -		struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
> -		int closid;
> +	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
> +	u32 closid = 0;
> +	u32 rmid = 0;
>  
> +	if (static_branch_likely(&rdt_alloc_enable_key)) {
>  		/*
>  		 * If this task has a closid assigned, use it.
>  		 * Else use the closid assigned to this cpu.
> @@ -55,14 +59,31 @@ static inline void intel_rdt_sched_in(void)
>  		closid = current->closid;
>  		if (closid == 0)
>  			closid = this_cpu_read(cpu_closid);
> +	}
> +
> +	if (static_branch_likely(&rdt_mon_enable_key)) {
> +		/*
> +		 * If this task has a rmid assigned, use it.
> +		 * Else use the rmid assigned to this cpu.
> +		 */
> +		rmid = current->rmid;
> +		if (rmid == 0)
> +			rmid = this_cpu_read(cpu_rmid);
> +	}
>  
> -		if (closid != state->closid) {
> -			state->closid = closid;
> -			wrmsr(IA32_PQR_ASSOC, state->rmid, closid);
> -		}
> +	if (closid != state->closid || rmid != state->rmid) {
> +		state->closid = closid;
> +		state->rmid = rmid;
> +		wrmsr(IA32_PQR_ASSOC, rmid, closid);

This can be written smarter.

	struct intel_pqr_state newstate = this_cpu_read(rdt_cpu_default);
	struct intel_pqr_state *curstate = this_cpu_ptr(&pqr_state);

	if (static_branch_likely(&rdt_alloc_enable_key)) {
		if (current->closid)
			newstate.closid = current->closid;
	}

	if (static_branch_likely(&rdt_mon_enable_key)) {
		if (current->rmid)
			newstate.rmid = current->rmid;
	}

	if (newstate.closid != curstate->closid ||
	    newstate.rmid != curstate->rmid) {
		*curstate = newstate;
		wrmsr(IA32_PQR_ASSOC, newstate.rmid, newstate.closid);
	}

The unconditional read of rdt_cpu_default is the right thing to do, because
that is exactly the default behaviour; a sketch of how that per-CPU default
could be kept up to date follows below.
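
For context, a hedged sketch of the per-CPU default assumed above. The
rdt_cpu_default variable and the rdt_update_cpu_default() helper are
illustrative assumptions, not existing code: the idea is that the rdtgroup
code writes the CPU's assigned CLOSID/RMID whenever a CPU moves between
resource groups, so the sched_in path never needs a fallback branch.

	/* Assumed per-CPU default; closid/rmid 0 means the default group. */
	DEFINE_PER_CPU(struct intel_pqr_state, rdt_cpu_default);

	/*
	 * Hypothetical helper, called when a CPU is (re)assigned to a
	 * resource group, so __intel_rdt_sched_in() can read the default
	 * unconditionally.
	 */
	static void rdt_update_cpu_default(int cpu, u32 closid, u32 rmid)
	{
		struct intel_pqr_state *def = &per_cpu(rdt_cpu_default, cpu);

		def->closid = closid;
		def->rmid = rmid;
	}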

Thanks,

	tglx
