Message-ID: <MW3PR12MB4553FFAA412FB741A73009A195FF9@MW3PR12MB4553.namprd12.prod.outlook.com>
Date:   Tue, 10 Jan 2023 02:23:00 +0000
From:   "Moger, Babu" <Babu.Moger@....com>
To:     Ashok Raj <ashok_raj@...ux.intel.com>
CC:     "corbet@....net" <corbet@....net>,
        "reinette.chatre@...el.com" <reinette.chatre@...el.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "fenghua.yu@...el.com" <fenghua.yu@...el.com>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
        "paulmck@...nel.org" <paulmck@...nel.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "quic_neeraju@...cinc.com" <quic_neeraju@...cinc.com>,
        "rdunlap@...radead.org" <rdunlap@...radead.org>,
        "damien.lemoal@...nsource.wdc.com" <damien.lemoal@...nsource.wdc.com>,
        "songmuchun@...edance.com" <songmuchun@...edance.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "jpoimboe@...nel.org" <jpoimboe@...nel.org>,
        "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "chang.seok.bae@...el.com" <chang.seok.bae@...el.com>,
        "pawan.kumar.gupta@...ux.intel.com" 
        <pawan.kumar.gupta@...ux.intel.com>,
        "jmattson@...gle.com" <jmattson@...gle.com>,
        "daniel.sneddon@...ux.intel.com" <daniel.sneddon@...ux.intel.com>,
        "Das1, Sandipan" <Sandipan.Das@....com>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "james.morse@....com" <james.morse@....com>,
        "linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "bagasdotme@...il.com" <bagasdotme@...il.com>,
        "eranian@...gle.com" <eranian@...gle.com>,
        "christophe.leroy@...roup.eu" <christophe.leroy@...roup.eu>,
        "jarkko@...nel.org" <jarkko@...nel.org>,
        "adrian.hunter@...el.com" <adrian.hunter@...el.com>,
        "quic_jiles@...cinc.com" <quic_jiles@...cinc.com>,
        "peternewman@...gle.com" <peternewman@...gle.com>,
        Ashok Raj <ashok.raj@...el.com>
Subject: RE: [PATCH v11 01/13] x86/resctrl: Replace smp_call_function_many()
 with on_each_cpu_mask()

[AMD Official Use Only - General]

Hi Ashok,

> -----Original Message-----
> From: Ashok Raj <ashok_raj@...ux.intel.com>
> Sent: Monday, January 9, 2023 5:27 PM
> To: Moger, Babu <Babu.Moger@....com>
> Cc: corbet@....net; reinette.chatre@...el.com; tglx@...utronix.de;
> mingo@...hat.com; bp@...en8.de; fenghua.yu@...el.com;
> dave.hansen@...ux.intel.com; x86@...nel.org; hpa@...or.com;
> paulmck@...nel.org; akpm@...ux-foundation.org; quic_neeraju@...cinc.com;
> rdunlap@...radead.org; damien.lemoal@...nsource.wdc.com;
> songmuchun@...edance.com; peterz@...radead.org; jpoimboe@...nel.org;
> pbonzini@...hat.com; chang.seok.bae@...el.com;
> pawan.kumar.gupta@...ux.intel.com; jmattson@...gle.com;
> daniel.sneddon@...ux.intel.com; Das1, Sandipan <Sandipan.Das@....com>;
> tony.luck@...el.com; james.morse@....com; linux-doc@...r.kernel.org;
> linux-kernel@...r.kernel.org; bagasdotme@...il.com; eranian@...gle.com;
> christophe.leroy@...roup.eu; jarkko@...nel.org; adrian.hunter@...el.com;
> quic_jiles@...cinc.com; peternewman@...gle.com; Ashok Raj
> <ashok.raj@...el.com>
> Subject: Re: [PATCH v11 01/13] x86/resctrl: Replace smp_call_function_many()
> with on_each_cpu_mask()
> 
> On Mon, Jan 09, 2023 at 10:43:53AM -0600, Babu Moger wrote:
> > on_each_cpu_mask() runs the function on each CPU specified by cpumask,
> > which may include the local processor.
> >
> > Replace smp_call_function_many() with on_each_cpu_mask() to simplify
> > the code.
> >
> > Reviewed-by: Reinette Chatre <reinette.chatre@...el.com>
> > Signed-off-by: Babu Moger <babu.moger@....com>
> > ---
> >  arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 11 +++------
> >  arch/x86/kernel/cpu/resctrl/rdtgroup.c    | 29 +++++++----------------
> >  2 files changed, 11 insertions(+), 29 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> > index 1df0e3262bca..7eece3d2d0c3 100644
> > --- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> > +++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
> > @@ -310,7 +310,6 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
> >  	enum resctrl_conf_type t;
> >  	cpumask_var_t cpu_mask;
> >  	struct rdt_domain *d;
> > -	int cpu;
> >  	u32 idx;
> >
> >  	if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
> > @@ -341,13 +340,9 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
> >
> >  	if (cpumask_empty(cpu_mask))
> >  		goto done;
> > -	cpu = get_cpu();
> > -	/* Update resource control msr on this CPU if it's in cpu_mask. */
> > -	if (cpumask_test_cpu(cpu, cpu_mask))
> > -		rdt_ctrl_update(&msr_param);
> > -	/* Update resource control msr on other CPUs. */
> > -	smp_call_function_many(cpu_mask, rdt_ctrl_update, &msr_param, 1);
> > -	put_cpu();
> > +
> > +	/* Update resource control msr on all the CPUs. */
> > +	on_each_cpu_mask(cpu_mask, rdt_ctrl_update, &msr_param, 1);
> 
> Do you require these updates to done immediately via an IPI? or can they be
> done bit lazy via schedule_on_each_cpu()?

I have not experimented with lazy scheduling. At the least, I know the update_cpu_closid_rmid call should complete immediately. Otherwise the result might be inconsistent, as tasks (or CPUs) could be running with two different closids/rmids before the update has reached all the CPUs in the domain.
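
For readers of the archive: below is a minimal sketch (not part of the patch) of the synchronous pattern on_each_cpu_mask() provides. example_ctrl_update(), example_update_domain() and the msr_param pointer are placeholders standing in for rdt_ctrl_update(), resctrl_arch_update_domains() and struct msr_param.

/*
 * Illustrative sketch only. The final "wait" argument is what gives
 * the immediate, consistent update described above.
 */
#include <linux/smp.h>
#include <linux/cpumask.h>

/* Callback: runs on each CPU in the mask, in IPI context, or directly
 * on the local CPU if it is part of the mask. */
static void example_ctrl_update(void *info)
{
	/* e.g. write the per-CPU resource control MSR based on *info */
}

static void example_update_domain(const struct cpumask *cpu_mask, void *msr_param)
{
	/*
	 * wait == 1: on_each_cpu_mask() does not return until
	 * example_ctrl_update() has completed on every CPU in cpu_mask.
	 * Once this returns, no CPU in the domain is still running with
	 * the old setting; a deferred approach would leave a window
	 * where CPUs in the same domain disagree.
	 */
	on_each_cpu_mask(cpu_mask, example_ctrl_update, msr_param, 1);
}

For comparison, schedule_on_each_cpu() takes only a work_func_t (no cpumask) and runs the work in process context on every online CPU, so it is a different tool than a targeted, synchronous IPI over a domain's CPUs.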
Thanks
Babu
