Date:   Tue, 13 Sep 2016 13:18:17 -0500
From:   Nilay Vaish <nilayvaish@...il.com>
To:     Fenghua Yu <fenghua.yu@...el.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <h.peter.anvin@...el.com>,
        Ingo Molnar <mingo@...e.hu>, Tony Luck <tony.luck@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Tejun Heo <tj@...nel.org>, Borislav Petkov <bp@...e.de>,
        Stephane Eranian <eranian@...gle.com>,
        Marcelo Tosatti <mtosatti@...hat.com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        Shaohua Li <shli@...com>,
        Ravi V Shankar <ravi.v.shankar@...el.com>,
        Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
        Sai Prakhya <sai.praneeth.prakhya@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>
Subject: Re: [PATCH v2 11/33] x86/intel_rdt: Hot cpu support for Cache Allocation

On 8 September 2016 at 04:57, Fenghua Yu <fenghua.yu@...el.com> wrote:
> diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
> index 9f30492..4537658 100644
> --- a/arch/x86/kernel/cpu/intel_rdt.c
> +++ b/arch/x86/kernel/cpu/intel_rdt.c
> @@ -141,6 +145,80 @@ static inline bool rdt_cpumask_update(int cpu)
>         return false;
>  }
>
> +/*
> + * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
> + * which are one per CLOSid on the current package.
> + */
> +static void cbm_update_msrs(void *dummy)
> +{
> +       int maxid = boot_cpu_data.x86_cache_max_closid;
> +       struct rdt_remote_data info;
> +       unsigned int i;
> +
> +       for (i = 0; i < maxid; i++) {
> +               if (cctable[i].clos_refcnt) {
> +                       info.msr = CBM_FROM_INDEX(i);
> +                       info.val = cctable[i].cbm;
> +                       msr_cpu_update(&info);
> +               }
> +       }
> +}
> +
> +static int intel_rdt_online_cpu(unsigned int cpu)
> +{
> +       struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
> +
> +       state->closid = 0;
> +       mutex_lock(&rdtgroup_mutex);
> +       /* The cpu is set in root rdtgroup after online. */
> +       cpumask_set_cpu(cpu, &root_rdtgrp->cpu_mask);
> +       per_cpu(cpu_rdtgroup, cpu) = root_rdtgrp;
> +       /*
> +        * If the cpu is first time found and set in its siblings that
> +        * share the same cache, update the CBM MSRs for the cache.
> +        */

I find the comment above slightly hard to parse.  Would the
following read better: "If the cpu is the first one found and set
amongst its siblings that share the same cache, ..."?

> +       if (rdt_cpumask_update(cpu))
> +               smp_call_function_single(cpu, cbm_update_msrs, NULL, 1);
> +       mutex_unlock(&rdtgroup_mutex);
> +}
> +
> +static int clear_rdtgroup_cpumask(unsigned int cpu)
> +{
> +       struct list_head *l;
> +       struct rdtgroup *r;
> +
> +       list_for_each(l, &rdtgroup_lists) {
> +               r = list_entry(l, struct rdtgroup, rdtgroup_list);
> +               if (cpumask_test_cpu(cpu, &r->cpu_mask)) {
> +                       cpumask_clear_cpu(cpu, &r->cpu_mask);
> +                       return 0;
> +               }
> +       }
> +
> +       return -EINVAL;
> +}
> +
> +static int intel_rdt_offline_cpu(unsigned int cpu)
> +{
> +       int i;
> +
> +       mutex_lock(&rdtgroup_mutex);
> +       if (!cpumask_test_and_clear_cpu(cpu, &rdt_cpumask)) {
> +               mutex_unlock(&rdtgroup_mutex);
> +               return;
> +       }
> +
> +       cpumask_and(&tmp_cpumask, topology_core_cpumask(cpu), cpu_online_mask);
> +       cpumask_clear_cpu(cpu, &tmp_cpumask);
> +       i = cpumask_any(&tmp_cpumask);
> +
> +       if (i < nr_cpu_ids)
> +               cpumask_set_cpu(i, &rdt_cpumask);
> +
> +       clear_rdtgroup_cpumask(cpu);
> +       mutex_unlock(&rdtgroup_mutex);
> +}
> +

Just for my info, why do we not need to update the MSRs when a cpu goes offline?
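For reference, the ownership-handoff part of the quoted
intel_rdt_offline_cpu() can be exercised in user space with plain
bitmasks.  The helpers below are simplified stand-ins for the kernel's
cpumask API (not the real functions), just to sketch the logic: when
the package's representative cpu in rdt_cpumask goes away, another
online sibling on the same package takes over, if one exists.

```c
#include <stdint.h>

/* Hypothetical stand-in for cpumask_t: one bit per cpu, up to 64 cpus. */
typedef uint64_t cpumask_t;

static int mask_test_and_clear(cpumask_t *m, int cpu)
{
	int was_set = (*m >> cpu) & 1;

	*m &= ~(1ULL << cpu);
	return was_set;
}

/*
 * Return the lowest set cpu, or -1 if the mask is empty (analogous to
 * cpumask_any() returning a value >= nr_cpu_ids).
 */
static int mask_any(cpumask_t m)
{
	int cpu;

	for (cpu = 0; cpu < 64; cpu++)
		if ((m >> cpu) & 1)
			return cpu;
	return -1;
}

/*
 * Mimics the offline path from the patch: if the departing cpu was the
 * package's representative in rdt_cpumask, hand ownership to another
 * online sibling on the same package, if any remain.
 */
static void offline_cpu(cpumask_t *rdt_cpumask, cpumask_t package_mask,
			cpumask_t online_mask, int cpu)
{
	cpumask_t tmp;
	int next;

	if (!mask_test_and_clear(rdt_cpumask, cpu))
		return;	/* cpu was not the package owner; nothing to do */

	tmp = package_mask & online_mask;
	mask_test_and_clear(&tmp, cpu);

	next = mask_any(tmp);
	if (next >= 0)
		*rdt_cpumask |= 1ULL << next;
}
```

This is only a sketch of the cpumask bookkeeping; the real code also
drops the cpu from its rdtgroup and, as the question above asks, leaves
the per-package MSRs untouched.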



Thanks
Nilay
