Message-ID: <CAPM31RKHZ0CxEzNm4k2RqxO_u4OOX0PPkjAdazoyfEMPyWKP7Q@mail.gmail.com>
Date:	Wed, 31 Jul 2013 14:46:31 -0700
From:	Paul Turner <pjt@...gle.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, jmario@...hat.com,
	dzickus@...hat.com, hpa@...or.com
Subject: Re: [PATCH] sched,x86: optimize switch_mm for multi-threaded workloads

On Wed, Jul 31, 2013 at 2:43 PM, Rik van Riel <riel@...hat.com> wrote:
> Don Zickus and Joe Mario have been working on improvements to
> perf, and noticed heavy cache line contention on the mm_cpumask
> while running linpack on a 60 core / 120 thread system.
>
> The cause turned out to be unnecessary atomic accesses to the
> mm_cpumask. When in lazy TLB mode, a CPU is only removed from
> the mm_cpumask if there is a TLB flush event.
>
> Most of the time no such TLB flush happens, and the kernel
> skips the TLB reload.  It can then also skip the atomic
> test-and-set of the bit in memory (a standalone sketch of the
> two access patterns follows below, after the patch).
>
> Here is a summary of Joe's test results:
>
>  * The __schedule function dropped from 24% of all program cycles down
>    to 5.5%.
>  * The cacheline contention/hotness for accesses to that bitmask dropped
>    from 1st/2nd hottest down to the 84th hottest (0.3% of all shared
>    misses, which is now quite cold).
>  * The average load latency for the bit-test-and-set instruction in
>    __schedule dropped from 10k-15k cycles to 600 cycles.
>  * The linpack program results improved from 133 GFlops to 144 GFlops.
>    Peak GFlops rose from 133 to 153.
>
> Reported-by: Don Zickus <dzickus@...hat.com>
> Reported-by: Joe Mario <jmario@...hat.com>
> Tested-by: Joe Mario <jmario@...hat.com>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> ---
>  arch/x86/include/asm/mmu_context.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index cdbf367..987eb3d 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -59,11 +59,12 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>                 this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
>                 BUG_ON(this_cpu_read(cpu_tlbstate.active_mm) != next);
>
> -               if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next))) {
> +               if (!cpumask_test_cpu(cpu, mm_cpumask(next))) {
>                         /* We were in lazy tlb mode and leave_mm disabled
>                          * tlb flush IPI delivery. We must reload CR3
>                          * to make sure to use no freed page tables.
>                          */
> +                       cpumask_set_cpu(cpu, mm_cpumask(next));
>                         load_cr3(next->pgd);
>                         load_LDT_nolock(&next->context);
>                 }
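
To see why this helps: cpumask_test_and_set_cpu() is a locked
read-modify-write, so every context switch pulled the mm_cpumask cache
line into exclusive state on the switching CPU even when the bit was
already set.  A plain test leaves the line in shared state on the
common path; only the rare lazy-TLB reload case pays for the atomic
set.  Here is a minimal user-space sketch of the two patterns
(illustrative only: a single word stands in for the cpumask, the names
are made up, and the interrupt/IPI ordering the kernel relies on is
not modeled):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned long mask;      /* stand-in for one mm_cpumask word */

/* Old pattern: unconditional atomic RMW; takes the cache line
 * exclusive on every call, even when the bit is already set. */
static bool test_and_set_bit(int bit)
{
        unsigned long b = 1UL << bit;

        return atomic_fetch_or(&mask, b) & b;
}

/* New pattern: plain read on the hot path; the atomic set runs only
 * when the bit was found clear (the rare lazy-TLB reload case). */
static bool test_then_set_bit(int bit)
{
        unsigned long b = 1UL << bit;

        if (atomic_load_explicit(&mask, memory_order_relaxed) & b)
                return true;    /* hot path: no RMW, line stays shared */
        atomic_fetch_or(&mask, b);
        return false;
}

int main(void)
{
        /* Only the first call pays for the atomic set; prints "0 1 1". */
        printf("%d %d %d\n", test_then_set_bit(3),
               test_then_set_bit(3), test_and_set_bit(3));
        return 0;
}

This is the same test-then-set shape as the patch above.  Whether the
plain read is safe in switch_mm depends on guarantees the sketch does
not model (interrupts disabled across the switch, leave_mm being the
path that clears the bit).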

We're carrying the *exact* same patch for the *exact* same reason.  I've
been meaning to send it out but wasn't sure of a good external
workload to demonstrate it with.

Reviewed-by: Paul Turner <pjt@...gle.com>
