lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <1394156230.2555.19.camel@buesod1.americas.hpqcorp.net>
Date:	Thu, 06 Mar 2014 17:37:10 -0800
From:	Davidlohr Bueso <davidlohr@...com>
To:	Dave Hansen <dave@...1.net>
Cc:	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	ak@...ux.intel.com, kirill.shutemov@...ux.intel.com,
	mgorman@...e.de, alex.shi@...aro.org, x86@...nel.org,
	linux-mm@...ck.org, dave.hansen@...ux.intel.com
Subject: Re: [PATCH 5/7] x86: mm: new tunable for single vs full TLB flush

On Wed, 2014-03-05 at 16:45 -0800, Dave Hansen wrote:
> From: Dave Hansen <dave.hansen@...ux.intel.com>
> +
> +If you believe that invlpg is being called too often, you can
> +lower the tunable:
> +
> +	/sys/debug/kernel/x86/tlb_single_page_flush_ceiling
> +

Whenever this tunable needs to be updated, most users will not know what
an invlpg is and won't think in terms of pages either. How about making
this in units of KB instead? But then again, most of those users won't be
looking into TLB flushing issues anyway, so...
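If the units were KB, translating back to what the tunable actually counts
is a one-liner; a minimal sketch, assuming 4 KiB base pages and the debugfs
path proposed in the patch (both are assumptions, not part of this thread):

```shell
# Hypothetical helper: convert a ceiling given in KB into 4 KiB base
# pages, which is the unit the tunable actually counts.
ceiling_kb=132
pages=$((ceiling_kb / 4))
echo "$pages"
# Writing it back would need root and a mounted debugfs, e.g.:
#   echo "$pages" > /sys/debug/kernel/x86/tlb_single_page_flush_ceiling
```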

While obvious, it should also mention that this does not apply to
hugepages.

> +This will cause us to do the global flush for more cases.
> +Lowering it to 0 will disable the use of invlpg.
> +
> +You might see invlpg inside of flush_tlb_mm_range() show up in
> +profiles, or you can use the trace_tlb_flush() tracepoints to
> +determine how long the flush operations are taking.
> +
> +Essentially, you are balancing the cycles you spend doing invlpg
> +with the cycles that you spend refilling the TLB later.
> +
> +You can measure how expensive TLB refills are by using
> +performance counters and 'perf stat', like this:
> +
> +perf stat -e
> +	cpu/event=0x8,umask=0x84,name=dtlb_load_misses_walk_duration/,
> +	cpu/event=0x8,umask=0x82,name=dtlb_load_misses_walk_completed/,
> +	cpu/event=0x49,umask=0x4,name=dtlb_store_misses_walk_duration/,
> +	cpu/event=0x49,umask=0x2,name=dtlb_store_misses_walk_completed/,
> +	cpu/event=0x85,umask=0x4,name=itlb_misses_walk_duration/,
> +	cpu/event=0x85,umask=0x2,name=itlb_misses_walk_completed/
> +
> +That works on an IvyBridge-era CPU (i5-3320M).  Different CPUs
> +may have differently-named counters, but they should at least
> +be there in some form.  You can use pmu-tools 'ocperf list'
> +(https://github.com/andikleen/pmu-tools) to find the right
> +counters for a given CPU.
> +
> _
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

