Message-ID: <4FAB8169.7090809@intel.com>
Date:	Thu, 10 May 2012 16:50:49 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Borislav Petkov <bp@...64.org>
CC:	rob@...dley.net, tglx@...utronix.de, mingo@...hat.com,
	hpa@...or.com, arnd@...db.de, rostedt@...dmis.org,
	fweisbec@...il.com, jeremy@...p.org, gregkh@...uxfoundation.org,
	riel@...hat.com, luto@....edu, avi@...hat.com, len.brown@...el.com,
	dhowells@...hat.com, fenghua.yu@...el.com, ak@...ux.intel.com,
	cpw@....com, steiner@....com, akpm@...ux-foundation.org,
	penberg@...nel.org, hughd@...gle.com, rientjes@...gle.com,
	kosaki.motohiro@...fujitsu.com, n-horiguchi@...jp.nec.com,
	paul.gortmaker@...driver.com, trenn@...e.de, tj@...nel.org,
	oleg@...hat.com, axboe@...nel.dk, a.p.zijlstra@...llo.nl,
	kamezawa.hiroyu@...fujitsu.com, viro@...iv.linux.org.uk,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 3/7] x86/flush_tlb: try flush_tlb_single one by one
 in flush_tlb_range

> Ok, question:
> 
> we're comparing TLB size with the amount of pages mapped by this mm
> struct. AFAICT, that doesn't mean that all those mapped pages do have
> respective entries in the TLB, does it?
> 
> If so, then the actual entries number is kinda inaccurate, no? We don't
> really know how many TLB entries actually belong to this mm struct. Or am I
> missing something?


No, we cannot know the exact number of TLB entries belonging to the mm.
But usually, by the time a process issues mprotect/munmap etc. system
calls, it has touched much of its memory and has already filled many
TLB entries.

This point is implicitly accounted for in the balance-point calculation.
Check the following equation:
	(512 - X) * 100ns (assumed TLB refill cost) =
		X (TLB flush entries) * 100ns (assumed invlpg cost)

The X value we measured is far lower than the theoretical value. That
suggests either that not many TLB entries remain for the mm, or that the
TLB refill cost is much lower than assumed thanks to the hardware
prefetcher.

>> +			if ((end - start)/PAGE_SIZE > act_entries/FLUSHALL_BAR)
> 
> Oh, in a later patch you do this:
> 
> +                       if ((end - start) >> PAGE_SHIFT >
> +                                       act_entries >> tlb_flushall_factor)
> 
> and the tlb_flushall_factor factor is 5 or 6 but the division by 16
> (FLUSHALL_BAR) was a >> 4. So, is this to assume that it is not 16 but
> actually more than 32 or even 64 TLB entries where a full TLB flush
> makes sense and one-by-one if less?


Yes, FLUSHALL_BAR is just a guessed value here. Taking your advice, I
modified the benchmark a little and got a more sensible value in the
later patch.

BTW, I found an 8% performance increase on kbuild on SNB EP, averaged
over multiple test runs, though the run-to-run variation is up to 15%.