Date:	Mon, 5 Oct 2015 09:23:15 -0700
From:	Dave Hansen <dave.hansen@...ux.intel.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Theodore Ts'o <tytso@....edu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...ux.intel.com>
Subject: Re: [REGRESSION] 998ef75ddb and aio-dio-invalidate-failure w/
 data=journal

On 10/05/2015 08:58 AM, Linus Torvalds wrote:
...
> Dave, mind sharing the micro-benchmark or perhaps even just a kernel
> profile of it? How is that "iov_iter_fault_in_readable()" so
> noticeable? It really shouldn't be a big deal.

The micro was just plugging this test:

	https://www.sr71.net/~dave/intel/write1byte.c

into will-it-scale:

	https://github.com/antonblanchard/will-it-scale

iov_iter_fault_in_readable() shows up as the third-most expensive kernel
function in a profile:

>      7.45%  write1byte_proc  [kernel.kallsyms]     [k] copy_user_enhanced_fast_string 
>      6.51%  write1byte_proc  [kernel.kallsyms]     [k] unlock_page                    
>      6.04%  write1byte_proc  [kernel.kallsyms]     [k] iov_iter_fault_in_readable     
>      5.23%  write1byte_proc  libc-2.20.so          [.] __GI___libc_write              
>      4.86%  write1byte_proc  [kernel.kallsyms]     [k] entry_SYSCALL_64               
>      4.48%  write1byte_proc  [kernel.kallsyms]     [k] iov_iter_copy_from_user_atomic 
>      3.94%  write1byte_proc  [kernel.kallsyms]     [k] generic_perform_write          
>      3.74%  write1byte_proc  [kernel.kallsyms]     [k] mutex_lock                     
>      3.59%  write1byte_proc  [kernel.kallsyms]     [k] entry_SYSCALL_64_after_swapgs  
>      3.55%  write1byte_proc  [kernel.kallsyms]     [k] find_get_entry                 
>      3.53%  write1byte_proc  [kernel.kallsyms]     [k] vfs_write                      
>      3.17%  write1byte_proc  [kernel.kallsyms]     [k] find_lock_entry                
>      3.17%  write1byte_proc  [kernel.kallsyms]     [k] put_page                       

The disassembly points at the stac/clac pair being the culprits inside
the function (copy/paste from 'perf top' disassembly here):

...
>        │      stac
>  24.57 │      mov    (%rcx),%sil
>  15.70 │      clac
>  28.77 │      test   %eax,%eax
>   2.15 │      mov    %sil,-0x1(%rbp)
>   8.93 │    ↓ jne    66
>   2.31 │      movslq %edx,%rdx

One thing I've been noticing on Skylake is that barriers (implicit and
explicit) are showing up more in profiles.  What we're seeing here
probably isn't actually stac/clac overhead, but the cost of finishing
some other operations that are outstanding before we can proceed through
here.