Date:	Fri, 14 Sep 2007 06:48:55 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	David Chinner <dgc@....com>
Cc:	Mel Gorman <mel@...net.ie>, Christoph Lameter <clameter@....com>,
	andrea@...e.de, torvalds@...ux-foundation.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@....de>,
	William Lee Irwin III <wli@...omorphy.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Badari Pulavarty <pbadari@...il.com>,
	Maxim Levitsky <maximlevitsky@...il.com>,
	Fengguang Wu <fengguang.wu@...il.com>,
	swin wang <wangswin@...il.com>, totty.lu@...il.com,
	hugh@...itas.com, joern@...ybastard.org
Subject: Re: [00/41] Large Blocksize Support V7 (adds memmap support)

On Thursday 13 September 2007 12:01, Nick Piggin wrote:
> On Thursday 13 September 2007 23:03, David Chinner wrote:
> > Then just do operations on directories with lots of files in them
> > (tens of thousands). Every directory operation will require at
> > least one vmap in this situation - e.g. a traversal will result in
> > lots and lots of blocks being read that will require vmap() for every
> > directory block read from disk and an unmap almost immediately
> > afterwards when the reference is dropped....
>
> Ah, wow, thanks: I can reproduce it.
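
For anyone following along, the pattern being described is roughly the
one sketched below. This is illustrative only, not the actual XFS
buffer code (the structure and function names are made up): a
multi-page directory block buffer gets vmap()ed so it can be read as
one virtually contiguous range, and vunmap()ed again as soon as the
reference is dropped, so a big directory traversal becomes a
vmap/vunmap storm.

/*
 * Illustrative sketch only (hypothetical names): map a multi-page
 * metadata buffer so it can be accessed as one contiguous range, then
 * tear the mapping down when the reference is dropped.  On the
 * unbatched path, every teardown ends in a kernel TLB flush on all
 * CPUs, which is where the sn_send_IPI_phys/smp_call_function time in
 * the profile further down was going.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

struct dir_block_buf {
	struct page	**pages;	/* pages backing this block */
	unsigned int	page_count;
	void		*addr;		/* mapped address */
};

static int dir_block_map(struct dir_block_buf *bp)
{
	/* Single-page blocks don't need vmap at all. */
	if (bp->page_count == 1) {
		bp->addr = page_address(bp->pages[0]);
		return 0;
	}

	/* Multi-page block: make it virtually contiguous. */
	bp->addr = vmap(bp->pages, bp->page_count, VM_MAP, PAGE_KERNEL);
	return bp->addr ? 0 : -ENOMEM;
}

static void dir_block_unmap(struct dir_block_buf *bp)
{
	/* Unmap immediately after use: unbatched, this flushes TLBs. */
	if (bp->page_count > 1)
		vunmap(bp->addr);
	bp->addr = NULL;
}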

OK, the vunmap batching code wipes your TLB flushing and IPIs off
the table. The full profile diff is below, but the TLB-related portions
are here (beyond these, _everything_ is probably lower because the
reduced TLB flushing means fewer TLB misses):

      -170   -99.4% sn2_send_IPI
      -343  -100.0% sn_send_IPI_phys
    -17911   -99.9% smp_call_function


Total runtime dropped by about 30% on a 64-way system (from 248
seconds to 172 seconds to run parallel finds over different huge
directories).

     23012  54790.5% _read_lock
      9427   329.0% __get_vm_area_node
      5792     0.0% __find_vm_area
      1590  53000.0% __vunmap
       107    26.0% _spin_lock
        74   119.4% _xfs_buf_find
        58     0.0% __unmap_kernel_range
        53    36.6% kmem_zone_alloc
      -129  -100.0% pio_phys_write_mmr
      -144  -100.0% unmap_kernel_range
      -170   -99.4% sn2_send_IPI
      -233   -59.1% kfree
      -266  -100.0% find_next_bit
      -343  -100.0% sn_send_IPI_phys
      -564   -19.9% xfs_iget_core
     -1946  -100.0% remove_vm_area
    -17911   -99.9% smp_call_function
    -62726    -7.2% _write_lock
   -438360   -64.2% default_idle
   -482631   -30.4% total
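
For reference, the batching idea is roughly the following. This is a
simplified sketch of the general technique, not the actual patch
(names, thresholds and locking here are made up): instead of unmapping
and flushing TLBs on every vunmap, freed ranges are parked on a list
and one flush_tlb_kernel_range() call, and so one round of IPIs,
covers the whole batch.

/*
 * Simplified sketch of batched/lazy vunmap (hypothetical names, not
 * the real patch): freed virtual ranges are queued instead of being
 * flushed one by one, and a single kernel TLB flush covers the whole
 * batch once enough address space has accumulated.
 */
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/tlbflush.h>

struct lazy_vmap_area {
	struct list_head	list;
	unsigned long		start;
	unsigned long		end;
};

static LIST_HEAD(lazy_list);
static DEFINE_SPINLOCK(lazy_lock);
static unsigned long lazy_nr_pages;

/* Threshold before bothering with a global flush; purely illustrative. */
#define LAZY_MAX_PAGES	(32UL * 1024)

/* One flush_tlb_kernel_range() (one round of IPIs) per batch, instead
 * of one per vunmap. */
static void purge_lazy_areas(void)
{
	struct lazy_vmap_area *va, *tmp;
	unsigned long start = ULONG_MAX, end = 0;
	LIST_HEAD(local);

	spin_lock(&lazy_lock);
	list_splice_init(&lazy_list, &local);
	lazy_nr_pages = 0;
	spin_unlock(&lazy_lock);

	list_for_each_entry(va, &local, list) {
		start = min(start, va->start);
		end = max(end, va->end);
	}
	if (end > start)
		flush_tlb_kernel_range(start, end);

	list_for_each_entry_safe(va, tmp, &local, list) {
		list_del(&va->list);
		kfree(va);
	}
}

/* Called from the vunmap path instead of flushing synchronously. */
static void lazy_free_vmap_area(struct lazy_vmap_area *va)
{
	bool purge;

	spin_lock(&lazy_lock);
	list_add_tail(&va->list, &lazy_list);
	lazy_nr_pages += (va->end - va->start) >> PAGE_SHIFT;
	purge = lazy_nr_pages > LAZY_MAX_PAGES;
	spin_unlock(&lazy_lock);

	if (purge)
		purge_lazy_areas();
}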

Next I have some patches to scale the vmap locks and data
structures better, but they're not quite ready yet. Those look like
they should give a further several-fold speedup on this workload when
combined with the TLB flushing reductions here...
