Message-ID: <604427e00904081302p7aad170bu5ff0702415455f7@mail.gmail.com>
Date: Wed, 8 Apr 2009 13:02:23 -0700
From: Ying Han <yinghan@...gle.com>
To: linux-mm@...ck.org, linux-kernel <linux-kernel@...r.kernel.org>,
akpm <akpm@...ux-foundation.org>, torvalds@...ux-foundation.org,
Ingo Molnar <mingo@...e.hu>, Mike Waychison <mikew@...gle.com>,
Rohit Seth <rohitseth@...gle.com>,
Hugh Dickins <hugh@...itas.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Török Edwin <edwintorok@...il.com>,
Lee Schermerhorn <lee.schermerhorn@...com>,
Nick Piggin <npiggin@...e.de>,
Wu Fengguang <fengguang.wu@...el.com>
Subject: [PATCH 0/2] page_fault retry with NOPAGE_RETRY
changelog[v3]:
- applied fixes and cleanups from Wu Fengguang.
filemap VM_FAULT_RETRY fixes
[PATCH 01/14] mm: fix find_lock_page_retry() return value parsing
[PATCH 02/14] mm: fix major/minor fault accounting on retried fault
[PATCH 04/14] mm: reduce duplicate page fault code
[PATCH 05/14] readahead: account mmap_miss for VM_FAULT_RETRY
- split the patch into two parts (see the sketch below). The first part adds
FAULT_FLAG_RETRY support without changing any current users; the second part
contains the per-architecture changes that actually enable FAULT_FLAG_RETRY.
There are currently two main callers of handle_mm_fault(): we enable
FAULT_FLAG_RETRY for the real page fault handler and leave get_user_pages()
unchanged.
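
To illustrate the idea, here is a minimal sketch of the arch-side retry
loop (illustration only, not the code in this series; how the flag is
passed into handle_mm_fault() and the usual vma/permission checks are
simplified):

static void fault_with_retry(struct mm_struct *mm, unsigned long address,
			     int write)
{
	struct vm_area_struct *vma;
	unsigned int retry = FAULT_FLAG_RETRY;	/* allow a dropping first try */
	int fault;

again:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	/* ... the usual vma and access permission checks, elided ... */
	fault = handle_mm_fault(mm, vma, address, write | retry);
	if (fault & VM_FAULT_RETRY) {
		/*
		 * The fault path released mmap_sem and waited for the
		 * page to become uptodate outside the lock; take the
		 * semaphore again and retry once, this time blocking
		 * with mmap_sem held.
		 */
		retry = 0;
		goto again;
	}
	/* ... VM_FAULT_MAJOR/MINOR accounting and error handling ... */
	up_read(&mm->mmap_sem);
}
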
Benchmarks:
posted on [V1]:
case 1: one application with a high count of threads, each faulting in
different pages of a huge file. The benchmark indicates that the double
data-structure walk on a major fault results in a << 1% performance hit.
case 2: add another thread to the above application which runs
mmap()/munmap() in a tight loop. Here we measure the loop count in the new
thread while the other threads do the same amount of work as in case 1. We
see a << 3% performance hit on the Complete Time (the benchmark value from
case 1) and a 10% improvement on the mmap()/munmap() counter.
This patch helps a lot in cases where a writer is waiting behind all the
readers, since it can now proceed much sooner.
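
For reference, a rough userspace sketch of the case-2 workload (thread
count, file name, sizes and run time below are illustrative, not the
values used in the original benchmark; error handling is elided):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NR_READERS	16
#define FILE_SIZE	(1UL << 30)	/* data file assumed to be >= 1 GB */

static volatile int stop;
static unsigned long mmap_loops;

static void *reader(void *arg)		/* fault in random pages of the file */
{
	char *map = arg;
	volatile char sum = 0;

	while (!stop) {
		unsigned long off = ((unsigned long)rand() * 4096UL) % FILE_SIZE;
		sum += map[off];
	}
	return NULL;
}

static void *mapper(void *arg)		/* tight mmap()/munmap() loop */
{
	(void)arg;
	while (!stop) {
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		munmap(p, 4096);
		mmap_loops++;		/* only this thread writes the counter */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_READERS], mtid;
	int i, fd = open("bigfile", O_RDONLY);
	char *map = mmap(NULL, FILE_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);

	for (i = 0; i < NR_READERS; i++)
		pthread_create(&tid[i], NULL, reader, map);
	pthread_create(&mtid, NULL, mapper, NULL);

	sleep(30);			/* measurement window */
	stop = 1;

	for (i = 0; i < NR_READERS; i++)
		pthread_join(tid[i], NULL);
	pthread_join(mtid, NULL);
	printf("mmap()/munmap() loops: %lu\n", mmap_loops);
	return 0;
}
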
Some new test results from Wu Fengguang:
Just tested the sparse-random-read-on-sparse-file case, and found the
performance impact to be 0.4% (8.706s vs 8.744s). Kind of acceptable.
without FAULT_FLAG_RETRY:
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.28s user 5.39s system 99% cpu 8.692 total
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.17s user 5.54s system 99% cpu 8.742 total
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.18s user 5.48s system 99% cpu 8.684 total
with FAULT_FLAG_RETRY:
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.18s user 5.63s system 99% cpu 8.825 total
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.22s user 5.47s system 99% cpu 8.718 total
iotrace.rb --load stride-100 --mplay /mnt/btrfs-ram/sparse  3.13s user 5.55s system 99% cpu 8.690 total
In the above synthetic workload, the mmap read page offsets are loaded from
stride-100 and performed on /mnt/btrfs-ram/sparse; the two files are created by:
seq 0 100 1000000 > stride-100
dd if=/dev/zero of=/mnt/btrfs-ram/sparse bs=1M count=1 seek=1024000
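
Based only on that description (the real iotrace.rb may do more than
this), the replayed access pattern is roughly equivalent to:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1024001UL << 20;	/* ~1 TB sparse file from the dd above */
	FILE *trace = fopen("stride-100", "r");
	int fd = open("/mnt/btrfs-ram/sparse", O_RDONLY);
	char *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
	volatile char sum = 0;
	unsigned long pgoff;

	/* error handling elided for brevity */
	while (fscanf(trace, "%lu", &pgoff) == 1)
		sum += map[pgoff << 12];	/* one mmap read per listed page offset */

	munmap(map, len);
	fclose(trace);
	close(fd);
	return 0;
}
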
Signed-off-by: Ying Han <yinghan@...gle.com>
Signed-off-by: Mike Waychison <mikew@...gle.com>
arch/x86/mm/fault.c | 20 ++++++++++++++
include/linux/fs.h | 2 +-
include/linux/mm.h | 2 +
mm/filemap.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++--
mm/memory.c | 33 +++++++++++++++++------
5 files changed, 116 insertions(+), 13 deletions(-)