Message-ID: <20170724114622.GK652@quack2.suse.cz>
Date: Mon, 24 Jul 2017 13:46:22 +0200
From: Jan Kara <jack@...e.cz>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Theodore Ts'o <tytso@....edu>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andreas Dilger <adilger.kernel@...ger.ca>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>,
Dave Chinner <david@...morbit.com>,
David Airlie <airlied@...ux.ie>,
Ingo Molnar <mingo@...hat.com>,
Inki Dae <inki.dae@...sung.com>, Jan Kara <jack@...e.cz>,
Jonathan Corbet <corbet@....net>,
Joonyoung Shim <jy0922.shim@...sung.com>,
Krzysztof Kozlowski <krzk@...nel.org>,
Kukjin Kim <kgene@...nel.org>,
Kyungmin Park <kyungmin.park@...sung.com>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Patrik Jakobsson <patrik.r.jakobsson@...il.com>,
Rob Clark <robdclark@...il.com>,
Seung-Woo Kim <sw0312.kim@...sung.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tomi Valkeinen <tomi.valkeinen@...com>,
dri-devel@...ts.freedesktop.org, freedreno@...ts.freedesktop.org,
linux-arm-kernel@...ts.infradead.org,
linux-arm-msm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-nvdimm@...ts.01.org,
linux-samsung-soc@...r.kernel.org, linux-xfs@...r.kernel.org
Subject: Re: [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads
On Fri 21-07-17 16:39:53, Ross Zwisler wrote:
> When servicing mmap() reads from file holes, the current DAX code allocates
> a page cache page of all zeroes and places the struct page pointer in the
> mapping->page_tree radix tree. This has three major drawbacks:
>
> 1) It consumes memory unnecessarily. For every 4k page that is read via a
> DAX mmap() over a hole, we allocate a new page cache page. This means that
> if you read 1GiB worth of pages, you end up using 1GiB of zeroed memory.
> This is easily visible by looking at the overall memory consumption of the
> system or by looking at /proc/[pid]/smaps:
>
> 7f62e72b3000-7f63272b3000 rw-s 00000000 103:00 12 /root/dax/data
> Size: 1048576 kB
> Rss: 1048576 kB
> Pss: 1048576 kB
> Shared_Clean: 0 kB
> Shared_Dirty: 0 kB
> Private_Clean: 1048576 kB
> Private_Dirty: 0 kB
> Referenced: 1048576 kB
> Anonymous: 0 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> Swap: 0 kB
> SwapPss: 0 kB
> KernelPageSize: 4 kB
> MMUPageSize: 4 kB
> Locked: 0 kB
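
[ For anyone who wants to reproduce this without fio: a minimal C
  equivalent of the fio job quoted below is enough to generate the smaps
  footprint above.  The file path and 1 GiB size are just the ones used
  in this example; nothing in this sketch is from the patch itself. ]

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 30;			/* 1 GiB */
	int fd = open("/root/dax/data", O_RDWR | O_CREAT, 0644);
	volatile unsigned char sum = 0;

	/* ftruncate() leaves the whole file as a hole (no fallocate). */
	if (fd < 0 || ftruncate(fd, len))
		return 1;

	unsigned char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/*
	 * Read-fault every 4k page.  Each fault over the hole used to
	 * allocate a zeroed page cache page; with this patch it maps the
	 * common zero page instead.
	 */
	for (size_t off = 0; off < len; off += 4096)
		sum += p[off];

	munmap(p, len);
	close(fd);
	return 0;
}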
>
> 2) It is slower than using a common zero page because each page fault has
> more work to do. Instead of just inserting a common zero page we have to
> allocate a page cache page, zero it, and then insert it. Here are the
> average latencies of dax_load_hole() as measured by ftrace on a random test
> box:
>
> Old method, using zeroed page cache pages: 3.4 us
> New method, using the common 4k zero page: 0.8 us
>
> This was the average latency over 1 GiB of sequential reads done by this
> simple fio script:
>
> [global]
> size=1G
> filename=/root/dax/data
> fallocate=none
> [io]
> rw=read
> ioengine=mmap
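
[ Aside: the per-call latencies above can be captured with the
  function_graph tracer through the usual tracefs knobs, roughly like
  this (the tracing mount point may differ on your system): ]

  # cd /sys/kernel/debug/tracing
  # echo dax_load_hole > set_graph_function
  # echo function_graph > current_tracer
  # echo 1 > tracing_on
  ... run the fio job above ...
  # echo 0 > tracing_on
  # grep dax_load_hole trace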
>
> 3) The fact that we had to check for both DAX exceptional entries and for
> page cache pages in the radix tree made the DAX code more complex.
>
> Solve these issues by following the lead of the DAX PMD code and using a
> common 4k zero page instead. As with the PMD code we will now insert a DAX
> exceptional entry into the radix tree instead of a struct page pointer
> which allows us to remove all the special casing in the DAX code.
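
[ For readers who haven't opened the patch itself: the hole handler
  after this change ends up roughly in the following shape.  This is a
  simplified sketch using the names from the series (RADIX_DAX_ZERO_PAGE,
  dax_insert_mapping_entry()); see the patch for the exact error
  handling and tracing. ]

static int dax_load_hole(struct address_space *mapping, void *entry,
			 struct vm_fault *vmf)
{
	struct inode *inode = mapping->host;
	struct page *zero_page = ZERO_PAGE(0);
	int ret = VM_FAULT_NOPAGE;
	void *entry2;

	/* Store an exceptional entry, not a struct page, in the radix tree. */
	entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
					  RADIX_DAX_ZERO_PAGE);
	if (IS_ERR(entry2)) {
		ret = VM_FAULT_SIGBUS;
		goto out;
	}

	/*
	 * Map the common zero page read-only; a later write faults again
	 * and goes through the normal iomap write fault path.
	 */
	vm_insert_mixed(vmf->vma, vmf->address,
			pfn_to_pfn_t(page_to_pfn(zero_page)));
out:
	trace_dax_load_hole(inode, vmf, ret);
	return ret;
}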
>
> Note that we still check quite aggressively for regular pages in the
> DAX radix tree, especially where we take action based on the bits set in
> the entry. If we ever find a regular page in our radix tree now, that
> most likely means that someone besides DAX is inserting pages (which has
> happened many times in the past), and we want to detect that early and
> fail loudly.
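
[ Concretely, the "fail loudly" part boils down to checks of this shape
  wherever a radix tree entry is consumed (illustrative only): ]

	/*
	 * Only DAX exceptional entries belong in this radix tree; a plain
	 * struct page means some other path has been inserting into our
	 * mapping, so warn and bail out.
	 */
	if (WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
		return VM_FAULT_SIGBUS;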
>
> This solution also removes the extra memory consumption. Here is that same
> /proc/[pid]/smaps after 1GiB of reading from a hole with the new code:
>
> 7f2054a74000-7f2094a74000 rw-s 00000000 103:00 12 /root/dax/data
> Size: 1048576 kB
> Rss: 0 kB
> Pss: 0 kB
> Shared_Clean: 0 kB
> Shared_Dirty: 0 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced: 0 kB
> Anonymous: 0 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> ShmemPmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> Swap: 0 kB
> SwapPss: 0 kB
> KernelPageSize: 4 kB
> MMUPageSize: 4 kB
> Locked: 0 kB
>
> Overall system memory consumption is similarly improved.
>
> Another major change is that we remove dax_pfn_mkwrite() from our fault
> flow, and instead rely on the page fault itself to make the PTE dirty and
> writeable. The following description from the patch adding the
> vm_insert_mixed_mkwrite() call explains this a little more:
>
> ***
> To be able to use the common 4k zero page in DAX we need to have our PTE
> fault path look more like our PMD fault path where a PTE entry can be
> marked as dirty and writeable as it is first inserted, rather than
> waiting for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
>
> Right now we can rely on having a dax_pfn_mkwrite() call because we can
> distinguish between these two cases in do_wp_page():
>
> case 1: 4k zero page => writable DAX storage
> case 2: read-only DAX storage => writable DAX storage
>
> This distinction is made via vm_normal_page(). vm_normal_page() returns
> NULL for the common 4k zero page, though, just as it does for DAX ptes.
> Instead of special casing the DAX + 4k zero page case, we will
> simplify our DAX PTE page fault sequence so that it matches our DAX PMD
> sequence, and get rid of the dax_pfn_mkwrite() helper. We will instead
> use dax_iomap_fault() to handle write-protection faults.
>
> This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
> and allow us to pass in a 'mkwrite' flag. If 'mkwrite' is set
> insert_pfn() will do the work that was previously done by wp_page_reuse()
> as part of the dax_pfn_mkwrite() call path.
> ***
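
[ And for completeness, the insert_pfn() side of that earlier patch is
  roughly the following when a new PTE is created (simplified; the real
  code also has to handle the case where a PTE is already present): ]

static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
			pfn_t pfn, pgprot_t prot, bool mkwrite)
{
	...
	/* Build the PTE for the pfn we are inserting. */
	if (pfn_t_devmap(pfn))
		entry = pte_mkdevmap(pfn_t_pte(pfn, prot));
	else
		entry = pte_mkspecial(pfn_t_pte(pfn, prot));

	/*
	 * This is the work wp_page_reuse() used to do for the old
	 * dax_pfn_mkwrite() path: make the PTE young, dirty and writable.
	 */
	if (mkwrite) {
		entry = pte_mkyoung(entry);
		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
	}

	set_pte_at(vma->vm_mm, addr, pte, entry);
	update_mmu_cache(vma, addr, pte);
	...
}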
>
> Signed-off-by: Ross Zwisler <ross.zwisler@...ux.intel.com>
The patch looks good to me. You can add:
Reviewed-by: Jan Kara <jack@...e.cz>
And I really like that we've got rid of these pagecache hole pages!
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR