Message-Id: <1469457565-22693-1-git-send-email-kwalker@redhat.com>
Date: Mon, 25 Jul 2016 10:39:25 -0400
From: Kyle Walker <kwalker@...hat.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Kyle Walker <kwalker@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Geliang Tang <geliangtang@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <klamm@...dex-team.ru>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCH] mm: Move readahead limit outside of the readahead and advisory syscalls

Java workloads that use MappedByteBuffer make extensive use of the
fadvise() and madvise() syscalls. Following recent readahead-limiting
changes, such as 600e19af ("mm: use only per-device readahead limit")
and 6d2be915 ("mm/readahead.c: fix readahead failure for memoryless
NUMA nodes and limit readahead pages"), application performance
suffers wherever a small readahead limit is configured.
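
For reference, the userspace pattern at issue boils down to something
like the sketch below (the file name is invented for illustration;
MappedByteBuffer.load() is believed to issue madvise(MADV_WILLNEED)
over the mapped region in much this way):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int err;
	int fd = open("/tmp/datafile", O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror("open/fstat");
		return 1;
	}

	void *addr = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Advise readahead of the entire mapping up front; with a
	 * small ra_pages setting, this request is currently clamped. */
	if (madvise(addr, st.st_size, MADV_WILLNEED) < 0)
		perror("madvise");

	/* The fadvise() path is subject to the same limit. */
	err = posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
	if (err)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

	munmap(addr, st.st_size);
	close(fd);
	return 0;
}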

By moving this limit out of the syscall codepaths, the syscalls can
advise an arbitrarily large amount of readahead when desired, with a
cap imposed at half of the sum of NR_INACTIVE_FILE and NR_FREE_PAGES.
In essence, performance tuning efforts can still set a small readahead
limit, while selectively benefiting from large sequential readahead.
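
As a rough illustration (a hedged sketch, not part of the patch), the
effective ceiling on a given system can be read out of /proc/vmstat,
assuming the nr_inactive_file and nr_free_pages fields:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long val, inactive_file = 0, free_pages = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fscanf(f, "%63s %lu", name, &val) == 2) {
		if (!strcmp(name, "nr_inactive_file"))
			inactive_file = val;
		else if (!strcmp(name, "nr_free_pages"))
			free_pages = val;
	}
	fclose(f);

	/* Mirrors the clamp added in force_page_cache_readahead(). */
	printf("advisory readahead cap: %lu pages\n",
	       (inactive_file + free_pages) / 2);
	return 0;
}

For example, with 1048576 inactive file pages and 524288 free pages
(invented figures), an madvise() request could initiate up to 786432
pages of readahead, regardless of a small per-device ra_pages setting.
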
Signed-off-by: Kyle Walker <kwalker@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Geliang Tang <geliangtang@....com>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <klamm@...dex-team.ru>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
---
 mm/readahead.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 65ec288..6f8bb44 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -211,7 +211,9 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
 		return -EINVAL;
 
-	nr_to_read = min(nr_to_read, inode_to_bdi(mapping->host)->ra_pages);
+	nr_to_read = min(nr_to_read, (global_page_state(NR_INACTIVE_FILE) +
+			global_page_state(NR_FREE_PAGES)) / 2);
+
 	while (nr_to_read) {
 		int err;
 
@@ -484,6 +486,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 	/* be dumb */
 	if (filp && (filp->f_mode & FMODE_RANDOM)) {
+		req_size = min(req_size, inode_to_bdi(mapping->host)->ra_pages);
 		force_page_cache_readahead(mapping, filp, offset, req_size);
 		return;
 	}
 
--
2.5.5