Remove do_page_cache_readahead().  Its last user, mmap read-around, has
been changed to call ra_submit().

Also, the no-readahead-if-congested logic is not appropriate here:
raw 1-page reads can only make things painfully slower, and users are
pretty sensitive to slow loading of executables.

Signed-off-by: Fengguang Wu
---
 include/linux/mm.h |    2 --
 mm/readahead.c     |   16 ----------------
 2 files changed, 18 deletions(-)

--- linux-2.6.24-rc5-mm1.orig/include/linux/mm.h
+++ linux-2.6.24-rc5-mm1/include/linux/mm.h
@@ -1084,8 +1084,6 @@ int write_one_page(struct page *page, in
 #define VM_MAX_READAHEAD	128	/* kbytes */
 #define VM_MIN_READAHEAD	16	/* kbytes (includes current page) */
 
-int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
-			pgoff_t offset, unsigned long nr_to_read);
 int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 			pgoff_t offset, unsigned long nr_to_read);
 
--- linux-2.6.24-rc5-mm1.orig/mm/readahead.c
+++ linux-2.6.24-rc5-mm1/mm/readahead.c
@@ -208,22 +208,6 @@ int force_page_cache_readahead(struct ad
 }
 
 /*
- * This version skips the IO if the queue is read-congested, and will tell the
- * block layer to abandon the readahead if request allocation would block.
- *
- * force_page_cache_readahead() will ignore queue congestion and will block on
- * request queues.
- */
-int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
-			pgoff_t offset, unsigned long nr_to_read)
-{
-	if (bdi_read_congested(mapping->backing_dev_info))
-		return -1;
-
-	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
-}
-
-/*
  * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
  * sensible upper limit.
  */
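
For reference, a rough sketch of what the converted mmap read-around
call site looks like after this series.  It assumes the ra_submit()
helper and the file_ra_state start/size/async_size fields introduced
earlier in the series; the function name and the window-sizing
heuristic below are illustrative only, not the exact filemap.c code:

	/*
	 * Sketch: read-around without do_page_cache_readahead().
	 * Fill in a readahead window centered on the faulting page,
	 * then submit it directly.
	 */
	static void mmap_readaround_sketch(struct file *filp, pgoff_t offset)
	{
		struct file_ra_state *ra = &filp->f_ra;
		unsigned long ra_pages = max_sane_readahead(ra->ra_pages);

		if (!ra_pages)
			return;

		/* center the read-around window on the faulting page */
		ra->start = max_t(long, 0, offset - ra_pages / 2);
		ra->size = ra_pages;
		ra->async_size = 0;

		/* ra_submit() goes straight to __do_page_cache_readahead() */
		ra_submit(ra, filp->f_mapping, filp);
	}

Note that there is no bdi_read_congested() check anywhere in this path,
which is the point: degrading read-around to raw 1-page reads under
congestion only makes executable loading slower.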