Message-ID: <202106262126.An2i4ZxC-lkp@intel.com>
Date:   Sat, 26 Jun 2021 21:26:41 +0800
From:   kernel test robot <lkp@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     kbuild-all@...ts.01.org, clang-built-linux@...glegroups.com,
        linux-kernel@...r.kernel.org,
        Valentin Schneider <valentin.schneider@....com>
Subject: mm/vmscan.c:1071:21: warning: stack frame size (2064) exceeds limit
 (2048) in function 'shrink_page_list'

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   b7050b242430f3170e0b57f5f55136e44cb8dc66
commit: b02a4fd8148f655095d9e3d6eddd8f0042bcc27c cpumask: Make cpu_{online,possible,present,active}() inline
date:   2 months ago
config: powerpc-randconfig-r034-20210626 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 557b101ce714e39438ba1d39c4c50b03e12fcb96)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc cross compiling tool for clang build
        # apt-get install binutils-powerpc-linux-gnu
        # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b02a4fd8148f655095d9e3d6eddd8f0042bcc27c
        git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
        git fetch --no-tags linus master
        git checkout b02a4fd8148f655095d9e3d6eddd8f0042bcc27c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@...el.com>

All warnings (new ones prefixed by >>):

   In file included from mm/vmscan.c:19:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:45:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:186:1: note: expanded from here
   __do_insw
   ^
   arch/powerpc/include/asm/io.h:557:56: note: expanded from macro '__do_insw'
   #define __do_insw(p, b, n)      readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
                                          ~~~~~~~~~~~~~~~~~~~~~^
   In file included from mm/vmscan.c:19:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:47:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:190:1: note: expanded from here
   __do_insl
   ^
   arch/powerpc/include/asm/io.h:558:56: note: expanded from macro '__do_insl'
   #define __do_insl(p, b, n)      readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
                                          ~~~~~~~~~~~~~~~~~~~~~^
   In file included from mm/vmscan.c:19:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:49:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:194:1: note: expanded from here
   __do_outsb
   ^
   arch/powerpc/include/asm/io.h:559:58: note: expanded from macro '__do_outsb'
   #define __do_outsb(p, b, n)     writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
   In file included from mm/vmscan.c:19:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:51:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:198:1: note: expanded from here
   __do_outsw
   ^
   arch/powerpc/include/asm/io.h:560:58: note: expanded from macro '__do_outsw'
   #define __do_outsw(p, b, n)     writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
   In file included from mm/vmscan.c:19:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:10:
   In file included from arch/powerpc/include/asm/hardirq.h:6:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/powerpc/include/asm/io.h:619:
   arch/powerpc/include/asm/io-defs.h:53:1: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
   DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c),
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
                   __do_##name al;                                 \
                   ^~~~~~~~~~~~~~
   <scratch space>:202:1: note: expanded from here
   __do_outsl
   ^
   arch/powerpc/include/asm/io.h:561:58: note: expanded from macro '__do_outsl'
   #define __do_outsl(p, b, n)     writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
                                           ~~~~~~~~~~~~~~~~~~~~~^
>> mm/vmscan.c:1071:21: warning: stack frame size (2064) exceeds limit (2048) in function 'shrink_page_list' [-Wframe-larger-than]
   static unsigned int shrink_page_list(struct list_head *page_list,
                       ^
   14 warnings generated.
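For reference, a minimal sketch of the pattern the -Wnull-pointer-arithmetic warnings point at. The macro name MY_IO_BASE is hypothetical; the point is that on configs where _IO_BASE can expand to the integer constant 0, the cast in __do_insw() and friends yields a null pointer, and the "+ (p)" that the caret marks is arithmetic on that null pointer, which is formally undefined behavior:

```c
#include <stdint.h>

/* Hypothetical stand-in for _IO_BASE; assume it expands to 0 on
 * this randconfig, as the warnings suggest. */
#define MY_IO_BASE 0UL

static uintptr_t port_to_addr(unsigned long p)
{
	/* Mirrors (PCI_IO_ADDR)_IO_BASE + (p): when the base is 0,
	 * the cast produces a null pointer and the addition below is
	 * what clang flags with -Wnull-pointer-arithmetic. */
	volatile unsigned char *base = (volatile unsigned char *)MY_IO_BASE;
	return (uintptr_t)(base + p);
}
```

In practice the addition still computes base-plus-offset on the architectures the kernel supports, which is why these warnings are long-standing noise on powerpc rather than a new bug.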


vim +/shrink_page_list +1071 mm/vmscan.c
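As background for the frame-size warning itself: with -Wframe-larger-than=2048, any function whose locals sum past 2 KiB draws the same diagnostic shrink_page_list() gets here, and inlining helpers (as the blamed cpumask commit does) can pull their temporaries into the caller's frame. A minimal, purely illustrative sketch (the struct and sizes below are made up, not taken from vmscan.c):

```c
#include <string.h>

/* 256 * sizeof(unsigned long) = 2048 bytes on a 64-bit target:
 * one such local already consumes the whole 2048-byte budget, so
 * anything else on the stack pushes the frame over the limit. */
struct big_local {
	unsigned long bits[256];
};

static unsigned long sum_bits(void)
{
	struct big_local l;	/* lives on the stack: ~2 KiB of frame */
	unsigned long sum = 0;
	int i;

	memset(&l, 0, sizeof(l));
	l.bits[0] = 16;
	for (i = 0; i < 256; i++)
		sum += l.bits[i];
	return sum;
}
```

Compiling a function like this with -Wframe-larger-than=2048 would reproduce the shape of the diagnostic; the fix in such cases is usually to shrink or heap-allocate the large local rather than raise the limit.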

e2be15f6c3eece Mel Gorman              2013-07-03  1067  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1068  /*
1742f19fa920cd Andrew Morton           2006-03-22  1069   * shrink_page_list() returns the number of reclaimed pages
^1da177e4c3f41 Linus Torvalds          2005-04-16  1070   */
730ec8c01a2bd6 Maninder Singh          2020-06-03 @1071  static unsigned int shrink_page_list(struct list_head *page_list,
599d0c954f91d0 Mel Gorman              2016-07-28  1072  				     struct pglist_data *pgdat,
f84f6e2b0868f1 Mel Gorman              2011-10-31  1073  				     struct scan_control *sc,
3c710c1ad11b4a Michal Hocko            2017-02-22  1074  				     struct reclaim_stat *stat,
8940b34a4e082a Minchan Kim             2019-09-25  1075  				     bool ignore_references)
^1da177e4c3f41 Linus Torvalds          2005-04-16  1076  {
^1da177e4c3f41 Linus Torvalds          2005-04-16  1077  	LIST_HEAD(ret_pages);
abe4c3b50c3f25 Mel Gorman              2010-08-09  1078  	LIST_HEAD(free_pages);
730ec8c01a2bd6 Maninder Singh          2020-06-03  1079  	unsigned int nr_reclaimed = 0;
730ec8c01a2bd6 Maninder Singh          2020-06-03  1080  	unsigned int pgactivate = 0;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1081  
060f005f074791 Kirill Tkhai            2019-03-05  1082  	memset(stat, 0, sizeof(*stat));
^1da177e4c3f41 Linus Torvalds          2005-04-16  1083  	cond_resched();
^1da177e4c3f41 Linus Torvalds          2005-04-16  1084  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1085  	while (!list_empty(page_list)) {
^1da177e4c3f41 Linus Torvalds          2005-04-16  1086  		struct address_space *mapping;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1087  		struct page *page;
8940b34a4e082a Minchan Kim             2019-09-25  1088  		enum page_references references = PAGEREF_RECLAIM;
4b793062674707 Kirill Tkhai            2020-04-01  1089  		bool dirty, writeback, may_enter_fs;
98879b3b9edc16 Yang Shi                2019-07-11  1090  		unsigned int nr_pages;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1091  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1092  		cond_resched();
^1da177e4c3f41 Linus Torvalds          2005-04-16  1093  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1094  		page = lru_to_page(page_list);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1095  		list_del(&page->lru);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1096  
529ae9aaa08378 Nick Piggin             2008-08-02  1097  		if (!trylock_page(page))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1098  			goto keep;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1099  
309381feaee564 Sasha Levin             2014-01-23  1100  		VM_BUG_ON_PAGE(PageActive(page), page);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1101  
d8c6546b1aea84 Matthew Wilcox (Oracle  2019-09-23  1102) 		nr_pages = compound_nr(page);
98879b3b9edc16 Yang Shi                2019-07-11  1103  
98879b3b9edc16 Yang Shi                2019-07-11  1104  		/* Account the number of base pages even though THP */
98879b3b9edc16 Yang Shi                2019-07-11  1105  		sc->nr_scanned += nr_pages;
80e4342601abfa Christoph Lameter       2006-02-11  1106  
39b5f29ac1f988 Hugh Dickins            2012-10-08  1107  		if (unlikely(!page_evictable(page)))
ad6b67041a4549 Minchan Kim             2017-05-03  1108  			goto activate_locked;
894bc310419ac9 Lee Schermerhorn        2008-10-18  1109  
a6dc60f8975ad9 Johannes Weiner         2009-03-31  1110  		if (!sc->may_unmap && page_mapped(page))
80e4342601abfa Christoph Lameter       2006-02-11  1111  			goto keep_locked;
80e4342601abfa Christoph Lameter       2006-02-11  1112  
c661b078fd62ab Andy Whitcroft          2007-08-22  1113  		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
c661b078fd62ab Andy Whitcroft          2007-08-22  1114  			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
c661b078fd62ab Andy Whitcroft          2007-08-22  1115  
e2be15f6c3eece Mel Gorman              2013-07-03  1116  		/*
894befec4d70b1 Andrey Ryabinin         2018-04-10  1117  		 * The number of dirty pages determines if a node is marked
e2be15f6c3eece Mel Gorman              2013-07-03  1118  		 * reclaim_congested which affects wait_iff_congested. kswapd
e2be15f6c3eece Mel Gorman              2013-07-03  1119  		 * will stall and start writing pages if the tail of the LRU
e2be15f6c3eece Mel Gorman              2013-07-03  1120  		 * is all dirty unqueued pages.
e2be15f6c3eece Mel Gorman              2013-07-03  1121  		 */
e2be15f6c3eece Mel Gorman              2013-07-03  1122  		page_check_dirty_writeback(page, &dirty, &writeback);
e2be15f6c3eece Mel Gorman              2013-07-03  1123  		if (dirty || writeback)
060f005f074791 Kirill Tkhai            2019-03-05  1124  			stat->nr_dirty++;
e2be15f6c3eece Mel Gorman              2013-07-03  1125  
e2be15f6c3eece Mel Gorman              2013-07-03  1126  		if (dirty && !writeback)
060f005f074791 Kirill Tkhai            2019-03-05  1127  			stat->nr_unqueued_dirty++;
e2be15f6c3eece Mel Gorman              2013-07-03  1128  
d04e8acd03e5c3 Mel Gorman              2013-07-03  1129  		/*
d04e8acd03e5c3 Mel Gorman              2013-07-03  1130  		 * Treat this page as congested if the underlying BDI is or if
d04e8acd03e5c3 Mel Gorman              2013-07-03  1131  		 * pages are cycling through the LRU so quickly that the
d04e8acd03e5c3 Mel Gorman              2013-07-03  1132  		 * pages marked for immediate reclaim are making it to the
d04e8acd03e5c3 Mel Gorman              2013-07-03  1133  		 * end of the LRU a second time.
d04e8acd03e5c3 Mel Gorman              2013-07-03  1134  		 */
e2be15f6c3eece Mel Gorman              2013-07-03  1135  		mapping = page_mapping(page);
1da58ee2a0279a Jamie Liu               2014-12-10  1136  		if (((dirty || writeback) && mapping &&
703c270887bb51 Tejun Heo               2015-05-22  1137  		     inode_write_congested(mapping->host)) ||
d04e8acd03e5c3 Mel Gorman              2013-07-03  1138  		    (writeback && PageReclaim(page)))
060f005f074791 Kirill Tkhai            2019-03-05  1139  			stat->nr_congested++;
e2be15f6c3eece Mel Gorman              2013-07-03  1140  
e62e384e9da8d9 Michal Hocko            2012-07-31  1141  		/*
283aba9f9e0e48 Mel Gorman              2013-07-03  1142  		 * If a page at the tail of the LRU is under writeback, there
283aba9f9e0e48 Mel Gorman              2013-07-03  1143  		 * are three cases to consider.
283aba9f9e0e48 Mel Gorman              2013-07-03  1144  		 *
283aba9f9e0e48 Mel Gorman              2013-07-03  1145  		 * 1) If reclaim is encountering an excessive number of pages
283aba9f9e0e48 Mel Gorman              2013-07-03  1146  		 *    under writeback and this page is both under writeback and
283aba9f9e0e48 Mel Gorman              2013-07-03  1147  		 *    PageReclaim then it indicates that pages are being queued
283aba9f9e0e48 Mel Gorman              2013-07-03  1148  		 *    for IO but are being recycled through the LRU before the
283aba9f9e0e48 Mel Gorman              2013-07-03  1149  		 *    IO can complete. Waiting on the page itself risks an
283aba9f9e0e48 Mel Gorman              2013-07-03  1150  		 *    indefinite stall if it is impossible to writeback the
283aba9f9e0e48 Mel Gorman              2013-07-03  1151  		 *    page due to IO error or disconnected storage so instead
b1a6f21e3b2315 Mel Gorman              2013-07-03  1152  		 *    note that the LRU is being scanned too quickly and the
b1a6f21e3b2315 Mel Gorman              2013-07-03  1153  		 *    caller can stall after page list has been processed.
283aba9f9e0e48 Mel Gorman              2013-07-03  1154  		 *
97c9341f727105 Tejun Heo               2015-05-22  1155  		 * 2) Global or new memcg reclaim encounters a page that is
ecf5fc6e9654cd Michal Hocko            2015-08-04  1156  		 *    not marked for immediate reclaim, or the caller does not
ecf5fc6e9654cd Michal Hocko            2015-08-04  1157  		 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
ecf5fc6e9654cd Michal Hocko            2015-08-04  1158  		 *    not to fs). In this case mark the page for immediate
97c9341f727105 Tejun Heo               2015-05-22  1159  		 *    reclaim and continue scanning.
283aba9f9e0e48 Mel Gorman              2013-07-03  1160  		 *
ecf5fc6e9654cd Michal Hocko            2015-08-04  1161  		 *    Require may_enter_fs because we would wait on fs, which
ecf5fc6e9654cd Michal Hocko            2015-08-04  1162  		 *    may not have submitted IO yet. And the loop driver might
283aba9f9e0e48 Mel Gorman              2013-07-03  1163  		 *    enter reclaim, and deadlock if it waits on a page for
283aba9f9e0e48 Mel Gorman              2013-07-03  1164  		 *    which it is needed to do the write (loop masks off
283aba9f9e0e48 Mel Gorman              2013-07-03  1165  		 *    __GFP_IO|__GFP_FS for this reason); but more thought
283aba9f9e0e48 Mel Gorman              2013-07-03  1166  		 *    would probably show more reasons.
283aba9f9e0e48 Mel Gorman              2013-07-03  1167  		 *
7fadc820222497 Hugh Dickins            2015-09-08  1168  		 * 3) Legacy memcg encounters a page that is already marked
283aba9f9e0e48 Mel Gorman              2013-07-03  1169  		 *    PageReclaim. memcg does not have any dirty pages
283aba9f9e0e48 Mel Gorman              2013-07-03  1170  		 *    throttling so we could easily OOM just because too many
283aba9f9e0e48 Mel Gorman              2013-07-03  1171  		 *    pages are in writeback and there is nothing else to
283aba9f9e0e48 Mel Gorman              2013-07-03  1172  		 *    reclaim. Wait for the writeback to complete.
c55e8d035b28b2 Johannes Weiner         2017-02-24  1173  		 *
c55e8d035b28b2 Johannes Weiner         2017-02-24  1174  		 * In cases 1) and 2) we activate the pages to get them out of
c55e8d035b28b2 Johannes Weiner         2017-02-24  1175  		 * the way while we continue scanning for clean pages on the
c55e8d035b28b2 Johannes Weiner         2017-02-24  1176  		 * inactive list and refilling from the active list. The
c55e8d035b28b2 Johannes Weiner         2017-02-24  1177  		 * observation here is that waiting for disk writes is more
c55e8d035b28b2 Johannes Weiner         2017-02-24  1178  		 * expensive than potentially causing reloads down the line.
c55e8d035b28b2 Johannes Weiner         2017-02-24  1179  		 * Since they're marked for immediate reclaim, they won't put
c55e8d035b28b2 Johannes Weiner         2017-02-24  1180  		 * memory pressure on the cache working set any longer than it
c55e8d035b28b2 Johannes Weiner         2017-02-24  1181  		 * takes to write them to disk.
e62e384e9da8d9 Michal Hocko            2012-07-31  1182  		 */
283aba9f9e0e48 Mel Gorman              2013-07-03  1183  		if (PageWriteback(page)) {
283aba9f9e0e48 Mel Gorman              2013-07-03  1184  			/* Case 1 above */
283aba9f9e0e48 Mel Gorman              2013-07-03  1185  			if (current_is_kswapd() &&
283aba9f9e0e48 Mel Gorman              2013-07-03  1186  			    PageReclaim(page) &&
599d0c954f91d0 Mel Gorman              2016-07-28  1187  			    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
060f005f074791 Kirill Tkhai            2019-03-05  1188  				stat->nr_immediate++;
c55e8d035b28b2 Johannes Weiner         2017-02-24  1189  				goto activate_locked;
283aba9f9e0e48 Mel Gorman              2013-07-03  1190  
283aba9f9e0e48 Mel Gorman              2013-07-03  1191  			/* Case 2 above */
b5ead35e7e1d34 Johannes Weiner         2019-11-30  1192  			} else if (writeback_throttling_sane(sc) ||
ecf5fc6e9654cd Michal Hocko            2015-08-04  1193  			    !PageReclaim(page) || !may_enter_fs) {
c3b94f44fcb072 Hugh Dickins            2012-07-31  1194  				/*
c3b94f44fcb072 Hugh Dickins            2012-07-31  1195  				 * This is slightly racy - end_page_writeback()
c3b94f44fcb072 Hugh Dickins            2012-07-31  1196  				 * might have just cleared PageReclaim, then
c3b94f44fcb072 Hugh Dickins            2012-07-31  1197  				 * setting PageReclaim here end up interpreted
c3b94f44fcb072 Hugh Dickins            2012-07-31  1198  				 * as PageReadahead - but that does not matter
c3b94f44fcb072 Hugh Dickins            2012-07-31  1199  				 * enough to care.  What we do want is for this
c3b94f44fcb072 Hugh Dickins            2012-07-31  1200  				 * page to have PageReclaim set next time memcg
c3b94f44fcb072 Hugh Dickins            2012-07-31  1201  				 * reclaim reaches the tests above, so it will
c3b94f44fcb072 Hugh Dickins            2012-07-31  1202  				 * then wait_on_page_writeback() to avoid OOM;
c3b94f44fcb072 Hugh Dickins            2012-07-31  1203  				 * and it's also appropriate in global reclaim.
c3b94f44fcb072 Hugh Dickins            2012-07-31  1204  				 */
c3b94f44fcb072 Hugh Dickins            2012-07-31  1205  				SetPageReclaim(page);
060f005f074791 Kirill Tkhai            2019-03-05  1206  				stat->nr_writeback++;
c55e8d035b28b2 Johannes Weiner         2017-02-24  1207  				goto activate_locked;
283aba9f9e0e48 Mel Gorman              2013-07-03  1208  
283aba9f9e0e48 Mel Gorman              2013-07-03  1209  			/* Case 3 above */
283aba9f9e0e48 Mel Gorman              2013-07-03  1210  			} else {
7fadc820222497 Hugh Dickins            2015-09-08  1211  				unlock_page(page);
c3b94f44fcb072 Hugh Dickins            2012-07-31  1212  				wait_on_page_writeback(page);
7fadc820222497 Hugh Dickins            2015-09-08  1213  				/* then go back and try same page again */
7fadc820222497 Hugh Dickins            2015-09-08  1214  				list_add_tail(&page->lru, page_list);
7fadc820222497 Hugh Dickins            2015-09-08  1215  				continue;
e62e384e9da8d9 Michal Hocko            2012-07-31  1216  			}
283aba9f9e0e48 Mel Gorman              2013-07-03  1217  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1218  
8940b34a4e082a Minchan Kim             2019-09-25  1219  		if (!ignore_references)
6a18adb35c2784 Konstantin Khlebnikov   2012-05-29  1220  			references = page_check_references(page, sc);
02c6de8d757cb3 Minchan Kim             2012-10-08  1221  
dfc8d636cdb95f Johannes Weiner         2010-03-05  1222  		switch (references) {
dfc8d636cdb95f Johannes Weiner         2010-03-05  1223  		case PAGEREF_ACTIVATE:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1224  			goto activate_locked;
645747462435d8 Johannes Weiner         2010-03-05  1225  		case PAGEREF_KEEP:
98879b3b9edc16 Yang Shi                2019-07-11  1226  			stat->nr_ref_keep += nr_pages;
645747462435d8 Johannes Weiner         2010-03-05  1227  			goto keep_locked;
dfc8d636cdb95f Johannes Weiner         2010-03-05  1228  		case PAGEREF_RECLAIM:
dfc8d636cdb95f Johannes Weiner         2010-03-05  1229  		case PAGEREF_RECLAIM_CLEAN:
dfc8d636cdb95f Johannes Weiner         2010-03-05  1230  			; /* try to reclaim the page below */
dfc8d636cdb95f Johannes Weiner         2010-03-05  1231  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1232  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1233  		/*
^1da177e4c3f41 Linus Torvalds          2005-04-16  1234  		 * Anonymous process memory has backing store?
^1da177e4c3f41 Linus Torvalds          2005-04-16  1235  		 * Try to allocate it some swap space here.
802a3a92ad7ac0 Shaohua Li              2017-05-03  1236  		 * Lazyfree page could be freed directly
^1da177e4c3f41 Linus Torvalds          2005-04-16  1237  		 */
bd4c82c22c367e Huang Ying              2017-09-06  1238  		if (PageAnon(page) && PageSwapBacked(page)) {
bd4c82c22c367e Huang Ying              2017-09-06  1239  			if (!PageSwapCache(page)) {
63eb6b93ce725e Hugh Dickins            2008-11-19  1240  				if (!(sc->gfp_mask & __GFP_IO))
63eb6b93ce725e Hugh Dickins            2008-11-19  1241  					goto keep_locked;
feb889fb40fafc Linus Torvalds          2021-01-16  1242  				if (page_maybe_dma_pinned(page))
feb889fb40fafc Linus Torvalds          2021-01-16  1243  					goto keep_locked;
747552b1e71b40 Huang Ying              2017-07-06  1244  				if (PageTransHuge(page)) {
b8f593cd0896b8 Huang Ying              2017-07-06  1245  					/* cannot split THP, skip it */
747552b1e71b40 Huang Ying              2017-07-06  1246  					if (!can_split_huge_page(page, NULL))
b8f593cd0896b8 Huang Ying              2017-07-06  1247  						goto activate_locked;
747552b1e71b40 Huang Ying              2017-07-06  1248  					/*
747552b1e71b40 Huang Ying              2017-07-06  1249  					 * Split pages without a PMD map right
747552b1e71b40 Huang Ying              2017-07-06  1250  					 * away. Chances are some or all of the
747552b1e71b40 Huang Ying              2017-07-06  1251  					 * tail pages can be freed without IO.
747552b1e71b40 Huang Ying              2017-07-06  1252  					 */
747552b1e71b40 Huang Ying              2017-07-06  1253  					if (!compound_mapcount(page) &&
bd4c82c22c367e Huang Ying              2017-09-06  1254  					    split_huge_page_to_list(page,
bd4c82c22c367e Huang Ying              2017-09-06  1255  								    page_list))
747552b1e71b40 Huang Ying              2017-07-06  1256  						goto activate_locked;
747552b1e71b40 Huang Ying              2017-07-06  1257  				}
0f0746589e4be0 Minchan Kim             2017-07-06  1258  				if (!add_to_swap(page)) {
0f0746589e4be0 Minchan Kim             2017-07-06  1259  					if (!PageTransHuge(page))
98879b3b9edc16 Yang Shi                2019-07-11  1260  						goto activate_locked_split;
bd4c82c22c367e Huang Ying              2017-09-06  1261  					/* Fallback to swap normal pages */
bd4c82c22c367e Huang Ying              2017-09-06  1262  					if (split_huge_page_to_list(page,
bd4c82c22c367e Huang Ying              2017-09-06  1263  								    page_list))
0f0746589e4be0 Minchan Kim             2017-07-06  1264  						goto activate_locked;
fe490cc0fe9e6e Huang Ying              2017-09-06  1265  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
fe490cc0fe9e6e Huang Ying              2017-09-06  1266  					count_vm_event(THP_SWPOUT_FALLBACK);
fe490cc0fe9e6e Huang Ying              2017-09-06  1267  #endif
0f0746589e4be0 Minchan Kim             2017-07-06  1268  					if (!add_to_swap(page))
98879b3b9edc16 Yang Shi                2019-07-11  1269  						goto activate_locked_split;
0f0746589e4be0 Minchan Kim             2017-07-06  1270  				}
0f0746589e4be0 Minchan Kim             2017-07-06  1271  
4b793062674707 Kirill Tkhai            2020-04-01  1272  				may_enter_fs = true;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1273  
e2be15f6c3eece Mel Gorman              2013-07-03  1274  				/* Adding to swap updated mapping */
^1da177e4c3f41 Linus Torvalds          2005-04-16  1275  				mapping = page_mapping(page);
bd4c82c22c367e Huang Ying              2017-09-06  1276  			}
7751b2da6be0b5 Kirill A. Shutemov      2016-07-26  1277  		} else if (unlikely(PageTransHuge(page))) {
7751b2da6be0b5 Kirill A. Shutemov      2016-07-26  1278  			/* Split file THP */
7751b2da6be0b5 Kirill A. Shutemov      2016-07-26  1279  			if (split_huge_page_to_list(page, page_list))
7751b2da6be0b5 Kirill A. Shutemov      2016-07-26  1280  				goto keep_locked;
e2be15f6c3eece Mel Gorman              2013-07-03  1281  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1282  
98879b3b9edc16 Yang Shi                2019-07-11  1283  		/*
98879b3b9edc16 Yang Shi                2019-07-11  1284  		 * THP may get split above, need minus tail pages and update
98879b3b9edc16 Yang Shi                2019-07-11  1285  		 * nr_pages to avoid accounting tail pages twice.
98879b3b9edc16 Yang Shi                2019-07-11  1286  		 *
98879b3b9edc16 Yang Shi                2019-07-11  1287  		 * The tail pages that are added into swap cache successfully
98879b3b9edc16 Yang Shi                2019-07-11  1288  		 * reach here.
98879b3b9edc16 Yang Shi                2019-07-11  1289  		 */
98879b3b9edc16 Yang Shi                2019-07-11  1290  		if ((nr_pages > 1) && !PageTransHuge(page)) {
98879b3b9edc16 Yang Shi                2019-07-11  1291  			sc->nr_scanned -= (nr_pages - 1);
98879b3b9edc16 Yang Shi                2019-07-11  1292  			nr_pages = 1;
98879b3b9edc16 Yang Shi                2019-07-11  1293  		}
98879b3b9edc16 Yang Shi                2019-07-11  1294  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1295  		/*
^1da177e4c3f41 Linus Torvalds          2005-04-16  1296  		 * The page is mapped into the page tables of one or more
^1da177e4c3f41 Linus Torvalds          2005-04-16  1297  		 * processes. Try to unmap it here.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1298  		 */
802a3a92ad7ac0 Shaohua Li              2017-05-03  1299  		if (page_mapped(page)) {
013339df116c2e Shakeel Butt            2020-12-14  1300  			enum ttu_flags flags = TTU_BATCH_FLUSH;
1f318a9b0dc399 Jaewon Kim              2020-06-03  1301  			bool was_swapbacked = PageSwapBacked(page);
bd4c82c22c367e Huang Ying              2017-09-06  1302  
bd4c82c22c367e Huang Ying              2017-09-06  1303  			if (unlikely(PageTransHuge(page)))
bd4c82c22c367e Huang Ying              2017-09-06  1304  				flags |= TTU_SPLIT_HUGE_PMD;
1f318a9b0dc399 Jaewon Kim              2020-06-03  1305  
bd4c82c22c367e Huang Ying              2017-09-06  1306  			if (!try_to_unmap(page, flags)) {
98879b3b9edc16 Yang Shi                2019-07-11  1307  				stat->nr_unmap_fail += nr_pages;
1f318a9b0dc399 Jaewon Kim              2020-06-03  1308  				if (!was_swapbacked && PageSwapBacked(page))
1f318a9b0dc399 Jaewon Kim              2020-06-03  1309  					stat->nr_lazyfree_fail += nr_pages;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1310  				goto activate_locked;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1311  			}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1312  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1313  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1314  		if (PageDirty(page)) {
ee72886d8ed5d9 Mel Gorman              2011-10-31  1315  			/*
4eda48235011d6 Johannes Weiner         2017-02-24  1316  			 * Only kswapd can writeback filesystem pages
4eda48235011d6 Johannes Weiner         2017-02-24  1317  			 * to avoid risk of stack overflow. But avoid
4eda48235011d6 Johannes Weiner         2017-02-24  1318  			 * injecting inefficient single-page IO into
4eda48235011d6 Johannes Weiner         2017-02-24  1319  			 * flusher writeback as much as possible: only
4eda48235011d6 Johannes Weiner         2017-02-24  1320  			 * write pages when we've encountered many
4eda48235011d6 Johannes Weiner         2017-02-24  1321  			 * dirty pages, and when we've already scanned
4eda48235011d6 Johannes Weiner         2017-02-24  1322  			 * the rest of the LRU for clean pages and see
4eda48235011d6 Johannes Weiner         2017-02-24  1323  			 * the same dirty pages again (PageReclaim).
ee72886d8ed5d9 Mel Gorman              2011-10-31  1324  			 */
9de4f22a60f731 Huang Ying              2020-04-06  1325  			if (page_is_file_lru(page) &&
4eda48235011d6 Johannes Weiner         2017-02-24  1326  			    (!current_is_kswapd() || !PageReclaim(page) ||
599d0c954f91d0 Mel Gorman              2016-07-28  1327  			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
49ea7eb65e7c50 Mel Gorman              2011-10-31  1328  				/*
49ea7eb65e7c50 Mel Gorman              2011-10-31  1329  				 * Immediately reclaim when written back.
49ea7eb65e7c50 Mel Gorman              2011-10-31  1330  				 * Similar in principal to deactivate_page()
49ea7eb65e7c50 Mel Gorman              2011-10-31  1331  				 * except we already have the page isolated
49ea7eb65e7c50 Mel Gorman              2011-10-31  1332  				 * and know it's dirty
49ea7eb65e7c50 Mel Gorman              2011-10-31  1333  				 */
c4a25635b60d08 Mel Gorman              2016-07-28  1334  				inc_node_page_state(page, NR_VMSCAN_IMMEDIATE);
49ea7eb65e7c50 Mel Gorman              2011-10-31  1335  				SetPageReclaim(page);
49ea7eb65e7c50 Mel Gorman              2011-10-31  1336  
c55e8d035b28b2 Johannes Weiner         2017-02-24  1337  				goto activate_locked;
ee72886d8ed5d9 Mel Gorman              2011-10-31  1338  			}
ee72886d8ed5d9 Mel Gorman              2011-10-31  1339  
dfc8d636cdb95f Johannes Weiner         2010-03-05  1340  			if (references == PAGEREF_RECLAIM_CLEAN)
^1da177e4c3f41 Linus Torvalds          2005-04-16  1341  				goto keep_locked;
4dd4b920218326 Andrew Morton           2008-03-24  1342  			if (!may_enter_fs)
^1da177e4c3f41 Linus Torvalds          2005-04-16  1343  				goto keep_locked;
52a8363eae3872 Christoph Lameter       2006-02-01  1344  			if (!sc->may_writepage)
^1da177e4c3f41 Linus Torvalds          2005-04-16  1345  				goto keep_locked;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1346  
d950c9477d51f0 Mel Gorman              2015-09-04  1347  			/*
d950c9477d51f0 Mel Gorman              2015-09-04  1348  			 * Page is dirty. Flush the TLB if a writable entry
d950c9477d51f0 Mel Gorman              2015-09-04  1349  			 * potentially exists to avoid CPU writes after IO
d950c9477d51f0 Mel Gorman              2015-09-04  1350  			 * starts and then write it out here.
d950c9477d51f0 Mel Gorman              2015-09-04  1351  			 */
d950c9477d51f0 Mel Gorman              2015-09-04  1352  			try_to_unmap_flush_dirty();
cb16556d913f2b Yang Shi                2019-11-30  1353  			switch (pageout(page, mapping)) {
^1da177e4c3f41 Linus Torvalds          2005-04-16  1354  			case PAGE_KEEP:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1355  				goto keep_locked;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1356  			case PAGE_ACTIVATE:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1357  				goto activate_locked;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1358  			case PAGE_SUCCESS:
6c357848b44b40 Matthew Wilcox (Oracle  2020-08-14  1359) 				stat->nr_pageout += thp_nr_pages(page);
96f8bf4fb1dd26 Johannes Weiner         2020-06-03  1360  
7d3579e8e61937 KOSAKI Motohiro         2010-10-26  1361  				if (PageWriteback(page))
41ac1999c3e356 Mel Gorman              2012-05-29  1362  					goto keep;
7d3579e8e61937 KOSAKI Motohiro         2010-10-26  1363  				if (PageDirty(page))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1364  					goto keep;
7d3579e8e61937 KOSAKI Motohiro         2010-10-26  1365  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1366  				/*
^1da177e4c3f41 Linus Torvalds          2005-04-16  1367  				 * A synchronous write - probably a ramdisk.  Go
^1da177e4c3f41 Linus Torvalds          2005-04-16  1368  				 * ahead and try to reclaim the page.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1369  				 */
529ae9aaa08378 Nick Piggin             2008-08-02  1370  				if (!trylock_page(page))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1371  					goto keep;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1372  				if (PageDirty(page) || PageWriteback(page))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1373  					goto keep_locked;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1374  				mapping = page_mapping(page);
01359eb2013b4b Gustavo A. R. Silva     2020-12-14  1375  				fallthrough;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1376  			case PAGE_CLEAN:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1377  				; /* try to free the page below */
^1da177e4c3f41 Linus Torvalds          2005-04-16  1378  			}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1379  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1380  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1381  		/*
^1da177e4c3f41 Linus Torvalds          2005-04-16  1382  		 * If the page has buffers, try to free the buffer mappings
^1da177e4c3f41 Linus Torvalds          2005-04-16  1383  		 * associated with this page. If we succeed we try to free
^1da177e4c3f41 Linus Torvalds          2005-04-16  1384  		 * the page as well.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1385  		 *
^1da177e4c3f41 Linus Torvalds          2005-04-16  1386  		 * We do this even if the page is PageDirty().
^1da177e4c3f41 Linus Torvalds          2005-04-16  1387  		 * try_to_release_page() does not perform I/O, but it is
^1da177e4c3f41 Linus Torvalds          2005-04-16  1388  		 * possible for a page to have PageDirty set, but it is actually
^1da177e4c3f41 Linus Torvalds          2005-04-16  1389  		 * clean (all its buffers are clean).  This happens if the
^1da177e4c3f41 Linus Torvalds          2005-04-16  1390  		 * buffers were written out directly, with submit_bh(). ext3
^1da177e4c3f41 Linus Torvalds          2005-04-16  1391  		 * will do this, as well as the blockdev mapping.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1392  		 * try_to_release_page() will discover that cleanness and will
^1da177e4c3f41 Linus Torvalds          2005-04-16  1393  		 * drop the buffers and mark the page clean - it can be freed.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1394  		 *
^1da177e4c3f41 Linus Torvalds          2005-04-16  1395  		 * Rarely, pages can have buffers and no ->mapping.  These are
^1da177e4c3f41 Linus Torvalds          2005-04-16  1396  		 * the pages which were not successfully invalidated in
d12b8951ad17cd Yang Shi                2020-12-14  1397  		 * truncate_cleanup_page().  We try to drop those buffers here
^1da177e4c3f41 Linus Torvalds          2005-04-16  1398  		 * and if that worked, and the page is no longer mapped into
^1da177e4c3f41 Linus Torvalds          2005-04-16  1399  		 * process address space (page_count == 1) it can be freed.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1400  		 * Otherwise, leave the page on the LRU so it is swappable.
^1da177e4c3f41 Linus Torvalds          2005-04-16  1401  		 */
266cf658efcf6a David Howells           2009-04-03  1402  		if (page_has_private(page)) {
^1da177e4c3f41 Linus Torvalds          2005-04-16  1403  			if (!try_to_release_page(page, sc->gfp_mask))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1404  				goto activate_locked;
e286781d5f2e9c Nick Piggin             2008-07-25  1405  			if (!mapping && page_count(page) == 1) {
e286781d5f2e9c Nick Piggin             2008-07-25  1406  				unlock_page(page);
e286781d5f2e9c Nick Piggin             2008-07-25  1407  				if (put_page_testzero(page))
^1da177e4c3f41 Linus Torvalds          2005-04-16  1408  					goto free_it;
e286781d5f2e9c Nick Piggin             2008-07-25  1409  				else {
e286781d5f2e9c Nick Piggin             2008-07-25  1410  					/*
e286781d5f2e9c Nick Piggin             2008-07-25  1411  					 * rare race with speculative reference.
e286781d5f2e9c Nick Piggin             2008-07-25  1412  					 * the speculative reference will free
e286781d5f2e9c Nick Piggin             2008-07-25  1413  					 * this page shortly, so we may
e286781d5f2e9c Nick Piggin             2008-07-25  1414  					 * increment nr_reclaimed here (and
e286781d5f2e9c Nick Piggin             2008-07-25  1415  					 * leave it off the LRU).
e286781d5f2e9c Nick Piggin             2008-07-25  1416  					 */
e286781d5f2e9c Nick Piggin             2008-07-25  1417  					nr_reclaimed++;
e286781d5f2e9c Nick Piggin             2008-07-25  1418  					continue;
e286781d5f2e9c Nick Piggin             2008-07-25  1419  				}
e286781d5f2e9c Nick Piggin             2008-07-25  1420  			}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1421  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1422  
802a3a92ad7ac0 Shaohua Li              2017-05-03  1423  		if (PageAnon(page) && !PageSwapBacked(page)) {
802a3a92ad7ac0 Shaohua Li              2017-05-03  1424  			/* follow __remove_mapping for reference */
802a3a92ad7ac0 Shaohua Li              2017-05-03  1425  			if (!page_ref_freeze(page, 1))
49d2e9cc454436 Christoph Lameter       2006-01-08  1426  				goto keep_locked;
802a3a92ad7ac0 Shaohua Li              2017-05-03  1427  			if (PageDirty(page)) {
802a3a92ad7ac0 Shaohua Li              2017-05-03  1428  				page_ref_unfreeze(page, 1);
802a3a92ad7ac0 Shaohua Li              2017-05-03  1429  				goto keep_locked;
802a3a92ad7ac0 Shaohua Li              2017-05-03  1430  			}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1431  
802a3a92ad7ac0 Shaohua Li              2017-05-03  1432  			count_vm_event(PGLAZYFREED);
2262185c5b287f Roman Gushchin          2017-07-06  1433  			count_memcg_page_event(page, PGLAZYFREED);
b910718a948a91 Johannes Weiner         2019-11-30  1434  		} else if (!mapping || !__remove_mapping(mapping, page, true,
b910718a948a91 Johannes Weiner         2019-11-30  1435  							 sc->target_mem_cgroup))
802a3a92ad7ac0 Shaohua Li              2017-05-03  1436  			goto keep_locked;
9a1ea439b16b92 Hugh Dickins            2018-12-28  1437  
9a1ea439b16b92 Hugh Dickins            2018-12-28  1438  		unlock_page(page);
e286781d5f2e9c Nick Piggin             2008-07-25  1439  free_it:
98879b3b9edc16 Yang Shi                2019-07-11  1440  		/*
98879b3b9edc16 Yang Shi                2019-07-11  1441  		 * THP may get swapped out in a whole, need account
98879b3b9edc16 Yang Shi                2019-07-11  1442  		 * all base pages.
98879b3b9edc16 Yang Shi                2019-07-11  1443  		 */
98879b3b9edc16 Yang Shi                2019-07-11  1444  		nr_reclaimed += nr_pages;
abe4c3b50c3f25 Mel Gorman              2010-08-09  1445  
abe4c3b50c3f25 Mel Gorman              2010-08-09  1446  		/*
abe4c3b50c3f25 Mel Gorman              2010-08-09  1447  		 * Is there need to periodically free_page_list? It would
abe4c3b50c3f25 Mel Gorman              2010-08-09  1448  		 * appear not as the counts should be low
abe4c3b50c3f25 Mel Gorman              2010-08-09  1449  		 */
7ae88534cdd962 Yang Shi                2019-09-23  1450  		if (unlikely(PageTransHuge(page)))
ff45fc3ca0f3c3 Matthew Wilcox (Oracle  2020-06-03  1451) 			destroy_compound_page(page);
7ae88534cdd962 Yang Shi                2019-09-23  1452  		else
abe4c3b50c3f25 Mel Gorman              2010-08-09  1453  			list_add(&page->lru, &free_pages);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1454  		continue;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1455  
98879b3b9edc16 Yang Shi                2019-07-11  1456  activate_locked_split:
98879b3b9edc16 Yang Shi                2019-07-11  1457  		/*
98879b3b9edc16 Yang Shi                2019-07-11  1458  		 * The tail pages that are failed to add into swap cache
98879b3b9edc16 Yang Shi                2019-07-11  1459  		 * reach here.  Fixup nr_scanned and nr_pages.
98879b3b9edc16 Yang Shi                2019-07-11  1460  		 */
98879b3b9edc16 Yang Shi                2019-07-11  1461  		if (nr_pages > 1) {
98879b3b9edc16 Yang Shi                2019-07-11  1462  			sc->nr_scanned -= (nr_pages - 1);
98879b3b9edc16 Yang Shi                2019-07-11  1463  			nr_pages = 1;
98879b3b9edc16 Yang Shi                2019-07-11  1464  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1465  activate_locked:
68a22394c286a2 Rik van Riel            2008-10-18  1466  		/* Not a candidate for swapping, so reclaim swap space. */
ad6b67041a4549 Minchan Kim             2017-05-03  1467  		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
ad6b67041a4549 Minchan Kim             2017-05-03  1468  						PageMlocked(page)))
a2c43eed8334e8 Hugh Dickins            2009-01-06  1469  			try_to_free_swap(page);
309381feaee564 Sasha Levin             2014-01-23  1470  		VM_BUG_ON_PAGE(PageActive(page), page);
ad6b67041a4549 Minchan Kim             2017-05-03  1471  		if (!PageMlocked(page)) {
9de4f22a60f731 Huang Ying              2020-04-06  1472  			int type = page_is_file_lru(page);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1473  			SetPageActive(page);
98879b3b9edc16 Yang Shi                2019-07-11  1474  			stat->nr_activate[type] += nr_pages;
2262185c5b287f Roman Gushchin          2017-07-06  1475  			count_memcg_page_event(page, PGACTIVATE);
ad6b67041a4549 Minchan Kim             2017-05-03  1476  		}
^1da177e4c3f41 Linus Torvalds          2005-04-16  1477  keep_locked:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1478  		unlock_page(page);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1479  keep:
^1da177e4c3f41 Linus Torvalds          2005-04-16  1480  		list_add(&page->lru, &ret_pages);
309381feaee564 Sasha Levin             2014-01-23  1481  		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1482  	}
abe4c3b50c3f25 Mel Gorman              2010-08-09  1483  
98879b3b9edc16 Yang Shi                2019-07-11  1484  	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
98879b3b9edc16 Yang Shi                2019-07-11  1485  
747db954cab64c Johannes Weiner         2014-08-08  1486  	mem_cgroup_uncharge_list(&free_pages);
72b252aed506b8 Mel Gorman              2015-09-04  1487  	try_to_unmap_flush();
2d4894b5d2ae0f Mel Gorman              2017-11-15  1488  	free_unref_page_list(&free_pages);
abe4c3b50c3f25 Mel Gorman              2010-08-09  1489  
^1da177e4c3f41 Linus Torvalds          2005-04-16  1490  	list_splice(&ret_pages, page_list);
886cf1901db962 Kirill Tkhai            2019-05-13  1491  	count_vm_events(PGACTIVATE, pgactivate);
060f005f074791 Kirill Tkhai            2019-03-05  1492  
05ff51376f01fd Andrew Morton           2006-03-22  1493  	return nr_reclaimed;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1494  }
^1da177e4c3f41 Linus Torvalds          2005-04-16  1495  

:::::: The code at line 1071 was first introduced by commit
:::::: 730ec8c01a2bd6a311ada404398f44c142ac5e8e mm/vmscan.c: change prototype for shrink_page_list

:::::: TO: Maninder Singh <maninder1.s@...sung.com>
:::::: CC: Linus Torvalds <torvalds@...ux-foundation.org>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

Download attachment ".config.gz" of type "application/gzip" (31923 bytes)
