Message-ID: <202510300341.cOYqY4ki-lkp@intel.com>
Date: Thu, 30 Oct 2025 03:25:46 +0800
From: kernel test robot <lkp@...el.com>
To: Kairui Song <ryncsn@...il.com>, linux-mm@...ck.org
Cc: llvm@...ts.linux.dev, oe-kbuild-all@...ts.linux.dev,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Baoquan He <bhe@...hat.com>, Barry Song <baohua@...nel.org>,
	Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Yosry Ahmed <yosry.ahmed@...ux.dev>,
	David Hildenbrand <david@...hat.com>,
	Youngjun Park <youngjun.park@....com>,
	Hugh Dickins <hughd@...gle.com>,
	Baolin Wang <baolin.wang@...ux.alibaba.com>,
	"Huang, Ying" <ying.huang@...ux.alibaba.com>,
	Kemeng Shi <shikemeng@...weicloud.com>,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	linux-kernel@...r.kernel.org, Kairui Song <kasong@...cent.com>
Subject: Re: [PATCH 14/19] mm, swap: sanitize swap entry management workflow

Hi Kairui,

kernel test robot noticed the following build errors:

[auto build test ERROR on f30d294530d939fa4b77d61bc60f25c4284841fa]

url:    https://github.com/intel-lab-lkp/linux/commits/Kairui-Song/mm-swap-rename-__read_swap_cache_async-to-swap_cache_alloc_folio/20251030-000506
base:   f30d294530d939fa4b77d61bc60f25c4284841fa
patch link:    https://lore.kernel.org/r/20251029-swap-table-p2-v1-14-3d43f3b6ec32%40tencent.com
patch subject: [PATCH 14/19] mm, swap: sanitize swap entry management workflow
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20251030/202510300341.cOYqY4ki-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251030/202510300341.cOYqY4ki-lkp@intel.com/reproduce)
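
For a rough local approximation of that W=1 build (a sketch only — the
downloadable reproduce script above is the authoritative recipe; this
assumes a checkout at the base commit with the series applied, and a
clang toolchain on the path):

  make LLVM=1 ARCH=x86_64 allnoconfig
  make LLVM=1 ARCH=x86_64 W=1 mm/shmem.o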

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202510300341.cOYqY4ki-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from mm/shmem.c:44:
   mm/swap.h:465:1: warning: non-void function does not return a value [-Wreturn-type]
     465 | }
         | ^
>> mm/shmem.c:1649:29: error: too few arguments to function call, expected 2, have 1
    1649 |         if (!folio_alloc_swap(folio)) {
         |              ~~~~~~~~~~~~~~~~      ^
   mm/swap.h:388:19: note: 'folio_alloc_swap' declared here
     388 | static inline int folio_alloc_swap(struct folio *folio, gfp_t gfp)
         |                   ^                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   1 warning and 1 error generated.
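
For reference, a minimal fixup sketch (untested, and only a guess at
intent: it takes the two-argument prototype quoted above as the final
form, and GFP_KERNEL below is an assumed placeholder, not necessarily
the gfp mask this series wants in the writeout path):

	-	if (!folio_alloc_swap(folio)) {
	+	if (!folio_alloc_swap(folio, GFP_KERNEL)) {

The separate -Wreturn-type warning at mm/swap.h:465 likewise suggests
an inline helper added by this series whose fallback path is missing an
explicit return value.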


vim +1649 mm/shmem.c

^1da177e4c3f41 Linus Torvalds          2005-04-16  1563  
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1564) /**
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1565)  * shmem_writeout - Write the folio to swap
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1566)  * @folio: The folio to write
44b1b073eb3614 Christoph Hellwig       2025-06-10  1567   * @plug: swap plug
44b1b073eb3614 Christoph Hellwig       2025-06-10  1568   * @folio_list: list to put back folios on split
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1569)  *
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1570)  * Move the folio from the page cache to the swap cache.
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1571)  */
44b1b073eb3614 Christoph Hellwig       2025-06-10  1572  int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
44b1b073eb3614 Christoph Hellwig       2025-06-10  1573  		struct list_head *folio_list)
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1574) {
8ccee8c19c605a Luis Chamberlain        2023-03-09  1575  	struct address_space *mapping = folio->mapping;
8ccee8c19c605a Luis Chamberlain        2023-03-09  1576  	struct inode *inode = mapping->host;
8ccee8c19c605a Luis Chamberlain        2023-03-09  1577  	struct shmem_inode_info *info = SHMEM_I(inode);
2c6efe9cf2d784 Luis Chamberlain        2023-03-09  1578  	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
6922c0c7abd387 Hugh Dickins            2011-08-03  1579  	pgoff_t index;
650180760be6bb Baolin Wang             2024-08-12  1580  	int nr_pages;
809bc86517cc40 Baolin Wang             2024-08-12  1581  	bool split = false;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1582  
adae46ac1e38a2 Ricardo Cañuelo Navarro 2025-02-26  1583  	if ((info->flags & VM_LOCKED) || sbinfo->noswap)
9a976f0c847b67 Luis Chamberlain        2023-03-09  1584  		goto redirty;
9a976f0c847b67 Luis Chamberlain        2023-03-09  1585  
9a976f0c847b67 Luis Chamberlain        2023-03-09  1586  	if (!total_swap_pages)
9a976f0c847b67 Luis Chamberlain        2023-03-09  1587  		goto redirty;
9a976f0c847b67 Luis Chamberlain        2023-03-09  1588  
1e6decf30af5c5 Hugh Dickins            2021-09-02  1589  	/*
809bc86517cc40 Baolin Wang             2024-08-12  1590  	 * If CONFIG_THP_SWAP is not enabled, the large folio should be
809bc86517cc40 Baolin Wang             2024-08-12  1591  	 * split when swapping.
809bc86517cc40 Baolin Wang             2024-08-12  1592  	 *
809bc86517cc40 Baolin Wang             2024-08-12  1593  	 * And shrinkage of pages beyond i_size does not split swap, so
809bc86517cc40 Baolin Wang             2024-08-12  1594  	 * swapout of a large folio crossing i_size needs to split too
809bc86517cc40 Baolin Wang             2024-08-12  1595  	 * (unless fallocate has been used to preallocate beyond EOF).
1e6decf30af5c5 Hugh Dickins            2021-09-02  1596  	 */
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1597) 	if (folio_test_large(folio)) {
809bc86517cc40 Baolin Wang             2024-08-12  1598  		index = shmem_fallocend(inode,
809bc86517cc40 Baolin Wang             2024-08-12  1599  			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));
809bc86517cc40 Baolin Wang             2024-08-12  1600  		if ((index > folio->index && index < folio_next_index(folio)) ||
809bc86517cc40 Baolin Wang             2024-08-12  1601  		    !IS_ENABLED(CONFIG_THP_SWAP))
809bc86517cc40 Baolin Wang             2024-08-12  1602  			split = true;
809bc86517cc40 Baolin Wang             2024-08-12  1603  	}
809bc86517cc40 Baolin Wang             2024-08-12  1604  
809bc86517cc40 Baolin Wang             2024-08-12  1605  	if (split) {
809bc86517cc40 Baolin Wang             2024-08-12  1606  try_split:
1e6decf30af5c5 Hugh Dickins            2021-09-02  1607  		/* Ensure the subpages are still dirty */
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1608) 		folio_test_set_dirty(folio);
44b1b073eb3614 Christoph Hellwig       2025-06-10  1609  		if (split_folio_to_list(folio, folio_list))
1e6decf30af5c5 Hugh Dickins            2021-09-02  1610  			goto redirty;
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1611) 		folio_clear_dirty(folio);
1e6decf30af5c5 Hugh Dickins            2021-09-02  1612  	}
1e6decf30af5c5 Hugh Dickins            2021-09-02  1613  
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1614) 	index = folio->index;
650180760be6bb Baolin Wang             2024-08-12  1615  	nr_pages = folio_nr_pages(folio);
1635f6a74152f1 Hugh Dickins            2012-05-29  1616  
1635f6a74152f1 Hugh Dickins            2012-05-29  1617  	/*
1635f6a74152f1 Hugh Dickins            2012-05-29  1618  	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
1635f6a74152f1 Hugh Dickins            2012-05-29  1619  	 * value into swapfile.c, the only way we can correctly account for a
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1620) 	 * fallocated folio arriving here is now to initialize it and write it.
1aac1400319d30 Hugh Dickins            2012-05-29  1621  	 *
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1622) 	 * That's okay for a folio already fallocated earlier, but if we have
1aac1400319d30 Hugh Dickins            2012-05-29  1623  	 * not yet completed the fallocation, then (a) we want to keep track
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1624) 	 * of this folio in case we have to undo it, and (b) it may not be a
1aac1400319d30 Hugh Dickins            2012-05-29  1625  	 * good idea to continue anyway, once we're pushing into swap.  So
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1626) 	 * reactivate the folio, and let shmem_fallocate() quit when too many.
1635f6a74152f1 Hugh Dickins            2012-05-29  1627  	 */
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1628) 	if (!folio_test_uptodate(folio)) {
1aac1400319d30 Hugh Dickins            2012-05-29  1629  		if (inode->i_private) {
1aac1400319d30 Hugh Dickins            2012-05-29  1630  			struct shmem_falloc *shmem_falloc;
1aac1400319d30 Hugh Dickins            2012-05-29  1631  			spin_lock(&inode->i_lock);
1aac1400319d30 Hugh Dickins            2012-05-29  1632  			shmem_falloc = inode->i_private;
1aac1400319d30 Hugh Dickins            2012-05-29  1633  			if (shmem_falloc &&
8e205f779d1443 Hugh Dickins            2014-07-23  1634  			    !shmem_falloc->waitq &&
1aac1400319d30 Hugh Dickins            2012-05-29  1635  			    index >= shmem_falloc->start &&
1aac1400319d30 Hugh Dickins            2012-05-29  1636  			    index < shmem_falloc->next)
d77b90d2b26426 Baolin Wang             2024-12-19  1637  				shmem_falloc->nr_unswapped += nr_pages;
1aac1400319d30 Hugh Dickins            2012-05-29  1638  			else
1aac1400319d30 Hugh Dickins            2012-05-29  1639  				shmem_falloc = NULL;
1aac1400319d30 Hugh Dickins            2012-05-29  1640  			spin_unlock(&inode->i_lock);
1aac1400319d30 Hugh Dickins            2012-05-29  1641  			if (shmem_falloc)
1aac1400319d30 Hugh Dickins            2012-05-29  1642  				goto redirty;
1aac1400319d30 Hugh Dickins            2012-05-29  1643  		}
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1644) 		folio_zero_range(folio, 0, folio_size(folio));
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1645) 		flush_dcache_folio(folio);
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1646) 		folio_mark_uptodate(folio);
1635f6a74152f1 Hugh Dickins            2012-05-29  1647  	}
1635f6a74152f1 Hugh Dickins            2012-05-29  1648  
7d14492199f93c Kairui Song             2025-10-24 @1649  	if (!folio_alloc_swap(folio)) {
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1650  		bool first_swapped = shmem_recalc_inode(inode, 0, nr_pages);
6344a6d9ce13ae Hugh Dickins            2025-07-16  1651  		int error;
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1652  
b1dea800ac3959 Hugh Dickins            2011-05-11  1653  		/*
b1dea800ac3959 Hugh Dickins            2011-05-11  1654  		 * Add inode to shmem_unuse()'s list of swapped-out inodes,
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1655) 		 * if it's not already there.  Do it now before the folio is
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1656  		 * removed from page cache, when its pagelock no longer
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1657  		 * protects the inode from eviction.  And do it now, after
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1658  		 * we've incremented swapped, because shmem_unuse() will
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1659  		 * prune a !swapped inode from the swaplist.
b1dea800ac3959 Hugh Dickins            2011-05-11  1660  		 */
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1661  		if (first_swapped) {
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1662  			spin_lock(&shmem_swaplist_lock);
05bf86b4ccfd0f Hugh Dickins            2011-05-14  1663  			if (list_empty(&info->swaplist))
b56a2d8af9147a Vineeth Remanan Pillai  2019-03-05  1664  				list_add(&info->swaplist, &shmem_swaplist);
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1665  			spin_unlock(&shmem_swaplist_lock);
ea693aaa5ce5ad Hugh Dickins            2025-07-16  1666  		}
b1dea800ac3959 Hugh Dickins            2011-05-11  1667  
80d6ed40156385 Kairui Song             2025-10-29  1668  		folio_dup_swap(folio, NULL);
b487a2da3575b6 Kairui Song             2025-03-14  1669  		shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
267a4c76bbdb95 Hugh Dickins            2015-12-11  1670  
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1671) 		BUG_ON(folio_mapped(folio));
6344a6d9ce13ae Hugh Dickins            2025-07-16  1672  		error = swap_writeout(folio, plug);
6344a6d9ce13ae Hugh Dickins            2025-07-16  1673  		if (error != AOP_WRITEPAGE_ACTIVATE) {
6344a6d9ce13ae Hugh Dickins            2025-07-16  1674  			/* folio has been unlocked */
6344a6d9ce13ae Hugh Dickins            2025-07-16  1675  			return error;
6344a6d9ce13ae Hugh Dickins            2025-07-16  1676  		}
6344a6d9ce13ae Hugh Dickins            2025-07-16  1677  
6344a6d9ce13ae Hugh Dickins            2025-07-16  1678  		/*
6344a6d9ce13ae Hugh Dickins            2025-07-16  1679  		 * The intention here is to avoid holding on to the swap when
6344a6d9ce13ae Hugh Dickins            2025-07-16  1680  		 * zswap was unable to compress and unable to writeback; but
6344a6d9ce13ae Hugh Dickins            2025-07-16  1681  		 * it will be appropriate if other reactivate cases are added.
6344a6d9ce13ae Hugh Dickins            2025-07-16  1682  		 */
6344a6d9ce13ae Hugh Dickins            2025-07-16  1683  		error = shmem_add_to_page_cache(folio, mapping, index,
6344a6d9ce13ae Hugh Dickins            2025-07-16  1684  				swp_to_radix_entry(folio->swap),
6344a6d9ce13ae Hugh Dickins            2025-07-16  1685  				__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
6344a6d9ce13ae Hugh Dickins            2025-07-16  1686  		/* Swap entry might be erased by racing shmem_free_swap() */
6344a6d9ce13ae Hugh Dickins            2025-07-16  1687  		if (!error) {
6344a6d9ce13ae Hugh Dickins            2025-07-16  1688  			shmem_recalc_inode(inode, 0, -nr_pages);
80d6ed40156385 Kairui Song             2025-10-29  1689  			folio_put_swap(folio, NULL);
6344a6d9ce13ae Hugh Dickins            2025-07-16  1690  		}
6344a6d9ce13ae Hugh Dickins            2025-07-16  1691  
6344a6d9ce13ae Hugh Dickins            2025-07-16  1692  		/*
fd8d4f862f8c27 Kairui Song             2025-09-17  1693  		 * The swap_cache_del_folio() below could be left for
6344a6d9ce13ae Hugh Dickins            2025-07-16  1694  		 * shrink_folio_list()'s folio_free_swap() to dispose of;
6344a6d9ce13ae Hugh Dickins            2025-07-16  1695  		 * but I'm a little nervous about letting this folio out of
6344a6d9ce13ae Hugh Dickins            2025-07-16  1696  		 * shmem_writeout() in a hybrid half-tmpfs-half-swap state
6344a6d9ce13ae Hugh Dickins            2025-07-16  1697  		 * e.g. folio_mapping(folio) might give an unexpected answer.
6344a6d9ce13ae Hugh Dickins            2025-07-16  1698  		 */
fd8d4f862f8c27 Kairui Song             2025-09-17  1699  		swap_cache_del_folio(folio);
6344a6d9ce13ae Hugh Dickins            2025-07-16  1700  		goto redirty;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1701  	}
b487a2da3575b6 Kairui Song             2025-03-14  1702  	if (nr_pages > 1)
b487a2da3575b6 Kairui Song             2025-03-14  1703  		goto try_split;
^1da177e4c3f41 Linus Torvalds          2005-04-16  1704  redirty:
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1705) 	folio_mark_dirty(folio);
f530ed0e2d01aa Matthew Wilcox (Oracle  2022-09-02  1706) 	return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
^1da177e4c3f41 Linus Torvalds          2005-04-16  1707  }
7b73c12c6ebf00 Matthew Wilcox (Oracle  2025-04-02  1708) EXPORT_SYMBOL_GPL(shmem_writeout);
^1da177e4c3f41 Linus Torvalds          2005-04-16  1709  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
