Message-ID: <202002150049.JtbQNZ7x%lkp@intel.com>
Date: Sat, 15 Feb 2020 00:53:10 +0800
From: kbuild test robot <lkp@...el.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: kbuild-all@...ts.01.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Dave Chinner <david@...morbit.com>,
Yafang Shao <laoar.shao@...il.com>,
Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>, kernel-team@...com
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker
LRU
Hi Johannes,
I love your patch! Yet something to improve:
[auto build test ERROR on vfs/for-next]
[also build test ERROR on linux/master linus/master v5.6-rc1 next-20200213]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. We also suggest using the '--base' option to specify the
base tree in git format-patch; please see https://stackoverflow.com/a/37406982]
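
For example, one possible way to record the base (a hypothetical invocation; pick the commit and patch range that match your series):

        git format-patch --base=auto -1
        # or name the base commit explicitly:
        git format-patch --base=<commit-sha> -1
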
url: https://github.com/0day-ci/linux/commits/Johannes-Weiner/vfs-keep-inodes-with-page-cache-off-the-inode-shrinker-LRU/20200214-083756
base: https://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs.git for-next
config: m68k-allmodconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=m68k
If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@...el.com>
All errors (new ones prefixed by >>):
   fs/dax.c: In function 'grab_mapping_entry':
>> fs/dax.c:556:28: error: 'struct address_space' has no member named 'inode'
     inode_pages_clear(mapping->inode);
                              ^~
   fs/dax.c:558:26: error: 'struct address_space' has no member named 'inode'
     inode_pages_set(mapping->inode);
                            ^~
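
For reference, a trimmed sketch of struct address_space from v5.6-era include/linux/fs.h (field order and comments approximate, not an authoritative excerpt): there is no 'inode' member, and the owning inode is reached through the 'host' back-pointer.

        struct address_space {
                struct inode            *host;          /* owning inode */
                struct xarray           i_pages;        /* cached pages */
                /* ... */
                unsigned long           nrpages;        /* number of page entries */
                unsigned long           nrexceptional;  /* shadow or DAX entries */
                /* ... */
        };
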
vim +556 fs/dax.c
   446  
   447  /*
   448   * Find page cache entry at given index. If it is a DAX entry, return it
   449   * with the entry locked. If the page cache doesn't contain an entry at
   450   * that index, add a locked empty entry.
   451   *
   452   * When requesting an entry with size DAX_PMD, grab_mapping_entry() will
   453   * either return that locked entry or will return VM_FAULT_FALLBACK.
   454   * This will happen if there are any PTE entries within the PMD range
   455   * that we are requesting.
   456   *
   457   * We always favor PTE entries over PMD entries. There isn't a flow where we
   458   * evict PTE entries in order to 'upgrade' them to a PMD entry. A PMD
   459   * insertion will fail if it finds any PTE entries already in the tree, and a
   460   * PTE insertion will cause an existing PMD entry to be unmapped and
   461   * downgraded to PTE entries. This happens for both PMD zero pages as
   462   * well as PMD empty entries.
   463   *
   464   * The exception to this downgrade path is for PMD entries that have
   465   * real storage backing them. We will leave these real PMD entries in
   466   * the tree, and PTE writes will simply dirty the entire PMD entry.
   467   *
   468   * Note: Unlike filemap_fault() we don't honor FAULT_FLAG_RETRY flags. For
   469   * persistent memory the benefit is doubtful. We can add that later if we can
   470   * show it helps.
   471   *
   472   * On error, this function does not return an ERR_PTR. Instead it returns
   473   * a VM_FAULT code, encoded as an xarray internal entry. The ERR_PTR values
   474   * overlap with xarray value entries.
   475   */
   476  static void *grab_mapping_entry(struct xa_state *xas,
   477                  struct address_space *mapping, unsigned int order)
   478  {
   479          unsigned long index = xas->xa_index;
   480          bool pmd_downgrade = false; /* splitting PMD entry into PTE entries? */
   481          int populated;
   482          void *entry;
   483  
   484  retry:
   485          populated = 0;
   486          xas_lock_irq(xas);
   487          entry = get_unlocked_entry(xas, order);
   488  
   489          if (entry) {
   490                  if (dax_is_conflict(entry))
   491                          goto fallback;
   492                  if (!xa_is_value(entry)) {
   493                          xas_set_err(xas, EIO);
   494                          goto out_unlock;
   495                  }
   496  
   497                  if (order == 0) {
   498                          if (dax_is_pmd_entry(entry) &&
   499                              (dax_is_zero_entry(entry) ||
   500                               dax_is_empty_entry(entry))) {
   501                                  pmd_downgrade = true;
   502                          }
   503                  }
   504          }
   505  
   506          if (pmd_downgrade) {
   507                  /*
   508                   * Make sure 'entry' remains valid while we drop
   509                   * the i_pages lock.
   510                   */
   511                  dax_lock_entry(xas, entry);
   512  
   513                  /*
   514                   * Besides huge zero pages the only other thing that gets
   515                   * downgraded are empty entries which don't need to be
   516                   * unmapped.
   517                   */
   518                  if (dax_is_zero_entry(entry)) {
   519                          xas_unlock_irq(xas);
   520                          unmap_mapping_pages(mapping,
   521                                          xas->xa_index & ~PG_PMD_COLOUR,
   522                                          PG_PMD_NR, false);
   523                          xas_reset(xas);
   524                          xas_lock_irq(xas);
   525                  }
   526  
   527                  dax_disassociate_entry(entry, mapping, false);
   528                  xas_store(xas, NULL); /* undo the PMD join */
   529                  dax_wake_entry(xas, entry, true);
   530                  mapping->nrexceptional--;
   531                  if (mapping_empty(mapping))
   532                          populated = -1;
   533                  entry = NULL;
   534                  xas_set(xas, index);
   535          }
   536  
   537          if (entry) {
   538                  dax_lock_entry(xas, entry);
   539          } else {
   540                  unsigned long flags = DAX_EMPTY;
   541  
   542                  if (order > 0)
   543                          flags |= DAX_PMD;
   544                  entry = dax_make_entry(pfn_to_pfn_t(0), flags);
   545                  dax_lock_entry(xas, entry);
   546                  if (xas_error(xas))
   547                          goto out_unlock;
   548                  if (mapping_empty(mapping))
   549                          populated++;
   550                  mapping->nrexceptional++;
   551          }
   552  
   553  out_unlock:
   554          xas_unlock_irq(xas);
   555          if (populated == -1)
 > 556                  inode_pages_clear(mapping->inode);
   557          else if (populated == 1)
   558                  inode_pages_set(mapping->inode);
   559          if (xas_nomem(xas, mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM))
   560                  goto retry;
   561          if (xas->xa_node == XA_ERROR(-ENOMEM))
   562                  return xa_mk_internal(VM_FAULT_OOM);
   563          if (xas_error(xas))
   564                  return xa_mk_internal(VM_FAULT_SIGBUS);
   565          return entry;
   566  fallback:
   567          xas_unlock_irq(xas);
   568          return xa_mk_internal(VM_FAULT_FALLBACK);
   569  }
   570  
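
A minimal sketch of one possible fix, assuming the intent at lines 556/558 is to pass the owning inode (struct address_space names that back-pointer 'host', not 'inode'); whether this matches the patch's intent is for the author to confirm:

        out_unlock:
                xas_unlock_irq(xas);
                if (populated == -1)
                        inode_pages_clear(mapping->host);  /* was: mapping->inode */
                else if (populated == 1)
                        inode_pages_set(mapping->host);    /* was: mapping->inode */
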
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org