Message-ID: <202506142050.kfDUdARX-lkp@intel.com>
Date: Sat, 14 Jun 2025 20:28:42 +0800
From: kernel test robot <lkp@...el.com>
To: James Houghton <jthoughton@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>
Cc: oe-kbuild-all@...ts.linux.dev, Vipin Sharma <vipinsh@...gle.com>,
David Matlack <dmatlack@...gle.com>,
James Houghton <jthoughton@...gle.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/7] KVM: x86/mmu: Track TDP MMU NX huge pages separately
Hi James,
kernel test robot noticed the following build errors:
[auto build test ERROR on 8046d29dde17002523f94d3e6e0ebe486ce52166]
url: https://github.com/intel-lab-lkp/linux/commits/James-Houghton/KVM-x86-mmu-Track-TDP-MMU-NX-huge-pages-separately/20250614-042620
base: 8046d29dde17002523f94d3e6e0ebe486ce52166
patch link: https://lore.kernel.org/r/20250613202315.2790592-2-jthoughton%40google.com
patch subject: [PATCH v4 1/7] KVM: x86/mmu: Track TDP MMU NX huge pages separately
config: i386-randconfig-003-20250614 (https://download.01.org/0day-ci/archive/20250614/202506142050.kfDUdARX-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250614/202506142050.kfDUdARX-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506142050.kfDUdARX-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/x86/kvm/mmu/mmu.c: In function 'kvm_recover_nx_huge_pages':
>> arch/x86/kvm/mmu/mmu.c:7609:38: error: 'KVM_TDP_MMU' undeclared (first use in this function)
7609 | else if (mmu_type == KVM_TDP_MMU)
| ^~~~~~~~~~~
arch/x86/kvm/mmu/mmu.c:7609:38: note: each undeclared identifier is reported only once for each function it appears in
vim +/KVM_TDP_MMU +7609 arch/x86/kvm/mmu/mmu.c
7537
7538 static void kvm_recover_nx_huge_pages(struct kvm *kvm,
7539 enum kvm_mmu_type mmu_type)
7540 {
7541 unsigned long to_zap = nx_huge_pages_to_zap(kvm, mmu_type);
7542 struct list_head *nx_huge_pages;
7543 struct kvm_memory_slot *slot;
7544 struct kvm_mmu_page *sp;
7545 LIST_HEAD(invalid_list);
7546 bool flush = false;
7547 int rcu_idx;
7548
7549 nx_huge_pages = &kvm->arch.possible_nx_huge_pages[mmu_type].pages;
7550
7551 rcu_idx = srcu_read_lock(&kvm->srcu);
7552 write_lock(&kvm->mmu_lock);
7553
7554 /*
7555 * Zapping TDP MMU shadow pages, including the remote TLB flush, must
7556 * be done under RCU protection, because the pages are freed via RCU
7557 * callback.
7558 */
7559 rcu_read_lock();
7560
7561 for ( ; to_zap; --to_zap) {
7562 if (list_empty(nx_huge_pages))
7563 break;
7564
7565 /*
7566 * We use a separate list instead of just using active_mmu_pages
  7567	 * because the number of shadow pages that can be replaced with an
7568 * NX huge page is expected to be relatively small compared to
7569 * the total number of shadow pages. And because the TDP MMU
7570 * doesn't use active_mmu_pages.
7571 */
7572 sp = list_first_entry(nx_huge_pages,
7573 struct kvm_mmu_page,
7574 possible_nx_huge_page_link);
7575 WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
7576 WARN_ON_ONCE(!sp->role.direct);
7577
7578 /*
7579 * Unaccount and do not attempt to recover any NX Huge Pages
7580 * that are being dirty tracked, as they would just be faulted
7581 * back in as 4KiB pages. The NX Huge Pages in this slot will be
7582 * recovered, along with all the other huge pages in the slot,
7583 * when dirty logging is disabled.
7584 *
7585 * Since gfn_to_memslot() is relatively expensive, it helps to
  7586	 * skip it if the test cannot possibly return true. On the
7587 * other hand, if any memslot has logging enabled, chances are
7588 * good that all of them do, in which case unaccount_nx_huge_page()
7589 * is much cheaper than zapping the page.
7590 *
7591 * If a memslot update is in progress, reading an incorrect value
7592 * of kvm->nr_memslots_dirty_logging is not a problem: if it is
7593 * becoming zero, gfn_to_memslot() will be done unnecessarily; if
7594 * it is becoming nonzero, the page will be zapped unnecessarily.
7595 * Either way, this only affects efficiency in racy situations,
7596 * and not correctness.
7597 */
7598 slot = NULL;
7599 if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
7600 struct kvm_memslots *slots;
7601
7602 slots = kvm_memslots_for_spte_role(kvm, sp->role);
7603 slot = __gfn_to_memslot(slots, sp->gfn);
7604 WARN_ON_ONCE(!slot);
7605 }
7606
7607 if (slot && kvm_slot_dirty_track_enabled(slot))
7608 unaccount_nx_huge_page(kvm, sp);
> 7609 else if (mmu_type == KVM_TDP_MMU)
7610 flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
7611 else
7612 kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
7613 WARN_ON_ONCE(sp->nx_huge_page_disallowed);
7614
7615 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
7616 kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
7617 rcu_read_unlock();
7618
7619 cond_resched_rwlock_write(&kvm->mmu_lock);
7620 flush = false;
7621
7622 rcu_read_lock();
7623 }
7624 }
7625 kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
7626
7627 rcu_read_unlock();
7628
7629 write_unlock(&kvm->mmu_lock);
7630 srcu_read_unlock(&kvm->srcu, rcu_idx);
7631 }
7632
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki