Message-ID: <202601160405.2XZZtwqw-lkp@intel.com>
Date: Fri, 16 Jan 2026 04:40:21 +0800
From: kernel test robot <lkp@...el.com>
To: mpenttil@...hat.com, linux-mm@...ck.org
Cc: oe-kbuild-all@...ts.linux.dev, linux-kernel@...r.kernel.org,
Mika Penttilä <mpenttil@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Alistair Popple <apopple@...dia.com>,
Balbir Singh <balbirs@...dia.com>, Zi Yan <ziy@...dia.com>,
Matthew Brost <matthew.brost@...el.com>
Subject: Re: [PATCH 1/3] mm: unified hmm fault and migrate device pagewalk paths
Hi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-nonmm-unstable]
[also build test WARNING on linus/master v6.19-rc5 next-20260115]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/mpenttil-redhat-com/mm-unified-hmm-fault-and-migrate-device-pagewalk-paths/20260114-172232
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-nonmm-unstable
patch link: https://lore.kernel.org/r/20260114091923.3950465-2-mpenttil%40redhat.com
patch subject: [PATCH 1/3] mm: unified hmm fault and migrate device pagewalk paths
config: x86_64-randconfig-123-20260115 (https://download.01.org/0day-ci/archive/20260116/202601160405.2XZZtwqw-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260116/202601160405.2XZZtwqw-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601160405.2XZZtwqw-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
mm/migrate_device.c:179:25: sparse: sparse: context imbalance in 'migrate_vma_collect_huge_pmd' - unexpected unlock
mm/migrate_device.c:262:27: sparse: sparse: context imbalance in 'migrate_vma_collect_pmd' - different lock contexts for basic block
>> mm/migrate_device.c:743:18: sparse: sparse: Initializer entry defined twice
mm/migrate_device.c:746:18: sparse: also defined here
mm/migrate_device.c:915:16: sparse: sparse: context imbalance in 'migrate_vma_insert_huge_pmd_page' - different lock contexts for basic block
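The two new warnings at mm/migrate_device.c:743 and :746 point at a duplicated
designated initializer: ".migrate = args" appears twice in the same struct
initializer (see the excerpt below). C allows this, but the last entry silently
wins, which is why sparse (and gcc's -Woverride-init) complains. A minimal
sketch of the likely fix, assuming no other fields need to change, is simply to
keep a single initializer:

	struct hmm_range range = {
		.notifier          = NULL,
		.start             = args->start,
		.end               = args->end,
		.hmm_pfns          = args->src,
		.dev_private_owner = args->pgmap_owner,
		.migrate           = args,	/* set exactly once */
	};

The remaining context-imbalance warnings come from sparse's lock-context
tracking: a function that releases a lock it did not visibly acquire, or that
acquires one on only some paths, usually needs __acquires()/__releases()
annotations (or a __cond_lock() hint) before sparse can follow the flow. A
hedged example of the annotation style only, not a patch for this file:

	/* Tells sparse this helper exits with ptl released. */
	static void example_drop(spinlock_t *ptl) __releases(ptl)
	{
		spin_unlock(ptl);
	}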
vim +743 mm/migrate_device.c
670
671 /**
672 * migrate_vma_setup() - prepare to migrate a range of memory
673 * @args: contains the vma, start, and pfns arrays for the migration
674 *
675 * Returns: negative errno on failure, 0 on success. A successful return
676 * does not guarantee that any pages were actually migrated.
677 *
678 * Prepare to migrate a range of virtual addresses by collecting all
679 * the pages backing each virtual address in the range, saving them inside the
680 * src array. Then lock those pages and unmap them. Once the pages are locked
681 * and unmapped, check whether each page is pinned or not. Pages that aren't
682 * pinned have the MIGRATE_PFN_MIGRATE flag set (by this function) in the
683 * corresponding src array entry. Pages that are pinned are then restored
684 * by remapping and unlocking them.
685 *
686 * The caller should then allocate destination memory and copy source memory to
687 * it for all those entries (i.e. with MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE
688 * flag set). Once these are allocated and copied, the caller must update each
689 * corresponding entry in the dst array with the pfn value of the destination
690 * page and with MIGRATE_PFN_VALID. Destination pages must be locked via
691 * lock_page().
692 *
693 * Note that the caller does not have to migrate all the pages that are marked
694 * with the MIGRATE_PFN_MIGRATE flag in the src array unless this is a migration from
695 * device memory to system memory. If the caller cannot migrate a device page
696 * back to system memory, then it must return VM_FAULT_SIGBUS, which has severe
697 * consequences for the userspace process, so it must be avoided if at all
698 * possible.
699 *
700 * For empty CPU page table entries (i.e. where pte_none() or pmd_none() is
701 * true), the MIGRATE_PFN_MIGRATE flag is set in the corresponding src array
702 * entry, allowing the caller to allocate device memory for those unbacked
703 * virtual addresses. To do so, the caller simply has to allocate device
704 * memory and properly set the destination entry like for regular migration.
705 * Note that this can still fail, so the device driver must check whether the
706 * migration was successful for those entries after calling
707 * migrate_vma_pages(), just like for regular migration.
708 *
709 * After that, the caller must call migrate_vma_pages() to go over each entry
710 * in the src array that has the MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE flag
711 * set. If the corresponding entry in the dst array has the MIGRATE_PFN_VALID
712 * flag set, migrate_vma_pages() migrates the struct page information from the
713 * source struct page to the destination struct page. If it fails to migrate the
714 * struct page information, then it clears the MIGRATE_PFN_MIGRATE flag in the
715 * src array.
716 *
717 * At this point all successfully migrated pages have an entry in the src
718 * array with MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE flag set and the dst
719 * array entry with MIGRATE_PFN_VALID flag set.
720 *
721 * Once migrate_vma_pages() returns, the caller may inspect which pages were
722 * successfully migrated, and which were not. Successfully migrated pages will
723 * have the MIGRATE_PFN_MIGRATE flag set for their src array entry.
724 *
725 * It is safe to update the device page table after migrate_vma_pages() because
726 * both destination and source page are still locked, and the mmap_lock is held
727 * in read mode (hence no one can unmap the range being migrated).
728 *
729 * Once the caller is done cleaning up and updating its device page table (if
730 * it chose to do so; this is not an obligation), it finally calls
731 * migrate_vma_finalize() to update the CPU page table to point to new pages
732 * for successfully migrated pages or otherwise restore the CPU page table to
733 * point to the original source pages.
734 */
735 int migrate_vma_setup(struct migrate_vma *args)
736 {
737         int ret;
738         long nr_pages = (args->end - args->start) >> PAGE_SHIFT;
739         struct hmm_range range = {
740                 .notifier = NULL,
741                 .start = args->start,
742                 .end = args->end,
> 743                 .migrate = args,
744                 .hmm_pfns = args->src,
745                 .dev_private_owner = args->pgmap_owner,
746                 .migrate = args
747         };
748
749         args->start &= PAGE_MASK;
750         args->end &= PAGE_MASK;
751         if (!args->vma || is_vm_hugetlb_page(args->vma) ||
752             (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma))
753                 return -EINVAL;
754         if (nr_pages <= 0)
755                 return -EINVAL;
756         if (args->start < args->vma->vm_start ||
757             args->start >= args->vma->vm_end)
758                 return -EINVAL;
759         if (args->end <= args->vma->vm_start || args->end > args->vma->vm_end)
760                 return -EINVAL;
761         if (!args->src || !args->dst)
762                 return -EINVAL;
763         if (args->fault_page && !is_device_private_page(args->fault_page))
764                 return -EINVAL;
765         if (args->fault_page && !PageLocked(args->fault_page))
766                 return -EINVAL;
767
768         memset(args->src, 0, sizeof(*args->src) * nr_pages);
769         args->cpages = 0;
770         args->npages = 0;
771
772         if (args->flags & MIGRATE_VMA_FAULT)
773                 range.default_flags |= HMM_PFN_REQ_FAULT;
774
775         ret = hmm_range_fault(&range);
776
777         migrate_hmm_range_setup(&range);
778
779         /*
780          * At this point pages are locked and unmapped, and thus they have
781          * stable content and can safely be copied to destination memory that
782          * is allocated by the drivers.
783          */
784         return ret;
785 }
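
For readers unfamiliar with the API, the kernel-doc above describes a
setup/copy/pages/finalize contract. Below is a hedged driver-side sketch of
that flow: dev_alloc_page() and dev_copy_to_page() are hypothetical
placeholders for driver-specific device memory handling, the fixed-size
on-stack arrays and the pgmap_owner token are for illustration only, and this
follows the documented contract rather than any real driver:

	/* Sketch only; would need <linux/migrate.h> and <linux/mm.h>. */
	static int demo_migrate_range(struct vm_area_struct *vma,
				      unsigned long start, unsigned long end)
	{
		unsigned long src[32] = {}, dst[32] = {}; /* assumes <= 32 pages */
		unsigned long npages = (end - start) >> PAGE_SHIFT;
		struct migrate_vma args = {
			.vma		= vma,
			.start		= start,
			.end		= end,
			.src		= src,
			.dst		= dst,
			.pgmap_owner	= THIS_MODULE,	/* driver-chosen token */
			.flags		= MIGRATE_VMA_SELECT_SYSTEM,
		};
		unsigned long i;
		int ret;

		ret = migrate_vma_setup(&args);	/* collect, lock, unmap */
		if (ret)
			return ret;

		for (i = 0; i < npages; i++) {
			/* NULL for originally-empty PTEs */
			struct page *spage = migrate_pfn_to_page(args.src[i]);
			struct page *dpage;

			if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
				continue;	/* pinned or not collected */

			dpage = dev_alloc_page();		/* hypothetical */
			if (!dpage)
				continue;	/* skipping an entry is allowed */
			lock_page(dpage);	/* dst pages must be locked */
			if (spage)
				dev_copy_to_page(dpage, spage);	/* hypothetical */
			/* migrate_pfn() encodes the pfn plus MIGRATE_PFN_VALID. */
			args.dst[i] = migrate_pfn(page_to_pfn(dpage));
		}

		migrate_vma_pages(&args);	/* move struct page metadata */
		/* a real driver would update its device page table here */
		migrate_vma_finalize(&args);	/* fix up the CPU page table */
		return 0;
	}

Since this sketch migrates system memory to device memory, leaving a dst entry
empty is fine: migrate_vma_finalize() restores the skipped pages. Only the
device-to-system direction obliges the driver to migrate every collected page,
per the comment above.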
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki