Message-ID: <ZRIIIFm5IMnkGh3T@casper.infradead.org>
Date: Mon, 25 Sep 2023 23:22:24 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <ak@...ux.intel.com>,
Christoph Lameter <cl@...ux.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Yang Shi <shy828301@...il.com>,
Sidhartha Kumar <sidhartha.kumar@...cle.com>,
Vishal Moola <vishal.moola@...il.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Tejun Heo <tj@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 03/12] mempolicy: fix migrate_pages(2) syscall return nr_failed
On Mon, Sep 25, 2023 at 01:24:02AM -0700, Hugh Dickins wrote:
> "man 2 migrate_pages" says "On success migrate_pages() returns the number
> of pages that could not be moved". Although 5.3 and 5.4 commits fixed
> mbind(MPOL_MF_STRICT|MPOL_MF_MOVE*) to fail with EIO when not all pages
> could be moved (because some could not be isolated for migration),
> migrate_pages(2) was left still reporting only those pages failing at the
> migration stage, forgetting those failing at the earlier isolation stage.
>
> Fix that by accumulating a long nr_failed count in struct queue_pages,
> returned by queue_pages_range() when it's not returning an error, for
> adding on to the nr_failed count from migrate_pages() in mm/migrate.c.
> A count of pages? It's more a count of folios, but changing it to pages
> would entail more work (also in mm/migrate.c): does not seem justified.
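For orientation, the accounting described above takes roughly this shape
(a sketch from the description, with the walk state abridged; only
nr_failed is the point here):

	struct queue_pages {			/* private state for the page-table walk */
		struct list_head *pagelist;	/* folios isolated for migration */
		unsigned long flags;		/* MPOL_MF_* flags */
		nodemask_t *nmask;		/* allowed nodes */
		long nr_failed;			/* isolation-stage failures */
	};

queue_pages_range() hands qp.nr_failed back when it is not returning an
error, and the caller adds it to the count that migrate_pages() reports.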
I certainly see what you're saying. If a folio is only partially mapped
(in an extreme case, the VMA is PAGE_SIZE and maps one page of a 512-page
folio), then setting nr_failed to folio_nr_pages() is misleading at best.
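To put a number on it, suppose the accounting were per page of the folio
(hypothetical, not what this series does):

	if (!migrate_folio_add(folio, qp->pagelist, qp->flags))
		qp->nr_failed += folio_nr_pages(folio);	/* 512, not 1 */

That single PAGE_SIZE mapping would then be reported as 512 pages that
could not be moved.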
> +static void queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
>  			unsigned long end, struct mm_walk *walk)
> -	__releases(ptl)
>  {
> -	int ret = 0;
>  	struct folio *folio;
>  	struct queue_pages *qp = walk->private;
> -	unsigned long flags;
> 
>  	if (unlikely(is_pmd_migration_entry(*pmd))) {
> -		ret = -EIO;
> -		goto unlock;
> +		qp->nr_failed++;
> +		return;
>  	}
>  	folio = pfn_folio(pmd_pfn(*pmd));
>  	if (is_huge_zero_page(&folio->page)) {
>  		walk->action = ACTION_CONTINUE;
> -		goto unlock;
> +		return;
>  	}
>  	if (!queue_folio_required(folio, qp))
> -		goto unlock;
> -
> -	flags = qp->flags;
> -	/* go to folio migration */
> -	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
> -		if (!vma_migratable(walk->vma) ||
> -		    migrate_folio_add(folio, qp->pagelist, flags)) {
> -			ret = 1;
> -			goto unlock;
> -		}
> -	} else
> -		ret = -EIO;
> -unlock:
> -	spin_unlock(ptl);
> -	return ret;
> +		return;
> +	if (!(qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
> +	    !vma_migratable(walk->vma) ||
> +	    !migrate_folio_add(folio, qp->pagelist, qp->flags))
> +		qp->nr_failed++;
However, I think here we would do well to increment by HPAGE_PMD_NR.
Or whatever equivalent is flavour of the week.
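That is, something like this against the hunk above (just a sketch of the
suggestion; the migration-entry case a few lines up would want the same
treatment):

	if (!(qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
	    !vma_migratable(walk->vma) ||
	    !migrate_folio_add(folio, qp->pagelist, qp->flags))
		/* PMD-mapped: all of the folio's pages are in the range */
		qp->nr_failed += HPAGE_PMD_NR;

Unlike the partially-mapped case above, the folio here is mapped by a
PMD, so the per-page count is exact.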
Bravo to the other changes.
Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>