Message-ID: <ZOgpb1Qo5B0r+mhJ@casper.infradead.org>
Date: Fri, 25 Aug 2023 05:09:19 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Will Deacon <will@...nel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Arnd Bergmann <arnd@...db.de>,
David Hildenbrand <david@...hat.com>,
Yu Zhao <yuzhao@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yin Fengwei <fengwei.yin@...el.com>,
Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 4/4] mm/mmu_gather: Store and process pages in contig
 ranges

On Thu, Aug 10, 2023 at 11:33:32AM +0100, Ryan Roberts wrote:
> +void folios_put_refs(struct folio_range *folios, int nr)
> +{
> +	int i;
> +	LIST_HEAD(pages_to_free);
> +	struct lruvec *lruvec = NULL;
> +	unsigned long flags = 0;
> +	unsigned int lock_batch;
> +
> +	for (i = 0; i < nr; i++) {
> +		struct folio *folio = page_folio(folios[i].start);
> +		int refs = folios[i].end - folios[i].start;
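
(For reference: start and end here are struct page pointers, so I'm
assuming the folio_range introduced elsewhere in this series looks
roughly like

	struct folio_range {
		struct page *start;
		struct page *end;	/* exclusive */
	};

with end - start giving the number of references to drop for that
folio.)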
> +
> +		/*
> +		 * Make sure the IRQ-safe lock-holding time does not get
> +		 * excessive with a continuous string of pages from the
> +		 * same lruvec. The lock is held only if lruvec != NULL.
> +		 */
> +		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
> +			unlock_page_lruvec_irqrestore(lruvec, flags);
> +			lruvec = NULL;
> +		}
> +
> +		if (is_huge_zero_page(&folio->page))
> +			continue;
> +
> +		if (folio_is_zone_device(folio)) {
> +			if (lruvec) {
> +				unlock_page_lruvec_irqrestore(lruvec, flags);
> +				lruvec = NULL;
> +			}
> +			if (put_devmap_managed_page(&folio->page))
> +				continue;
> +			if (folio_put_testzero(folio))
We're only putting one ref for the zone_device folios? Surely this
should be folio_ref_sub_and_test(), like below? See the untested
sketch after this branch.
> +				free_zone_device_page(&folio->page);
> +			continue;
> +		}
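
Untested, but I'd expect the whole branch to honour the range, something
like this (assuming put_devmap_managed_page_refs() can be used here):

	if (folio_is_zone_device(folio)) {
		if (lruvec) {
			unlock_page_lruvec_irqrestore(lruvec, flags);
			lruvec = NULL;
		}
		/* Drop all refs for the range, not just one. */
		if (put_devmap_managed_page_refs(&folio->page, refs))
			continue;
		if (folio_ref_sub_and_test(folio, refs))
			free_zone_device_page(&folio->page);
		continue;
	}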
> +
> +		if (!folio_ref_sub_and_test(folio, refs))
> +			continue;
> +
> +		if (folio_test_large(folio)) {
> +			if (lruvec) {
> +				unlock_page_lruvec_irqrestore(lruvec, flags);
> +				lruvec = NULL;
> +			}
> +			__folio_put_large(folio);
> +			continue;
> +		}
> +
> +		if (folio_test_lru(folio)) {
> +			struct lruvec *prev_lruvec = lruvec;
> +
> +			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
> +							     &flags);
> +			if (prev_lruvec != lruvec)
> +				lock_batch = 0;
> +
> +			lruvec_del_folio(lruvec, folio);
> +			__folio_clear_lru_flags(folio);
> +		}
> +
> +		/*
> +		 * In rare cases, when truncation or holepunching raced with
> +		 * munlock after VM_LOCKED was cleared, Mlocked may still be
> +		 * found set here. This does not indicate a problem, unless
> +		 * "unevictable_pgs_cleared" appears worryingly large.
> +		 */
> +		if (unlikely(folio_test_mlocked(folio))) {
> +			__folio_clear_mlocked(folio);
> +			zone_stat_sub_folio(folio, NR_MLOCK);
> +			count_vm_event(UNEVICTABLE_PGCLEARED);
> +		}
You'll be glad to know I've factored out a nice little helper for that.
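Roughly this shape (the name here is invented just to illustrate; the
real helper may be spelled differently):

	/* Clear a stray Mlocked flag and fix up the stats. */
	static void folio_clear_mlocked_stats(struct folio *folio)
	{
		if (unlikely(folio_test_mlocked(folio))) {
			__folio_clear_mlocked(folio);
			zone_stat_sub_folio(folio, NR_MLOCK);
			count_vm_event(UNEVICTABLE_PGCLEARED);
		}
	}

so release_pages() and folios_put_refs() can share it.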