Message-ID: <CAPTztWYMhh3+V=-jXaMz5muTsj8fBX29umgJcsW7JfHA2LouuA@mail.gmail.com>
Date: Mon, 29 Dec 2025 10:57:23 -0800
From: Frank van der Linden <fvdl@...gle.com>
To: Li Zhe <lizhe.67@...edance.com>
Cc: akpm@...ux-foundation.org, david@...nel.org, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, muchun.song@...ux.dev, osalvador@...e.de
Subject: Re: [PATCH 4/8] mm/hugetlb: introduce per-node sysfs interface "zeroable_hugepages"

On Mon, Dec 29, 2025 at 4:26 AM Li Zhe <lizhe.67@...edance.com> wrote:
>
> On Fri, 26 Dec 2025 10:51:01 -0800, fvdl@...gle.com wrote:
>
> > > +static ssize_t zeroable_hugepages_show(struct kobject *kobj,
> > > +                                       struct kobj_attribute *attr, char *buf)
> > > +{
> > > +       struct hstate *h;
> > > +       unsigned long free_huge_pages_zero;
> > > +       int nid;
> > > +
> > > +       h = kobj_to_hstate(kobj, &nid);
> > > +       if (WARN_ON(nid == NUMA_NO_NODE))
> > > +               return -EPERM;
> > > +
> > > +       free_huge_pages_zero = h->free_huge_pages_node[nid] -
> > > +                              h->free_huge_pages_zero_node[nid];
> > > +
> > > +       return sprintf(buf, "%lu\n", free_huge_pages_zero);
> > > +}
> > > +
> > > +static inline bool zero_should_abort(struct hstate *h, int nid)
> > > +{
> > > +       return (h->free_huge_pages_zero_node[nid] ==
> > > +               h->free_huge_pages_node[nid]) ||
> > > +               list_empty(&h->hugepage_freelists[nid]);
> > > +}
> > > +
> > > +static void zero_free_hugepages_nid(struct hstate *h,
> > > +                                  int nid, unsigned int nr_zero)
> > > +{
> > > +       struct list_head *freelist = &h->hugepage_freelists[nid];
> > > +       unsigned int nr_zerod = 0;
> > > +       struct folio *folio;
> > > +
> > > +       if (zero_should_abort(h, nid))
> > > +               return;
> > > +
> > > +       spin_lock_irq(&hugetlb_lock);
> > > +
> > > +       while (nr_zerod < nr_zero) {
> > > +
> > > +               if (zero_should_abort(h, nid) || fatal_signal_pending(current))
> > > +                       break;
> > > +
> > > +               freelist = freelist->prev;
> > > +               if (unlikely(list_is_head(freelist, &h->hugepage_freelists[nid])))
> > > +                       break;
> > > +               folio = list_entry(freelist, struct folio, lru);
> > > +
> > > +               if (folio_test_hugetlb_zeroed(folio) ||
> > > +                   folio_test_hugetlb_zeroing(folio))
> > > +                       continue;
> > > +
> > > +               folio_set_hugetlb_zeroing(folio);
> > > +
> > > +               /*
> > > +                * Incrementing this here is a bit of a fib, since
> > > +                * the page hasn't been cleared yet (it will be done
> > > +                * immediately after dropping the lock below). But
> > > +                * it keeps the count consistent with the overall
> > > +                * free count in case the page gets taken off the
> > > +                * freelist while we're working on it.
> > > +                */
> > > +               h->free_huge_pages_zero_node[nid]++;
> > > +               spin_unlock_irq(&hugetlb_lock);
> > > +
> > > +               /*
> > > +                * HWPoison pages may show up on the freelist.
> > > +                * Don't try to zero them out, but do set the flag
> > > +                * and counts, so that we don't consider them again.
> > > +                */
> > > +               if (!folio_test_hwpoison(folio))
> > > +                       folio_zero_user(folio, 0);
> > > +
> > > +               cond_resched();
> > > +
> > > +               spin_lock_irq(&hugetlb_lock);
> > > +               folio_set_hugetlb_zeroed(folio);
> > > +               folio_clear_hugetlb_zeroing(folio);
> > > +
> > > +               /*
> > > +                * If the page is still on the free list, move
> > > +                * it to the head.
> > > +                */
> > > +               if (folio_test_hugetlb_freed(folio))
> > > +                       list_move(&folio->lru, &h->hugepage_freelists[nid]);
> > > +
> > > +               /*
> > > +                * If someone was waiting for the zero to
> > > +                * finish, wake them up.
> > > +                */
> > > +               if (waitqueue_active(&h->dqzero_wait[nid]))
> > > +                       wake_up(&h->dqzero_wait[nid]);
> > > +               nr_zerod++;
> > > +               freelist = &h->hugepage_freelists[nid];
> > > +       }
> > > +       spin_unlock_irq(&hugetlb_lock);
> > > +}
> >
> > Nit: s/nr_zerod/nr_zeroed/
>
> Thank you for the reminder. I will address this issue in v2.
>
> > Feels like the list logic can be cleaned up a bit here. Since the
> > zeroed folios are at the head of the list, and the dirty ones at the
> > tail, and you start walking from the tail, you don't need to check if
> > you circled back to the head - just stop if you encounter a prezeroed
> > folio. If you encounter a prezeroed folio while walking from the tail,
> > that means that all other folios from that one to the head will also
> > be prezeroed already.
>
> Thank you for the thoughtful suggestion. Your reasoning holds in most
> situations, but under heavy concurrency a corner case can still
> appear. Imagine two processes zeroing huge pages at the same time:
> process A enters zero_free_hugepages_nid(), finishes zeroing one huge
> page, and marks that folio in the list as prezeroed. If process B
> enters the same function moments later and exits as soon as it
> encounters a prezeroed folio, the intended parallel zeroing quietly
> degrades to single-threaded progress.

Hm, setting the prezeroed bit and moving the folio to the front of the
free list happens while holding hugetlb_lock. In other words, if you
encounter a folio with the prezeroed bit set while holding
hugetlb_lock, it will always be in a contiguous stretch of prezeroed
folios at the head of the free list.

Since the check for 'is this already prezeroed' is done while holding
hugetlb_lock, you know for sure that the folio is part of a list of
prezeroed folios at the head, and you can stop, right?
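
Concretely, here is an untested sketch of the walk I have in mind,
reusing the names from your patch. The head check only remains so we
can step past folios that another thread is zeroing in flight:

	spin_lock_irq(&hugetlb_lock);
	while (nr_zeroed < nr_zero) {
		if (zero_should_abort(h, nid) || fatal_signal_pending(current))
			break;

		freelist = freelist->prev;
		if (unlikely(list_is_head(freelist, &h->hugepage_freelists[nid])))
			break;
		folio = list_entry(freelist, struct folio, lru);

		/*
		 * The zeroed bit is set, and the folio moved to the
		 * head, while holding hugetlb_lock. So the first
		 * zeroed folio seen from the tail ends the walk:
		 * everything from it to the head is zeroed too.
		 */
		if (folio_test_hugetlb_zeroed(folio))
			break;

		/* In flight under another zeroer: step past it. */
		if (folio_test_hugetlb_zeroing(folio))
			continue;

		/* ... drop the lock and zero the folio as in the patch ... */
	}
	spin_unlock_irq(&hugetlb_lock);

This way a second zeroer never bails out early because of a folio that
a first zeroer already finished, so the parallel case you describe
still makes progress.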

- Frank
