Message-ID: <fc83a855-bb3f-4374-8896-579420732b25@redhat.com>
Date: Sat, 14 Dec 2024 16:32:00 +0100
From: David Hildenbrand <david@...hat.com>
To: Alistair Popple <apopple@...dia.com>, dan.j.williams@...el.com,
	linux-mm@...ck.org
Cc: lina@...hilina.net, zhang.lyra@...il.com, gerald.schaefer@...ux.ibm.com,
	vishal.l.verma@...el.com, dave.jiang@...el.com, logang@...tatee.com,
	bhelgaas@...gle.com, jack@...e.cz, jgg@...pe.ca, catalin.marinas@....com,
	will@...nel.org, mpe@...erman.id.au, npiggin@...il.com,
	dave.hansen@...ux.intel.com, ira.weiny@...el.com, willy@...radead.org,
	djwong@...nel.org, tytso@....edu, linmiaohe@...wei.com, peterx@...hat.com,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
	nvdimm@...ts.linux.dev, linux-cxl@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org, jhubbard@...dia.com, hch@....de,
	david@...morbit.com
Subject: Re: [PATCH v3 14/25] huge_memory: Allow mappings of PUD sized pages

On 22.11.24 02:40, Alistair Popple wrote:
> Currently DAX folio/page reference counts are managed differently to
> normal pages. To allow these to be managed the same as normal pages
> introduce vmf_insert_folio_pud. This will map the entire PUD-sized folio
> and take references as it would for a normally mapped page.
>
> This is distinct from the current mechanism, vmf_insert_pfn_pud, which
> simply inserts a special devmap PUD entry into the page table without
> holding a reference to the page for the mapping.
>
> Signed-off-by: Alistair Popple <apopple@...dia.com>
> ---

Hi,

the subject of this patch (and especially of the next one) is
misleading. Likely you meant to have it as:

"mm/huge_memory: add vmf_insert_folio_pud() for mapping PUD sized pages"

>  	for (i = 0; i < nr_pages; i++) {
> @@ -1523,6 +1531,26 @@ void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
>  #endif
>  }
>
> +/**
> + * folio_add_file_rmap_pud - add a PUD mapping to a page range of a folio
> + * @folio:	The folio to add the mapping to
> + * @page:	The first page to add
> + * @vma:	The vm area in which the mapping is added
> + *
> + * The page range of the folio is defined by [page, page + HPAGE_PUD_NR)
> + *
> + * The caller needs to hold the page table lock.
> + */
> +void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
> +		struct vm_area_struct *vma)
> +{
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +	__folio_add_file_rmap(folio, page, HPAGE_PUD_NR, vma, RMAP_LEVEL_PUD);
> +#else
> +	WARN_ON_ONCE(true);
> +#endif
> +}
> +
>  static __always_inline void __folio_remove_rmap(struct folio *folio,
>  		struct page *page, int nr_pages, struct vm_area_struct *vma,
>  		enum rmap_level level)
> @@ -1552,6 +1580,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  		partially_mapped = nr && atomic_read(mapped);
>  		break;
>  	case RMAP_LEVEL_PMD:
> +	case RMAP_LEVEL_PUD:
>  		atomic_dec(&folio->_large_mapcount);
>  		last = atomic_add_negative(-1, &folio->_entire_mapcount);
>  		if (last) {

If you simply reuse that code (here and on the adding path), you will
end up effectively setting nr_pmdmapped to a very large value and
passing that into __folio_mod_stat(). There, we will adjust
NR_SHMEM_PMDMAPPED/NR_FILE_PMDMAPPED, which is wrong (it's PUD
mapped ;) ).

It's probably best to split out the rmap changes from the other things
in this patch.

-- 
Cheers,

David / dhildenb
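
[Editor's note, to make the accounting pitfall concrete: in the PMD case of
the quoted helpers, the whole-folio page count is reported back as
nr_pmdmapped, and __folio_mod_stat() applies it to the PMD-mapped node
counters. For a PUD-sized folio that value is HPAGE_PUD_NR pages (262144
4 KiB pages on x86-64), so reusing the path unchanged would inflate
NR_SHMEM_PMDMAPPED/NR_FILE_PMDMAPPED by a PUD's worth of pages. Below is a
minimal sketch of one way to keep PUD mappings out of those counters. It is
simplified to the file-backed case discussed in the thread, and the helper
name __folio_mod_stat_level and the nr_entire parameter are illustrative,
not existing kernel identifiers.]

/*
 * Sketch only, not the patch under review: a level-aware variant of the
 * stat update, so that only entire PMD mappings feed the PMD-mapped
 * counters. Simplified to file-backed folios; anon THP accounting is
 * omitted.
 */
static void __folio_mod_stat_level(struct folio *folio, int nr,
		int nr_entire, enum rmap_level level)
{
	enum node_stat_item idx;

	if (nr) {
		/* Per-page mapped count, maintained for all levels. */
		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
		__lruvec_stat_mod_folio(folio, idx, nr);
	}

	if (nr_entire && level == RMAP_LEVEL_PMD) {
		/*
		 * Only an entire PMD mapping is accounted here: feeding a
		 * PUD-sized nr_entire into these counters would wrongly
		 * inflate the "PMD mapped" statistics.
		 */
		idx = folio_test_swapbacked(folio) ?
			NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED;
		__mod_node_page_state(folio_pgdat(folio), idx, nr_entire);
	}
}

[Alternatively, the shared add/remove paths themselves could refrain from
setting nr_pmdmapped unless level == RMAP_LEVEL_PMD, which would leave
__folio_mod_stat() unchanged.]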