Message-ID: <YPg2756QFreokTIg@casper.infradead.org>
Date:   Wed, 21 Jul 2021 16:02:07 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v14 054/138] mm: Add kmap_local_folio()

On Wed, Jul 21, 2021 at 05:22:16PM +0300, Mike Rapoport wrote:
> On Wed, Jul 21, 2021 at 03:12:03PM +0100, Matthew Wilcox wrote:
> > On Wed, Jul 21, 2021 at 12:58:24PM +0300, Mike Rapoport wrote:
> > > > +/**
> > > > + * kmap_local_folio - Map a page in this folio for temporary usage
> > > > + * @folio:	The folio to be mapped.
> > > > + * @offset:	The byte offset within the folio.
> > > > + *
> > > > + * Returns: The virtual address of the mapping
> > > > + *
> > > > + * Can be invoked from any context.
> > > 
> > > Context: Can be invoked from any context.
> > > 
> > > > + *
> > > > + * Requires careful handling when nesting multiple mappings because the map
> > > > + * management is stack based. The unmap has to be in the reverse order of
> > > > + * the map operation:
> > > > + *
> > > > + * addr1 = kmap_local_folio(page1, offset1);
> > > > + * addr2 = kmap_local_folio(page2, offset2);
> > > 
> > > Please s/page/folio/g here and in the description below
> > > 
> > > > + * ...
> > > > + * kunmap_local(addr2);
> > > > + * kunmap_local(addr1);
> > > > + *
> > > > + * Unmapping addr1 before addr2 is invalid and causes malfunction.
> > > > + *
> > > > + * Contrary to kmap() mappings the mapping is only valid in the context of
> > > > + * the caller and cannot be handed to other contexts.
> > > > + *
> > > > + * On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the
> > > > + * virtual address of the direct mapping. Only real highmem pages are
> > > > + * temporarily mapped.
> > > > + *
> > > > + * While it is significantly faster than kmap() for the highmem case it
> > > > + * comes with restrictions about the pointer validity. Only use when really
> > > > + * necessary.
> > > > + *
> > > > + * On HIGHMEM enabled systems mapping a highmem page has the side effect of
> > > > + * disabling migration in order to keep the virtual address stable across
> > > > + * preemption. No caller of kmap_local_folio() can rely on this side effect.
> > > > + */
> > 
> > kmap_local_folio() only maps one page from the folio.  So it's not
> > appropriate to s/page/folio/g.  I fiddled with the description a bit to
> > make this clearer:
> > 
> >  /**
> >   * kmap_local_folio - Map a page in this folio for temporary usage
> > - * @folio:     The folio to be mapped.
> > - * @offset:    The byte offset within the folio.
> > - *
> > - * Returns: The virtual address of the mapping
> > - *
> > - * Can be invoked from any context.
> > + * @folio: The folio containing the page.
> > + * @offset: The byte offset within the folio which identifies the page.
> >   *
> >   * Requires careful handling when nesting multiple mappings because the map
> >   * management is stack based. The unmap has to be in the reverse order of
> >   * the map operation:
> >   *
> > - * addr1 = kmap_local_folio(page1, offset1);
> > - * addr2 = kmap_local_folio(page2, offset2);
> > + * addr1 = kmap_local_folio(folio1, offset1);
> > + * addr2 = kmap_local_folio(folio2, offset2);
> >   * ...
> >   * kunmap_local(addr2);
> >   * kunmap_local(addr1);
> > @@ -131,6 +127,9 @@ static inline void *kmap_local_page(struct page *page);
> >   * On HIGHMEM enabled systems mapping a highmem page has the side effect of
> >   * disabling migration in order to keep the virtual address stable across
> >   * preemption. No caller of kmap_local_folio() can rely on this side effect.
> > + *
> > + * Context: Can be invoked from any context.
> > + * Return: The virtual address of @offset.
> >   */
> >  static inline void *kmap_local_folio(struct folio *folio, size_t offset)
> 
> This is clearer, thanks! 
> 
> Maybe just add "page" to the Return: description:
> 
> * Return: The virtual address of page @offset.

No, it really does return the virtual address of @offset.  If you ask
for offset 0x1234 within a (sufficiently large) folio, it will map the
second page of that folio and return the address of the 0x234'th byte
within it.
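
To make that concrete, here is a minimal sketch (illustrative only, not
part of the patch; it assumes 4KiB pages and a struct folio *folio that
is at least two pages long):

	size_t offset = 0x1234;
	char *addr;

	/*
	 * Maps only the single page of the folio that contains @offset
	 * (here the second page) and returns a pointer to byte 0x234
	 * within that mapping, i.e. byte 0x1234 of the folio.
	 */
	addr = kmap_local_folio(folio, offset);
	/* ... access the byte at addr ... */
	kunmap_local(addr);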
