Date:   Mon, 12 Jul 2021 15:44:47 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Alexander Duyck <alexander.duyck@...il.com>
CC:     David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Russell King - ARM Linux <linux@...linux.org.uk>,
        Marcin Wojtas <mw@...ihalf.com>, <linuxarm@...neuler.org>,
        <yisen.zhuang@...wei.com>, "Salil Mehta" <salil.mehta@...wei.com>,
        <thomas.petazzoni@...tlin.com>, <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        "Alexei Starovoitov" <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "John Fastabend" <john.fastabend@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        "Will Deacon" <will@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        "Vlastimil Babka" <vbabka@...e.cz>, <fenghua.yu@...el.com>,
        <guro@...com>, Peter Xu <peterx@...hat.com>,
        Feng Tang <feng.tang@...el.com>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Matteo Croce <mcroce@...rosoft.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        "Alexander Lobakin" <alobakin@...me>,
        Willem de Bruijn <willemb@...gle.com>, <wenxu@...oud.cn>,
        Cong Wang <cong.wang@...edance.com>,
        Kevin Hao <haokexin@...il.com>, <nogikh@...gle.com>,
        Marco Elver <elver@...gle.com>,
        Netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH rfc v2 2/5] page_pool: add interface for getting and
 setting pagecnt_bias

On 2021/7/11 0:55, Alexander Duyck wrote:
> On Sat, Jul 10, 2021 at 12:44 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> As suggested by Alexander, "A DMA mapping should be page
>> aligned anyway so the lower 12 bits would be reserved 0",
>> it might make more sense to repurpose the lower 12 bits
>> of the dma address to store the pagecnt_bias for the
>> elevated refcnt case in page pool.
>>
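
For illustration (not part of the patch, values made up, assuming 4K pages so
PAGE_MASK clears bits 0-11), the packing works out roughly like this:

/*
 * Standalone userspace sketch of the packing scheme: a page-aligned
 * dma address leaves bits 0-11 free, so pagecnt_bias can share
 * page->dma_addr[0] with the address.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long dma_addr = 0x7f3a5000UL;	/* page aligned by the DMA API */
	unsigned long bias = 0x2a;		/* example pagecnt_bias */
	unsigned long packed = (dma_addr & PAGE_MASK) | (bias & ~PAGE_MASK);

	printf("dma addr: 0x%lx\n", packed & PAGE_MASK);	/* 0x7f3a5000 */
	printf("bias:     0x%lx\n", packed & ~PAGE_MASK);	/* 0x2a */
	return 0;
}
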
>> As the newly added page_pool_get_pagecnt_bias() may be called
>> outside of the softirq context, annotate the access to
>> page->dma_addr[0] with READ_ONCE() and WRITE_ONCE().
>>
>> The other three interfaces using page->dma_addr[0] are only
>> called in the softirq context during normal rx processing, so
>> hopefully the barriers in the rx processing will ensure the
>> correct ordering between getting and setting pagecnt_bias.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
>> ---
>>  include/net/page_pool.h | 24 ++++++++++++++++++++++--
>>  1 file changed, 22 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>> index 8d7744d..5746f17 100644
>> --- a/include/net/page_pool.h
>> +++ b/include/net/page_pool.h
>> @@ -200,7 +200,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>>
>>  static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>>  {
>> -       dma_addr_t ret = page->dma_addr[0];
>> +       dma_addr_t ret = READ_ONCE(page->dma_addr[0]) & PAGE_MASK;
>>         if (sizeof(dma_addr_t) > sizeof(unsigned long))
>>                 ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
>>         return ret;
>> @@ -208,11 +208,31 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>>
>>  static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
>>  {
>> -       page->dma_addr[0] = addr;
>> +       unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
>> +
>> +       dma_addr_0 &= ~PAGE_MASK;
>> +       dma_addr_0 |= (addr & PAGE_MASK);
> 
> So rather than doing all this testing and clearing it would probably
> be better to add a return value to the function and do something like:
> 
> if (WARN_ON(dma_addr_0 & ~PAGE_MASK))
>     return -1;
> 
> That way you could have page_pool_dma_map unmap, free the page, and
> return false indicating that the DMA mapping failed with a visible
> error in the event that our expectation that the dma_addr is page
> aligned is ever violated.

I suppose the above is based on the assumption that page_pool_set_dma_addr()
is called only once, before page_pool_set_pagecnt_bias(), right? So we could do:

static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
{
	if (WARN_ON(addr & ~PAGE_MASK))
		return false;

	page->dma_addr[0] = addr;
	if (sizeof(dma_addr_t) > sizeof(unsigned long))
		page->dma_addr[1] = upper_32_bits(addr);

	return true;
}
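
FWIW, if page_pool_set_dma_addr() returns a bool like the above, something
along these lines in page_pool_dma_map() should give the visible error plus
the unmap you mentioned (untested sketch; if I am not mistaken, the existing
callers already drop the page when it returns false):

static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
	dma_addr_t dma;

	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 (PAGE_SIZE << pool->p.order),
				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return false;

	/* WARNs and bails out if the mapping is not page aligned */
	if (!page_pool_set_dma_addr(page, dma)) {
		dma_unmap_page_attrs(pool->p.dev, dma,
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
		return false;
	}

	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);

	return true;
}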

> 
>> +       WRITE_ONCE(page->dma_addr[0], dma_addr_0);
>> +
>>         if (sizeof(dma_addr_t) > sizeof(unsigned long))
>>                 page->dma_addr[1] = upper_32_bits(addr);
>>  }
>>
>> +static inline int page_pool_get_pagecnt_bias(struct page *page)
>> +{
>> +       return (READ_ONCE(page->dma_addr[0]) & ~PAGE_MASK);
> 
> You don't need the parentheses around the READ_ONCE and PAGE_MASK.

ok.

> 
>> +}
>> +
>> +static inline void page_pool_set_pagecnt_bias(struct page *page, int bias)
>> +{
>> +       unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
>> +
>> +       dma_addr_0 &= PAGE_MASK;
>> +       dma_addr_0 |= (bias & ~PAGE_MASK);
>> +
>> +       WRITE_ONCE(page->dma_addr[0], dma_addr_0);
>> +}
>> +
>>  static inline bool is_page_pool_compiled_in(void)
>>  {
>>  #ifdef CONFIG_PAGE_POOL
>> --
>> 2.7.4
>>
> .
> 
