Message-ID: <20180323151522.2d3dde07@redhat.com>
Date:   Fri, 23 Mar 2018 15:15:22 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     netdev@...r.kernel.org,
        Björn Töpel <bjorn.topel@...el.com>,
        magnus.karlsson@...el.com, eugenia@...lanox.com,
        Jason Wang <jasowang@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Eran Ben Elisha <eranbe@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Tariq Toukan <tariqt@...lanox.com>, brouer@...hat.com
Subject: Re: [bpf-next V5 PATCH 11/15] page_pool: refurbish version of
 page_pool code

On Fri, 23 Mar 2018 06:29:55 -0700
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On 03/23/2018 05:18 AM, Jesper Dangaard Brouer wrote:
> 
> > +
> > +void page_pool_destroy_rcu(struct page_pool *pool)
> > +{
> > +	call_rcu(&pool->rcu, __page_pool_destroy_rcu);
> > +}
> > +EXPORT_SYMBOL(page_pool_destroy_rcu);
> >   
> 
> 
> Why do we need to respect one rcu grace period before destroying a page pool ?

Due to the previous allocator ID patch: an allocator ID entry can hold a
pointer reference to a page_pool, and the allocator ID lookup is done
under RCU.
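
To illustrate the reader side (a rough sketch with made-up names, not
the actual patch code): the lookup dereferences the page_pool inside an
RCU read-side critical section, so the pool memory must not be freed
until a grace period has elapsed, which is what the call_rcu() above
provides.

  #include <linux/idr.h>
  #include <linux/mm.h>
  #include <linux/rcupdate.h>

  /* Hypothetical ID -> page_pool mapping, only for illustrating the
   * allocator ID lookup mentioned above.
   */
  static DEFINE_IDR(mem_id_pool_idr);

  /* Readers resolve an allocator ID to its page_pool under RCU.  The
   * pool pointer is only guaranteed valid inside this read-side
   * section, which is why destruction goes through call_rcu(): the
   * actual free is deferred until all such readers have finished.
   */
  static void mem_id_recycle_page(int id, struct page *page)
  {
  	struct page_pool *pool;

  	rcu_read_lock();
  	pool = idr_find(&mem_id_pool_idr, id);
  	if (pool) {
  		/* ... hand the page back to this pool ... */
  	}
  	rcu_read_unlock();
  }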

> In any case, this should be called page_pool_destroy()

Okay.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
