Date:	Mon, 27 Jun 2011 21:51:53 +0100
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Ian Campbell <Ian.Campbell@...rix.com>
CC:	David Miller <davem@...emloft.net>, eric.dumazet@...il.com,
	netdev@...r.kernel.org, xen-devel@...ts.xensource.com,
	rusty@...tcorp.com.au
Subject: Re: SKB paged fragment lifecycle on receive

On 06/25/2011 12:58 PM, Ian Campbell wrote:
> On Fri, 2011-06-24 at 13:11 -0700, Jeremy Fitzhardinge wrote:
>> On 06/24/2011 12:46 PM, David Miller wrote:
>>> Pages get transferred between different SKBs all the time.
>>>
>>> For example, GRO makes extensive use of this technique.
>>> See net/core/skbuff.c:skb_gro_receive().
>>>
>>> It is just one example.
>> I see, and the new skb doesn't get a destructor copied from the
>> original, so there'd be no second callback.
> What about if we were to have a per-shinfo destructor (called once for
> each page as its refcount goes 1->0, from whichever skb ends up with the
> last ref) as well as the skb-destructors.

We never want the refcount for granted pages to go from 1 -> 0.  The
safest thing is to always elevate the refcount so that nothing else can
ever drop the last ref.

If we can trust the network stack to always do the last release (and not
hand it off to something else to do it), then we could have a destructor
which gets called before the last ref drop (or leaves the ref drop to
the destructor to do), and do everything required that way.  But it
seems pretty fragile.  At the very least it would need a thorough code
audit to make sure that everything handles page lifetimes in the
expected way - but then I'd still worry about out-of-tree patches
breaking something in subtle ways.

>  This already handles the
> cloning case, but when pages are moved between shinfos, would it make
> sense for the destructor to be propagated between skbs in those
> circumstances, and/or to require it to be the same? Since in a case like
> skb_gro_receive the skbs (and hence the frag array pages) all come from
> the same 'owner' (even if the skb is actually created by the stack on
> their behalf), I suspect this could work?
>
> But I bet this assumption isn't valid in all cases.

Hm.

> In which case I end up wondering about a destructor per page in the frag
> array. At which point we might as well consider it as a part of the core
> mm stuff rather than something net specific?

Doing it generically still needs some kind of marker that the page has a
special-case destructor (and the destructor pointer itself).

    J