Message-ID: <CAHmME9pgnhr=yQZqzr3oXpyVrp7hFAZddTUBX=gy-QY-QMsYtg@mail.gmail.com>
Date:   Tue, 25 Apr 2017 17:59:34 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Sabrina Dubroca <sd@...asysnail.net>
Cc:     Netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net] macsec: avoid heap overflow in skb_to_sgvec on receive

On Tue, Apr 25, 2017 at 5:51 PM, Sabrina Dubroca <sd@...asysnail.net> wrote:
> 2017-04-25, 17:39:09 +0200, Jason A. Donenfeld wrote:
>> Hi Sabrina,
>>
>> I think I may have beaten you to the punch here by a few minutes. :)
>
> I said I was going to post a patch.
> Mail headers seem to disagree with you ;)

Oh, whoops, sorry -- I thought you wanted me to. I misread, but I'm
happy to have done it anyway.

> Unless I missed something, encrypt was already handling fragments
> correctly. An skb with ->frag_list should have no skb_tailroom, so it
> will be linearized by skb_copy_expand().

You're right. In that case, you might as well add back the FRAGLIST
annotation to the features, so that packets with a frag_list aren't
copied twice -- once when they're linearized on xmit, and again when
skb_copy_expand() is called because they need more head or tail room.
In general, too, sizing things from the precise number of frags saves
memory.
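
For reference, the tx-side behavior we're both describing looks
roughly like this -- I'm paraphrasing macsec_encrypt() from memory
here, so treat the exact identifiers as approximate:

	/* An skb without enough room for the sectag/ICV -- which
	 * includes anything carrying a frag_list -- is flattened by
	 * skb_copy_expand() before a scatterlist is ever built for
	 * it. (Error handling trimmed for brevity.) */
	if (unlikely(skb_headroom(skb) < MACSEC_NEEDED_HEADROOM ||
		     skb_tailroom(skb) < MACSEC_NEEDED_TAILROOM)) {
		struct sk_buff *nskb = skb_copy_expand(skb,
						       MACSEC_NEEDED_HEADROOM,
						       MACSEC_NEEDED_TAILROOM,
						       GFP_ATOMIC);
		if (!nskb) {
			kfree_skb(skb);
			return ERR_PTR(-ENOMEM);
		}
		consume_skb(skb);
		skb = nskb;
	}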
But above all, the important thing, it seems to me, is that it's very
_clear_ there isn't going to be an overflow -- that the code doesn't
rely on so many odd particulars that can then get violated later in
its life. Using the explicit length makes things a lot clearer. So
maybe better to go with mine?
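
Concretely, the shape I have in mind is something like the sketch
below -- this is only an illustration of the idea, not the patch
itself, and the allocation strategy is certainly up for debate:

	struct scatterlist *sg;
	struct sk_buff *trailer;
	int nsg;

	/* skb_cow_data() copies/linearizes as needed and returns the
	 * number of scatterlist entries the skb requires, or a
	 * negative errno. */
	nsg = skb_cow_data(skb, 0, &trailer);
	if (unlikely(nsg < 0)) {
		kfree_skb(skb);
		return ERR_PTR(nsg);
	}

	sg = kmalloc_array(nsg, sizeof(*sg), GFP_ATOMIC);
	if (unlikely(!sg)) {
		kfree_skb(skb);
		return ERR_PTR(-ENOMEM);
	}
	sg_init_table(sg, nsg);

	/* Passing skb->len explicitly means the bounds are visibly
	 * correct no matter what invariants the skb arrived with. */
	skb_to_sgvec(skb, sg, 0, skb->len);

That way the sgvec is exactly as large as the skb actually needs,
rather than a fixed worst case that a future change could quietly
exceed.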
