Date:	Wed, 10 Jul 2013 11:46:57 +0100
From:	"Jan Beulich" <JBeulich@...e.com>
To:	"Ian Campbell" <ian.campbell@...rix.com>,
	"Wei Liu" <wei.liu2@...rix.com>
Cc:	<davem@...emloft.net>, "Dion Kant" <g.w.kant@...enet.nl>,
	<xen-devel@...ts.xen.org>, <netdev@...r.kernel.org>,
	<stable@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH] xen-netfront: pull on receive skb may
 need to happen earlier

>>> On 10.07.13 at 12:04, Wei Liu <wei.liu2@...rix.com> wrote:
> Jan, looking at the commit log, the overrun issue in
> xennet_get_responses was not introduced by __pskb_pull_tail. The call to
> xennet_fill_frags has always been in the same place.

I'm convinced it was: prior to that commit, if the first response slot
contained no more than RX_COPY_THRESHOLD bytes, it was consumed
entirely into the linear portion of the SKB, leaving all
MAX_SKB_FRAGS fragment entries available for filling. Said commit
dropped that early copying, so the fragment count now starts at 1
unconditionally, all of the response slots get accumulated into
fragments, and the pull only happens after all of them have been
filled in. What it neglected - precisely because the count now always
starts at 1 - is that this can lead to MAX_SKB_FRAGS + 1 frags getting
filled, corrupting memory.
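
To illustrate the arithmetic, here is a minimal user-space sketch - not
the actual netfront code; MAX_SKB_FRAGS, RX_COPY_THRESHOLD and the
slot/frag counters below are purely illustrative stand-ins - showing why
deferring the pull leaves the SKB one frag entry short:

/*
 * Toy model of the frag-count overrun described above.  This is NOT
 * the netfront code; the constants are illustrative only (the real
 * MAX_SKB_FRAGS depends on kernel configuration).
 */
#include <stdio.h>

#define MAX_SKB_FRAGS     17   /* illustrative */
#define RX_COPY_THRESHOLD 256  /* illustrative */

/* Old behaviour: a first slot of <= RX_COPY_THRESHOLD bytes is copied
 * into the linear area up front, so all MAX_SKB_FRAGS frag entries
 * remain free for the remaining slots. */
static int frags_used_old(int nr_slots)
{
        int frags = 0;                  /* first slot went into the linear part */
        return frags + (nr_slots - 1);  /* remaining slots become frags */
}

/* New behaviour (before the fix): the first slot already occupies
 * frag 0, every further slot becomes another frag, and the pull into
 * the linear area only happens afterwards. */
static int frags_used_new(int nr_slots)
{
        int frags = 1;                  /* first slot sits in frag 0 */
        return frags + (nr_slots - 1);  /* remaining slots become frags */
}

int main(void)
{
        /* A packet spanning 1 + MAX_SKB_FRAGS response slots fits under
         * the old scheme but needs MAX_SKB_FRAGS + 1 frag entries under
         * the new one. */
        int nr_slots = 1 + MAX_SKB_FRAGS;

        printf("old scheme: %d of %d frags used\n",
               frags_used_old(nr_slots), MAX_SKB_FRAGS);
        printf("new scheme: %d of %d frags used  <-- overrun\n",
               frags_used_new(nr_slots), MAX_SKB_FRAGS);
        return 0;
}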

Ian - I have to admit that I'm slightly irritated that you have so far
not participated at all in sorting out the fix for this bug, which a
change of yours introduced.

Jan

