Date: Tue, 26 Mar 2013 11:29:59 +0000
From: Wei Liu <wei.liu2@...rix.com>
To: David Vrabel <david.vrabel@...rix.com>
CC: Wei Liu <liuw@...w.name>, Ian Campbell <Ian.Campbell@...rix.com>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	"annie.li@...cle.com" <annie.li@...cle.com>
Subject: Re: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before copying

On Tue, Mar 26, 2013 at 11:13:38AM +0000, David Vrabel wrote:
> >>
> >> Separately, it may be sensible for the backend to drop packets with more
> >> frags than max-slots-per-frame up to some threshold where anything more
> >> is considered malicious (i.e., 1 - 18 slots is a valid packet, 19-20 are
> >> dropped and 21 or more is a fatal error).
> >>
> >
> > Why drop the packet when we are able to process it? Frontend cannot know
> > it has crossed the line anyway.
>
> Because it's a change to the protocol and we do not want to do this for
> a regression fix.
>

If I understand correctly, the regression you are talking about was
introduced by the harsh punishment in XSA-39? If so, this is the patch you
need to fix that.

The frontend only knows whether or not it has connectivity. This patch
guarantees that an old netfront with a larger MAX_SKB_FRAGS still sees the
same behaviour from its point of view. Netfront cannot observe the
intermediate state between 18 and 20 slots.

> As a separate fix we can consider increasing the number of slots
> per-packet once there is a mechanism to report this to the front end.
>

Sure, that's on my TODO list.


Wei.