Date:	Tue, 26 Mar 2013 18:02:34 +0000
From:	David Vrabel <david.vrabel@...rix.com>
To:	Wei Liu <wei.liu2@...rix.com>
CC:	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Ian Campbell <Ian.Campbell@...rix.com>,
	"annie.li@...cle.com" <annie.li@...cle.com>,
	"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>
Subject: Re: [PATCH 5/6] xen-netback: coalesce slots before copying

On 25/03/13 11:08, Wei Liu wrote:
> This patch tries to coalesce tx requests when constructing grant copy
> structures. It enables netback to deal with the situation where the
> frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
> 
> It defines max_skb_slots, which is an estimate of the maximum number of
> slots a guest can send; anything bigger than that is considered
> malicious. It is currently set to 20, which should be enough to
> accommodate Linux (16 to 19).
> 
> Also change variable name from "frags" to "slots" in netbk_count_requests.
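
To illustrate the coalescing idea in the quoted description, here is a
rough sketch; the helper, its arguments, and the loop structure are
illustrative only, not the actual patch code:

#include <linux/mm.h>
#include <xen/page.h>
#include <xen/interface/grant_table.h>
#include <xen/interface/io/netif.h>

/* Rough sketch of slot coalescing (illustrative, not the patch code):
 * copy up to nr_slots consecutive guest tx slots into one backend page
 * by advancing the destination offset, so that several small guest
 * slots collapse into a single skb frag on the backend side. */
static struct gnttab_copy *
coalesce_tx_slots(struct gnttab_copy *gop, struct page *page,
                  domid_t otherend, struct xen_netif_tx_request *txp,
                  unsigned int nr_slots)
{
        unsigned int dst_off = 0;

        while (nr_slots-- && dst_off + txp->size <= PAGE_SIZE) {
                gop->source.u.ref  = txp->gref;    /* guest grant ref */
                gop->source.domid  = otherend;
                gop->source.offset = txp->offset;

                gop->dest.u.gmfn   = virt_to_mfn(page_address(page));
                gop->dest.domid    = DOMID_SELF;
                gop->dest.offset   = dst_off;      /* pack sequentially */

                gop->len   = txp->size;
                gop->flags = GNTCOPY_source_gref;

                dst_off += txp->size;
                gop++;
                txp++;
        }
        return gop;                                /* next free copy op */
}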

It is worth summarizing an (off-line) discussion I had with Wei on this
patch.

There are two regressions that need to be addressed here.

1. The reduction of the number of supported ring entries (slots) per
packet (from 18 to 17).

2. The XSA-39 security fix, which turned "too many frags" errors from
simply dropping the packet into a fatal error that disables the VIF.

The root cause of the problem is that the protocol's per-packet slot
limit is poorly specified: it is defined in terms of a property external
to the netback/netfront drivers (i.e., MAX_SKB_FRAGS).
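
For reference, in the kernels under discussion MAX_SKB_FRAGS is defined
in include/linux/skbuff.h roughly as:

/* To allow 64K frame to be packed as single skb without frag_list */
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)

With 4 KiB pages this evaluates to 18, so the effective on-the-wire slot
limit silently tracks a kernel-internal constant that was never meant to
be ABI.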

A secondary problem is that some frontends have used a max slots per
packet value that is larger than netback has supported, e.g., the
Windows GPLPV drivers use up to 19.  These packets have always been
dropped.

The first step is to properly specify the maximum slots per packet as
part of the interface.  This should be specified as 18 (the historical
value).
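
Something along these lines in the shared netif.h header would pin the
value down; the exact constant name here is an assumption, not an
existing definition:

/* Historical per-packet slot limit, promoted to part of the protocol.
 * (Hypothetical name; the value matches the historical MAX_SKB_FRAGS
 * of 18 with 4 KiB pages.) */
#define XEN_NETIF_NR_SLOTS_MIN 18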

The second step is to define a threshold for slots per packet, above
which the guest is considered to be malicious and the error is fatal.
20 seems a sensible value here.

The behavior of netback for a packet is thus (a code sketch follows the
list):

    1-18 slots: valid
   19-20 slots: drop and respond with an error
   21+   slots: fatal error
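
As a minimal sketch of that policy (the names are illustrative, not
actual netback identifiers):

#define NETIF_SLOTS_SPEC_MAX  18  /* specified per-packet limit */
#define NETIF_SLOTS_FATAL     20  /* above this, guest presumed malicious */

enum slot_verdict { SLOTS_VALID, SLOTS_DROP, SLOTS_FATAL };

static enum slot_verdict classify_slots(unsigned int slots)
{
        if (slots <= NETIF_SLOTS_SPEC_MAX)
                return SLOTS_VALID;   /*  1-18: process normally         */
        if (slots <= NETIF_SLOTS_FATAL)
                return SLOTS_DROP;    /* 19-20: drop, respond with error */
        return SLOTS_FATAL;           /*   21+: fatal, disable the VIF   */
}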

Note that we do not make 19-20 valid, as this would be a change to the
protocol and guests may end up relying on it, which would then break
those guests if they migrate to or start on a host with a limit of 18.

A third (and future) step would be to investigate whether increasing the
slots-per-packet limit is sensible.  There would then need to be a
mechanism to negotiate this limit between the frontend and the backend
(see the sketch below).
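
A purely hypothetical sketch of such a negotiation: the xenstore key
"max-slots-per-packet" and the helper below do not exist in the current
protocol and are inventions for illustration only.

#include <linux/kernel.h>
#include <xen/xenbus.h>

static unsigned int negotiate_slot_limit(struct xenbus_device *dev,
                                         unsigned int backend_max)
{
        unsigned int frontend_max;

        /* Frontends that predate the extension advertise nothing;
         * fall back to the historical limit of 18. */
        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "max-slots-per-packet", "%u", &frontend_max) != 1)
                frontend_max = 18;

        /* Use the smaller of the two ends' limits. */
        return min(frontend_max, backend_max);
}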

David