Message-ID: <532C8FF0.7090008@citrix.com>
Date:	Fri, 21 Mar 2014 19:16:00 +0000
From:	Zoltan Kiss <zoltan.kiss@...rix.com>
To:	Ian Campbell <Ian.Campbell@...rix.com>
CC:	<wei.liu2@...rix.com>, <xen-devel@...ts.xenproject.org>,
	<paul.durrant@...rix.com>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <jonathan.davies@...rix.com>
Subject: Re: [PATCH net-next] xen-netback: Stop using xenvif_tx_pending_slots_available

On 21/03/14 09:36, Ian Campbell wrote:
> On Thu, 2014-03-20 at 19:32 +0000, Zoltan Kiss wrote:
>> Since the early days TX stops if there aren't enough free pending slots to
>> consume a maximum sized (slot-wise) packet. The reason for that is probably to
>> avoid the case when we don't have enough free pending slots in the ring to
>> finish the packet. But if we make sure that the pending ring has the same size
>> as the shared ring, that shouldn't really happen. The frontend can only post
>> packets which fit into the free space of the shared ring. If a packet doesn't
>> fit, the frontend has to stop, as it can only increase req_prod when the whole
>> packet fits onto the ring.
>
> My only real concern here is that by removing these checks we are
> introducing a way for a malicious or buggy guest to trigger misbehaviour
> in the backend, leading to e.g. a DoS.
>
> Should we need to add some sanity checks which shut down the ring if
> something like this occurs? I.e. if we come to consume a packet and
> there is insufficient space on the pending ring, we kill the vif.
The backend doesn't see what the guest does with the responses, and 
that's OK: it's the guest's problem, and after netback has increased 
rsp_prod_pvt it doesn't really care. But as soon as the guest starts 
placing new requests after rsp_prod_pvt, or just increases req_prod so 
that req_prod - rsp_prod_pvt > XEN_NETIF_TX_RING_SIZE, it becomes an issue.
So far xenvif_tx_pending_slots_available has indirectly saved us from 
consuming requests that would overwrite still-pending requests, but the 
guest could still overwrite our responses. Again, that's still the 
guest's problem, as we have the original request saved in the pending 
ring data. If the guest went too far, build_gops killed the vif when 
req_prod - req_cons > XEN_NETIF_TX_RING_SIZE.
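That guard looks roughly like this, inside the xenvif_tx_build_gops loop 
(sketched from memory rather than quoted verbatim from netback.c, so treat 
the exact error handling as an assumption):

    RING_IDX prod = vif->tx.sring->req_prod;
    RING_IDX cons = vif->tx.req_cons;

    if (prod - cons > XEN_NETIF_TX_RING_SIZE) {
            /* the frontend claims more outstanding requests than the
             * shared ring can ever hold: fatal guest error, so kill
             * the vif and stop processing this queue */
            xenvif_fatal_tx_err(vif);
            break;
    }
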
"Indirectly" above means it only worked because the pending ring has the 
same size as the shared ring, and the used pending slots map 1-to-1 onto 
the slots between rsp_prod_pvt and req_cons. So 
xenvif_tx_pending_slots_available also means 
(req_cons - rsp_prod_pvt) + XEN_NETBK_LEGACY_SLOTS_MAX < XEN_NETIF_TX_RING_SIZE 
(does this look familiar? :)
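For reference, the pending-slot check and its helper look roughly like 
this (again from memory of common.h, so take the exact definitions as 
assumptions):

    /* number of pending ring slots currently in use */
    static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
    {
            return MAX_PENDING_REQS -
                    vif->pending_prod + vif->pending_cons;
    }

    /* true if a worst-case packet (XEN_NETBK_LEGACY_SLOTS_MAX slots)
     * still fits on the pending ring */
    static inline bool xenvif_tx_pending_slots_available(struct xenvif *vif)
    {
            return nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
                    < MAX_PENDING_REQS;
    }

With the 1-to-1 mapping, nr_pending_reqs(vif) is exactly 
req_cons - rsp_prod_pvt, and MAX_PENDING_REQS is the same 256 as 
XEN_NETIF_TX_RING_SIZE, which gives the inequality above.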
But consuming requests that overrun past rsp_prod_pvt is a problem:
- the NAPI instance races with the dealloc thread over those slots: the 
first reads them as requests while the second writes them as responses
- the NAPI instance also overwrites used pending slots, so skb frag 
release goes wrong, etc.
Fortunately the current RING_HAS_UNCONSUMED_REQUESTS saves us here; let 
me explain it through an example:
rsp_prod_pvt = 0
req_cons = 253
req_prod = 258

Therefore:
req = 5
rsp = 3
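
To spell out where those numbers come from: RING_HAS_UNCONSUMED_REQUESTS 
effectively returns min(req, rsp), i.e. it never reports more work than 
there are response slots left before req_cons would overrun rsp_prod_pvt. 
A minimal standalone sketch of the arithmetic (paraphrasing ring.h from 
memory rather than quoting it):

    #include <stdio.h>

    int main(void)
    {
            unsigned int ring_size    = 256; /* XEN_NETIF_TX_RING_SIZE */
            unsigned int rsp_prod_pvt = 0;
            unsigned int req_cons     = 253;
            unsigned int req_prod     = 258; /* overruns the ring by 2 */

            unsigned int req = req_prod - req_cons;                   /* 5 */
            unsigned int rsp = ring_size - (req_cons - rsp_prod_pvt); /* 3 */

            /* the macro effectively yields the smaller of the two */
            printf("work_to_do = %u\n", req < rsp ? req : rsp);       /* 3 */
            return 0;
    }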

So in xenvif_tx_build_gops work_to_do will be 3, and in 
xenvif_count_requests we bail out when we see that the packet actually 
exceeds that.
So in the end, we are safe here, but we shouldn't change that macro I 
suggested to refactor :)

Zoli


>
> What's the invariant we are relying on here, is it:
>      req_prod >= req_cons >= pending_prod >= pending_cons >= rsp_prod >= rsp_cons
> ?
>
>> This patch avoids this check, makes sure the two rings have the same size,
>> and removes a check from the callback. As we no longer stop the NAPI instance
>> on this condition, we don't have to wake it up when we free up pending slots.
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@...rix.com>
>> ---
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index bef37be..a800a8e 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -81,7 +81,7 @@ struct xenvif_rx_meta {
>>
>>   #define MAX_BUFFER_OFFSET PAGE_SIZE
>>
>> -#define MAX_PENDING_REQS 256
>> +#define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
>
> XEN_NETIF_TX_RING_SIZE is already == 256, right? (Just want to make sure
> this is semantically no change).
Yes, it is __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
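For 4K pages that works out as follows (the layout details are recalled 
from ring.h/netif.h rather than re-checked, so treat the exact numbers as 
assumptions): the ring[] array starts 64 bytes into the shared page and a 
tx request/response union entry is 12 bytes, so (4096 - 64) / 12 = 336, 
which __RD32 rounds down to the nearest power of two, i.e. 256.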


