Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0200C2A@AMSPEX01CL01.citrite.net>
Date: Thu, 9 Jan 2014 09:20:32 +0000
From: Paul Durrant <Paul.Durrant@...rix.com>
To: Zoltan Kiss <zoltan.kiss@...rix.com>,
Ian Campbell <Ian.Campbell@...rix.com>,
Wei Liu <wei.liu2@...rix.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Jonathan Davies" <Jonathan.Davies@...rix.com>
Subject: RE: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX path
> -----Original Message-----
> From: Zoltan Kiss
> Sent: 08 January 2014 21:34
> To: Ian Campbell; Wei Liu; xen-devel@...ts.xenproject.org;
> netdev@...r.kernel.org; linux-kernel@...r.kernel.org; Jonathan Davies
> Cc: Zoltan Kiss; Paul Durrant
> Subject: Re: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX
> path
>
> I just realized, when answering Ma's mail, that this doesn't have the
> desired effect after Paul's flow control improvement: starting the queue
> doesn't drop the packets that cannot fit in the ring. Which in fact might
> not be good.
No, that would not be good.
> We are adding the skb to vif->rx_queue even when
> xenvif_rx_ring_slots_available(vif, min_slots_needed) said there was no
> space for it. Or am I missing something? Paul?
>
That's correct. Part of the flow control improvement was to get rid of needless packet drops. For your purposes, you basically need to avoid using the queuing discipline and take packets into netback's vif->rx_queue regardless of the state of the shared ring, so that you can drop them if they get beyond a certain age. So perhaps you should never stop the netif queue, place an upper limit on vif->rx_queue (either a packet or byte count), and drop when that limit is exceeded (i.e. mimic pfifo or bfifo internally).
Paul
> Zoli
>
> On 08/01/14 00:10, Zoltan Kiss wrote:
> > A malicious or buggy guest can leave its queue filled indefinitely, in which
> > case the qdisc starts to queue packets for that VIF. If those packets came
> > from another guest, it can block that guest's slots and prevent shutdown. To
> > avoid that, we make sure the queue is drained every 10 seconds.
> ...
> > diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> > index 95fcd63..ce032f9 100644
> > --- a/drivers/net/xen-netback/interface.c
> > +++ b/drivers/net/xen-netback/interface.c
> > @@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
> >  	return IRQ_HANDLED;
> >  }
> >
> > +static void xenvif_wake_queue(unsigned long data)
> > +{
> > + struct xenvif *vif = (struct xenvif *)data;
> > +
> > + if (netif_queue_stopped(vif->dev)) {
> > + netdev_err(vif->dev, "draining TX queue\n");
> > + netif_wake_queue(vif->dev);
> > + }
> > +}
> > +
> > static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
> > {
> > struct xenvif *vif = netdev_priv(dev);
> > @@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >  	 * then turn off the queue to give the ring a chance to
> >  	 * drain.
> >  	 */
> > -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> > +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> > +		vif->wake_queue.function = xenvif_wake_queue;
> > +		vif->wake_queue.data = (unsigned long)vif;
> >  		xenvif_stop_queue(vif);
> > +		mod_timer(&vif->wake_queue,
> > +			  jiffies + rx_drain_timeout_jiffies);
> > +	}
> >
> > skb_queue_tail(&vif->rx_queue, skb);
> > xenvif_kick_thread(vif);