Message-ID: <20121125214332.GA2722@shrek.podlesie.net>
Date: Sun, 25 Nov 2012 22:43:33 +0100
From: Krzysztof Mazur <krzysiek@...lesie.net>
To: David Woodhouse <dwmw2@...radead.org>
Cc: netdev@...r.kernel.org, John Crispin <blogic@...nwrt.org>,
Dave Täht <dave.taht@...il.com>,
"Chas Williams (CONTRACTOR)" <chas@....nrl.navy.mil>
Subject: Re: [PATCH] atm: br2684: Fix excessive queue bloat
On Sat, Nov 24, 2012 at 12:01:32AM +0000, David Woodhouse wrote:
> There's really no excuse for an additional wmem_default of buffering
> between the netdev queue and the ATM device. Two packets (one in-flight,
> and one ready to send) ought to be fine. It's not as if it should take
> long to get another from the netdev queue when we need it.
>
> If necessary we can make the queue space configurable later, but I don't
> think it's likely to be necessary.
Maybe some high-speed devices will require a larger queue, especially for
smaller packets, but a 2-packet queue should be sufficient in almost all cases.
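
For reference, the flow-control pattern the patch implements is roughly the
following (a simplified sketch, not the exact hunks; I'm assuming the usual
brvcc->device field and the standard netif helpers):

	/* On registration: allow two packets in the ATM queue. */
	atomic_set(&brvcc->qspace, 2);

	/* In the xmit path, after handing the skb to the vcc: take a
	 * slot and stop the netdev queue if none are left. */
	if (atomic_dec_return(&brvcc->qspace) < 1)
		netif_stop_queue(brvcc->device);

	/* In the vcc 'TX done' (pop) callback: return the slot and wake
	 * the queue when we go from zero free slots back to one. */
	if (atomic_inc_return(&brvcc->qspace) == 1)
		netif_wake_queue(brvcc->device);

That way at most one packet sits queued behind the one in flight, instead of
a whole wmem_default worth of buffering.
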
> static inline struct br2684_vcc *pick_outgoing_vcc(const struct sk_buff *skb,
> @@ -504,6 +505,11 @@ static int br2684_regvcc(struct atm_vcc *atmvcc, void __user * arg)
> brvcc = kzalloc(sizeof(struct br2684_vcc), GFP_KERNEL);
> if (!brvcc)
> return -ENOMEM;
> + /* Allow two packets in the ATM queue. One actually being sent, and one
> + for the ATM 'TX done' handler to send. It shouldn't take long to get
> + the next one from the netdev queue, when we need it. More than that
> + would be bufferbloat. */
> + atomic_set(&brvcc->qspace, 2);
Maybe this magic "2" and the comment should be moved to a #define.
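
Something like (the name is just a suggestion):

	/* Allow two packets in the ATM queue: one actually being sent, and
	 * one ready for the ATM 'TX done' handler to send.  More than that
	 * would be bufferbloat. */
	#define BR2684_QSPACE	2
	...
	atomic_set(&brvcc->qspace, BR2684_QSPACE);
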
> write_lock_irq(&devs_lock);
> net_dev = br2684_find_dev(&be.ifspec);
> if (net_dev == NULL) {
Looks good,
Reviewed-by: Krzysztof Mazur <krzysiek@...lesie.net>
Krzysiek