Message-ID: <20070501231028.4cfa4053@logostar.upir.cz>
Date: Tue, 1 May 2007 23:10:28 +0200
From: Jiri Benc <jbenc@...e.cz>
To: Ulrich Kunitz <kune@...ne-taler.de>
Cc: Daniel Drake <dsd@...too.org>, linville@...driver.com,
netdev@...r.kernel.org, linux-wireless@...r.kernel.org
Subject: Re: [PATCH] zd1211rw-mac80211: limit URB buffering in tx path
On Tue, 1 May 2007 21:50:08 +0200, Ulrich Kunitz wrote:
> On 07-05-01 12:34 Jiri Benc wrote:
> > On Tue, 1 May 2007 04:01:00 +0100 (BST), Daniel Drake wrote:
> > > The old code allowed unlimited buffering of tx frames in URBs
> > > submitted for transfer to the device. This patch stops the
> > > ieee80211_hw queue(s) if too many URBs are ready for submission to
> > > the device. Currently the ZD1211 device supports only one queue.
> >
> > This doesn't look correct to me. The limits should be per queue and you
> > should always stop queues selectively.
>
> The old ZD1211 chip doesn't support queuing and the new ZD1211B
> chip does, but it is unclear how to put packets in the
> different queues. However, the error condition here is that
> packets can't be transmitted over the USB, which will affect all
> queues.
Really? From what you wrote ("if too many URBs are ready for submit") it
seems that the code is triggered when the queue is just full. That's not
necessarily an error condition, and the only thing needed is to stop
the queue. Unless zd1211 is really special here (and then I'd like to know
how it is special).
> Sure, one could manage different high-water marks for the
> different queues, but this is all theoretical currently. I could
> have coded with the explicit knowledge that we support only one
> queue, but it is really not worth the hassle.
If you support one queue only, call ieee80211_stop_queue(hw, 0). Calling
ieee80211_stop_queues if you have just a full queue is wrong.
Thanks,
Jiri
--
Jiri Benc
SUSE Labs
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html