Date:	Fri, 11 May 2007 14:48:14 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Evgeniy Polyakov <johnpol@....mipt.ru>
Cc:	Ian McDonald <ian.mcdonald@...di.co.nz>, netdev@...r.kernel.org,
	Rick Jones <rick.jones2@...com>,
	Vlad Yasevich <vladislav.yasevich@...com>
Subject: Re: [RFC] New driver API to speed up small packets xmits

Hi Evgeniy,

Evgeniy Polyakov <johnpol@....mipt.ru> wrote on 05/11/2007 02:31:38 PM:

> On Fri, May 11, 2007 at 10:34:22AM +0530, Krishna Kumar2
> (krkumar2@...ibm.com) wrote:
> > Not combining packets, I am sending them out in the same sequence it
> > was queued. If the xmit failed, the driver's new API returns the skb
> > which failed to be sent. This skb and all other linked skbs are
> > requeue'd in the reverse order (fofi?) till the next time it is tried
> > again. I see that sometimes I can send tx_queue_len packets in one
> > shot and all succeed. But the downside is that in the failure case,
> > the packets have to be requeue'd.
>
> And what if you have thousand(s) of packets queued and first one has
> failed, requeing all the rest one-by-one is not a solution. If it is
> being done under heavy lock (with disabled irqs especially) it becomes a
> disaster.

If the first packet failed for reasons other than those described below, it
is freed and the next one is attempted. There are three cases where we
cannot continue: no free slots, device blocked, or failing to get the lock.

Jamal had suggested getting the number of available slots from the driver.
queue_stopped is checked before linking packets, so the only other error
case is failing to get the lock, and that applies only in the ~LLTX case,
which could optionally be a check to enable this linking. There could also
be a limit on how many packets are linked; I tried tx_queue_len as well as
tx_queue_len/2, but there could be other options.

> I thought of a bit different approach: driver maintains own queue (or
> has access to stack's one) and only uses lock to dequeue a packet. If
> transmit fails, nothing is requeued, but the same packet is tried until
> transmit is completed. If number of transmits failed in order, driver
> says it is broken and/or stops queue. Thus one can setup several
> descriptors in one go and do it without any locks. Stack calls driver's
> xmit function which essentially sets bit that new packets are available,
> but driver must have access to qdisc.

Allowing drivers to also access the qdisc would increase bugs, since that
code is difficult enough as it is. Right now it is easy to see how the
qdisc is manipulated.

thanks,

- KK

> If e1000 driver would not be so... so 'uncommon' compared to another
> small gige drivers I would try to cook up a patch, but I will not with
> e1000 :)
>
> --
>    Evgeniy Polyakov

