Message-ID: <OF206CB1B7.31D1C39B-ON6525731F.00254E4C-6525731F.00261F1A@in.ibm.com>
Date:	Sat, 21 Jul 2007 12:26:23 +0530
From:	Krishna Kumar2 <krkumar2@...ibm.com>
To:	Patrick McHardy <kaber@...sh.net>
Cc:	davem@...emloft.net, gaagaan@...il.com,
	general@...ts.openfabrics.org, hadi@...erus.ca,
	herbert@...dor.apana.org.au, jagana@...ibm.com, jeff@...zik.org,
	johnpol@....mipt.ru, kumarkr@...ux.ibm.com, mcarlson@...adcom.com,
	mchan@...adcom.com, netdev@...r.kernel.org,
	peter.p.waskiewicz.jr@...el.com, rdreier@...co.com,
	rick.jones2@...com, Robert.Olsson@...a.slu.se, sri@...ibm.com,
	tgraf@...g.ch, xma@...ibm.com
Subject: Re: [PATCH 05/10] sch_generic.c changes.

Hi Patrick,

Patrick McHardy <kaber@...sh.net> wrote on 07/20/2007 11:46:36 PM:

> Krishna Kumar wrote:
> > +static inline int get_skb(struct net_device *dev, struct Qdisc *q,
> > +           struct sk_buff_head *blist,
> > +           struct sk_buff **skbp)
> > +{
> > +   if (likely(!blist) || (!skb_queue_len(blist) && qdisc_qlen(q) <= 1)) {
> > +      return likely((*skbp = dev_dequeue_skb(dev, q)) != NULL);
> > +   } else {
> > +      int max = dev->tx_queue_len - skb_queue_len(blist);
>
>
> I'm assuming the driver will simply leave excess packets in the
> blist for the next run.

Yes, and the next run will be scheduled even if no more xmits are called,
because qdisc_restart()'s call into the driver returns either:
      BUSY : the driver failed to send everything; net_tx_action will
             handle the leftover later (the case you mentioned), or
      OK   : qlen is > 0, so return 1 and __qdisc_run() will re-retry
             (where the blist length will be zero, as the driver
             processed everything on the blist).

> The check for tx_queue_len is wrong though,
> its only a default which can be overriden and some qdiscs don't
> care for it at all.

I think it should not matter whether qdiscs use this or not, or even if it
is modified (unless it is set to zero, in which case this breaks). The
intention behind this check is to ensure that no more than tx_queue_len
skbs are in all queues put together (q->qdisc + dev->skb_blist); otherwise
the blist could grow too large and defeat the purpose of tx_queue_len. Is
that a good justification?

> > -void __qdisc_run(struct net_device *dev)
> > +void __qdisc_run(struct net_device *dev, struct sk_buff_head *blist)
>
>
> And the patches should really be restructured so this change is
> in the same patch changing the header and the caller, for example.

Ah, OK.

Thanks,

- KK

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
