Message-ID: <1409601942.21965.23.camel@localhost>
Date: Mon, 01 Sep 2014 22:05:42 +0200
From: Hannes Frederic Sowa <hannes@...essinduktion.org>
To: David Miller <davem@...emloft.net>
Cc: netdev@...r.kernel.org, therbert@...gle.com, jhs@...atatu.com,
edumazet@...gle.com, jeffrey.t.kirsher@...el.com,
rusty@...tcorp.com.au, dborkman@...hat.com, brouer@...hat.com,
john.r.fastabend@...el.com
Subject: Re: [PATCH 0/2] Get rid of ndo_xmit_flush
On Fr, 2014-08-29 at 20:22 -0700, David Miller wrote:
> From: Hannes Frederic Sowa <hannes@...essinduktion.org>
> Date: Thu, 28 Aug 2014 03:42:54 +0200
>
> > I wonder if we still might need a separate call for tx_flush, e.g. for
> > af_packet if one wants to allow user space control of batching, MSG_MORE
> > with tx hangcheck (also in case user space has control over it) or
> > implement TCP_CORK alike option in af_packet.
>
> I disagree with allowing the user to hold a device TX queue hostage
> across system calls, therefore the user should provide the entire
> batch in such a case.
Ok, granted. With regard to syscall latency this is also a bad idea.
mmapped TX approaches won't even pass through these functions, so we
don't care here.
But as soon as we try to make Qdiscs fully lockless, we have no
guarantee that skbs aren't dequeued from them concurrently, and then a
single Qdisc dequeue entity can no longer reliably notify the driver
that the end of the batch has been reached. Whether this becomes a
problem probably depends on how much of the locking is removed?
Bye,
Hannes