Date:	Tue, 12 Aug 2014 22:44:27 -0700
From:	Raghuram Kothakota <>
To:	David Miller <>
Subject: Re: [PATCH net-next 3/3] sunvnet: Schedule maybe_tx_wakeup as a tasklet from ldc_rx path

On Aug 12, 2014, at 10:29 PM, David Miller <> wrote:

> From: Raghuram Kothakota <>
> Date: Tue, 12 Aug 2014 22:14:09 -0700
>> On Aug 12, 2014, at 9:31 PM, David Miller <> wrote:
>>> From: Raghuram Kothakota <>
>>> Date: Tue, 12 Aug 2014 21:26:28 -0700
>>>> One important point to keep in mind is that packets to different peers
>>>> shouldn't be blocked by one blocked peer. Any flow control or dropping of
>>>> packets needs to be done on a per-port basis.
>>> Until we use a big hammer and reset the LDC channel of that peer, we
>>> _absolutely_ should not reorder traffic and send to non-stuck peers.
>> The packet ordering requirement applies only to packets destined for a
>> specific destination; packets to different destinations can flow freely
>> as long as they do not cause ordering issues. In the case of sunvnet,
>> each peer except the switch-port serves a specific destination,
>> so sending packets to the other ports would not reorder packets.
>> Even if the blocked LDC is a peer-to-peer channel, disconnecting
>> that LDC channel and sending the packets via the switch-port would not be
>> expected to change packet order, since the LDC reset is expected
>> to drop all packets currently in the ring.
> Ok, that makes sense.
> So the question is how to manage this on the driver side, and the most
> natural way I see to do this would be to use multiple TX netdev queues
> and a custom netdev_ops->ndo_select_queue() method which selects the
> queue based upon the peer that would be selected.

Thanks, a method to accomplish this with multiple Tx netdev queues
would be wonderful. We will research the custom ndo_select_queue()
method to direct traffic automatically at the network stack level.
We probably also need to look for ways to increase parallelism to boost
performance; I assume multiple Tx queues would be a method to accomplish
that as well.
