Message-ID: <21cdcfc2-0efc-c0df-6542-05bb8078d866@gmail.com>
Date:   Mon, 7 Aug 2017 15:26:30 -0700
From:   Florian Fainelli <f.fainelli@...il.com>
To:     netdev@...r.kernel.org, jiri@...nulli.us, jhs@...atatu.com,
        xiyou.wangcong@...il.com
Cc:     davem@...emloft.net, andrew@...n.ch,
        vivien.didelot@...oirfairelinux.com
Subject: multi-queue over IFF_NO_QUEUE "virtual" devices

Hi,

Most DSA-supported Broadcom switches have multiple queues per port
(usually 8), and each of these queues can be configured with different
pause, drop, hysteresis thresholds and so on, in order to make use of the
switch's internal buffering scheme and have some queues achieve some
kind of lossless behavior (e.g. LAN-to-LAN traffic on Q7 has a higher
priority than LAN-to-WAN traffic on Q0).

This is obviously very workload specific, so I'd like to allow as much
programmability as possible.

This brings me to a few questions:

1) If the DSA slave network devices, currently flagged with
IFF_NO_QUEUE, became multi-queue aware on TX, such that an application
could control exactly which switch egress queue is used on a per-flow
basis, would that be a problem? (This is the dynamic selection of the TX
queue.)

2) The conduit (CPU) port network interface has a congestion control
scheme which requires each of its TX queues (32 or 16) to be statically
mapped to the underlying switch port queues, because the congestion
control HW needs to inspect the switch's queue depths to accept or
reject a packet at the CPU's TX ring level. Do we have a good way with
tc to map a virtual/stacked device's queue(s) on top of its
physical/underlying device's queues? (This is the static queue mapping
necessary for congestion control to work.)

Let me know if you think this is the right approach or not.

Thanks!
-- 
Florian
