Message-ID: <20230621200622.gaperrvzsv4jidah@zenon.in.qult.net>
Date: Wed, 21 Jun 2023 22:06:22 +0200
From: Ignacy Gawedzki <ignacy.gawedzki@...en-communications.fr>
To: netdev@...r.kernel.org
Subject: Re: Is EDT now expected to work with any qdisc?

On Mon, Jun 19, 2023 at 03:47:46PM +0200, thus spake Ignacy Gawedzki:
> On Sun, Jun 18, 2023 at 03:52:35PM -0700, thus spake Cong Wang:
> > On Fri, Jun 16, 2023 at 07:31:38PM +0200, Ignacy Gawedzki wrote:
> > > I tried very hard to find a confirmation of my hypothesis in the
> > > kernel sources, but after three days of continuous searching for
> > > answers, I'm starting to feel I'm missing something here.
> > > 
> > > So is it the case that this requested delivery time is honored
> > > before the packet is handed over to the qdisc or the driver?  Or
> > > maybe nowadays pretty much every driver (including veth) honors
> > > that delivery time itself?
> > 
> > It depends.  Some NICs (and their drivers) can schedule packets
> > based on tstamp too; otherwise we have to rely on the software
> > layer (the Qdisc layer) to do so.
> > 
> > Different Qdiscs have different logic for scheduling packets, and
> > not all of them use tstamp to order and schedule them.  This is
> > why you have to pick a particular one, like fq, to get the
> > behavior you expect.
> 
> This is what I understand from reading both the sources and any
> documentation I can get hold of.  But empirical tests seem to
> suggest otherwise, as I have yet to find a driver where this
> scheduling-according-to-tstamp doesn't actually happen.  I've even
> tested with both a tun and a tap device, with noqueue as the root
> qdisc and my BPF code attached as a clsact filter.  Here again, the
> packets get through to the userspace fd according to the pacing
> enforced by setting the tstamp in the BPF filter code.
> 
> I suspect that the pacing is happening somewhere around the clsact
> mini-qdisc, before the packet is handed over to the actual qdisc,
> but I'd rather have confirmation from the actual code before I rely
> on that feature.

I eventually found the answer to my question, so I'm posting a
follow-up here in case somebody else struggles with the same issue.

The pacing was in fact happening in the BPF code itself.  With
noqueue or any qdisc other than fq, the tstamp is ignored and the
packets are handed over to the driver pretty much as they come.
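
For completeness, getting the tstamp honored is then just a matter of
putting fq at the root, e.g. something along the lines of

  tc qdisc replace dev eth0 root fq

with eth0 standing in for the interface under test.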

My BPF code was based on
tools/testing/selftests/bpf/progs/test_tcp_edt.c, which simply drops
any packet whose EDT falls beyond the time horizon.  In any test that
consists of actually flooding the socket with packets, the code
therefore effectively drops everything in excess of the requested
rate.
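
For anyone who lands on this thread later, here is the gist of that
logic, boiled down to a minimal sketch.  This is my condensation, not
the selftest verbatim: the rate and horizon values are arbitrary
placeholders, everything is paced as a single flow, and the selftest
additionally honors a pre-set skb->tstamp and marks ECN close to the
horizon.

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define NS_PER_SEC	1000000000ULL
#define RATE_BPS	(5 * 1000 * 1000)	/* placeholder rate, bytes/s */
#define HORIZON_NS	(2 * NS_PER_SEC)	/* placeholder horizon */

/* Last scheduled departure time.  A single entry, so this sketch
 * paces all traffic as one flow. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, __u32);
	__type(value, __u64);
	__uint(max_entries, 1);
} last_tstamp SEC(".maps");

SEC("tc")
int edt_pace(struct __sk_buff *skb)
{
	__u32 key = 0;
	__u64 now = bpf_ktime_get_ns();
	/* serialization delay of this packet at the target rate */
	__u64 delay = (__u64)skb->len * NS_PER_SEC / RATE_BPS;
	__u64 *last = bpf_map_lookup_elem(&last_tstamp, &key);
	__u64 next = (last && *last) ? *last + delay : now;

	if (next < now)
		next = now;

	/* Beyond the horizon: drop instead of scheduling further
	 * out.  This branch is what shapes a flood down to the
	 * requested rate even when nothing honors the tstamp. */
	if (next - now > HORIZON_NS)
		return TC_ACT_SHOT;

	/* Request delivery at 'next'; only fq (or a capable NIC)
	 * acts on this. */
	skb->tstamp = next;
	bpf_map_update_elem(&last_tstamp, &key, &next, BPF_ANY);

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

The skb->tstamp assignment only turns into actual pacing when fq sits
below; under noqueue and friends it is the horizon drop alone that
shapes the traffic, which is exactly what I was seeing.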

Thanks again and sorry for the noise.

Ignacy

-- 
Ignacy Gawędzki
R&D Engineer
Green Communications
