Date:   Wed, 12 Oct 2022 23:26:24 +0200 (CEST)
From:   Thorsten Glaser <t.glaser@...ent.de>
To:     netdev@...r.kernel.org
Subject: qdisc_watchdog_schedule_range_ns granularity

Hi again,

next thing ☺

For my “faked extra latency” I sometimes need to reschedule into the
future, when all queued-up packets have receive timestamps in the
future. For this, I have been using:

	qdisc_watchdog_schedule_range_ns(&q->watchdog, rs, 0);

Where rs is the smallest in-the-future enqueue timestamp.
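For context, here is a minimal sketch of how that call sits in my dequeue path (the qdisc private-data type and function name are placeholders, not my actual code). As I understand the API, the third argument is delta_ns, a tolerance window: the watchdog's hrtimer may fire anywhere in [expires, expires + delta_ns], so I pass 0 to ask for the earliest possible wakeup:

```c
/* Sketch only; "my_sched_data" and "my_reschedule" are placeholder
 * names for illustration. */
static void my_reschedule(struct Qdisc *sch, u64 rs)
{
	struct my_sched_data *q = qdisc_priv(sch);

	/* rs: smallest in-the-future enqueue timestamp, in ns on the
	 * monotonic clock used by the qdisc watchdog.  delta_ns = 0
	 * requests no extra slack beyond rs itself. */
	qdisc_watchdog_schedule_range_ns(&q->watchdog, rs, 0);
}
```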

However, it was observed that this can add quite a lot more extra
delay than planned: I saw single-digit-millisecond figures, which
IMHO is already a lot, but a coworker saw around 17 ms, which is
definitely too much.

What is the granularity of qdisc watchdogs, and how can I arrange to
be called again for dequeueing in a more precise fashion? I would
prefer to be called within 1 ms (2 if it absolutely must be) of the
timestamp passed.

Thanks in advance,
//mirabilos
-- 
Infrastrukturexperte • tarent solutions GmbH
Am Dickobskreuz 10, D-53121 Bonn • http://www.tarent.de/
Telephon +49 228 54881-393 • Fax: +49 228 54881-235
HRB AG Bonn 5168 • USt-ID (VAT): DE122264941
Geschäftsführer: Dr. Stefan Barth, Kai Ebenrett, Boris Esser, Alexander Steeg

                        ****************************************************
/⁀\ The UTF-8 Ribbon
╲ ╱ Campaign against      Mit dem tarent-Newsletter nichts mehr verpassen:
 ╳  HTML eMail! Also,     https://www.tarent.de/newsletter
╱ ╲ header encryption!
                        ****************************************************
