Date:   Wed, 12 Oct 2022 15:36:21 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Thorsten Glaser <t.glaser@...ent.de>, netdev@...r.kernel.org
Subject: Re: qdisc_watchdog_schedule_range_ns granularity


On 10/12/22 14:26, Thorsten Glaser wrote:
> Hi again,
>
> next thing ☺
>
> For my “faked extra latency” I sometimes need to reschedule into the
> future, when all queued-up packets have receive timestamps in
> the future. For this, I have been using:
>
> 	qdisc_watchdog_schedule_range_ns(&q->watchdog, rs, 0);
>
> Where rs is the smallest in-the-future enqueue timestamp.
>
> However, it was observed that this can add quite a lot more extra
> delay than planned: I saw single-digit millisecond figures, which
> IMHO is already a lot, but a coworker saw around 17 ms, which is
> definitely too much.
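
For reference, a minimal sketch of the pattern described in the quote
above, assuming a hypothetical qdisc whose private data embeds the
watchdog (the myfakedelay_* names and helpers are illustrative, not
code from Thorsten's qdisc):

#include <net/pkt_sched.h>

struct myfakedelay_sched_data {		/* hypothetical private data */
	struct qdisc_watchdog watchdog;
	/* ... queue of delayed skbs ... */
};

static struct sk_buff *myfakedelay_dequeue(struct Qdisc *sch)
{
	struct myfakedelay_sched_data *q = qdisc_priv(sch);
	u64 now = ktime_get_ns();
	u64 rs;

	/* rs = smallest in-the-future enqueue timestamp, as in the
	 * quote; how it is computed depends on the queue layout
	 * (elided here).
	 */
	rs = myfakedelay_next_timestamp(q);	/* hypothetical helper */

	if (rs > now) {
		/* Nothing is due yet: arm the watchdog.  The third
		 * argument is the allowed slack in ns; 0 requests the
		 * most precise expiry the timer subsystem can deliver.
		 */
		qdisc_watchdog_schedule_range_ns(&q->watchdog, rs, 0);
		return NULL;
	}

	return myfakedelay_pop(sch, q);		/* hypothetical helper */
}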

Make sure your .config has

CONFIG_HIGH_RES_TIMERS=y

(Without CONFIG_HIGH_RES_TIMERS, hrtimers fall back to jiffy
granularity, so a watchdog wakeup can be late by several milliseconds
depending on HZ.)

I don't know how you measure this latency, but net/sched/sch_fq.c has
instrumentation, and the following command on a random host in my lab
shows an average (EWMA) latency smaller than 29 usec (32 TX queues on
the NIC):

tc -s -d qd sh dev eth1 | grep latency
   gc 194315 highprio 0 throttled 902 latency 10.7us
   gc 196277 highprio 0 throttled 156 latency 11.8us
   gc 84107 highprio 0 throttled 286 latency 13.7us
   gc 19408 highprio 0 throttled 324 latency 10.9us
   gc 309405 highprio 0 throttled 370 latency 11.1us
   gc 147821 highprio 0 throttled 154 latency 12.2us
   gc 84768 highprio 0 throttled 2859 latency 10.7us
   gc 181833 highprio 0 throttled 4311 latency 12.9us
   gc 117038 highprio 0 throttled 1127 latency 11.1us
   gc 168430 highprio 0 throttled 1784 latency 22.1us
   gc 71086 highprio 0 throttled 2339 latency 14.3us
   gc 127584 highprio 0 throttled 1396 latency 11.5us
   gc 96239 highprio 0 throttled 297 latency 16.9us
   gc 96490 highprio 0 throttled 6374 latency 11.3us
   gc 117284 highprio 0 throttled 2011 latency 11.5us
   gc 122355 highprio 0 throttled 303 latency 12.8us
   gc 221196 highprio 0 throttled 330 latency 11.3us
   gc 204193 highprio 0 throttled 121 latency 12us
   gc 177423 highprio 0 throttled 1012 latency 11.9us
   gc 70236 highprio 0 throttled 1015 latency 15us
   gc 166721 highprio 0 throttled 488 latency 11.9us
   gc 92794 highprio 0 throttled 963 latency 17.1us
   gc 229031 highprio 0 throttled 274 latency 12.2us
   gc 109511 highprio 0 throttled 234 latency 10.5us
   gc 89160 highprio 0 throttled 729 latency 10.7us
   gc 182940 highprio 0 throttled 234 latency 11.7us
   gc 172111 highprio 0 throttled 2439 latency 11.4us
   gc 101261 highprio 0 throttled 2614 latency 11.6us
   gc 95759 highprio 0 throttled 336 latency 11.3us
   gc 103392 highprio 0 throttled 2990 latency 11.2us
   gc 173068 highprio 0 throttled 955 latency 16.5us
   gc 97893 highprio 0 throttled 748 latency 11.7us

Note that after the timer fires, a TX softirq is scheduled (to send
more packets from qdisc -> NIC). Under high CPU pressure, it is
possible the softirq is delayed, because ksoftirqd might compete with
user threads.
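
For reference, the watchdog expiry path looks roughly like this
(paraphrased from net/sched/sch_api.c; check your tree for the exact
code). The hrtimer callback only reschedules the qdisc; the actual
dequeue work runs later from the TX softirq:

static enum hrtimer_restart qdisc_watchdog(struct hrtimer *timer)
{
	struct qdisc_watchdog *wd = container_of(timer, struct qdisc_watchdog,
						 timer);

	rcu_read_lock();
	/* __netif_schedule() raises NET_TX_SOFTIRQ; the qdisc's dequeue
	 * runs from that softirq, so any softirq scheduling latency
	 * adds to the delay seen by the qdisc.
	 */
	__netif_schedule(qdisc_root(wd->qdisc));
	rcu_read_unlock();

	return HRTIMER_NORESTART;
}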


>
> What is the granularity of qdisc watchdogs, and how can I arrange
> to be called again for dequeueing in a more precise fashion? I would
> prefer to be called within 1 ms (2 if it absolutely must be) of the
> timestamp passed.
>
> Thanks in advance,
> //mirabilos
