Date:   Wed, 12 Oct 2022 16:32:56 -0700
From:   Eric Dumazet <>
To:     Thorsten Glaser <>
Subject: Re: qdisc_watchdog_schedule_range_ns granularity

On 10/12/22 16:15, Thorsten Glaser wrote:
> On Wed, 12 Oct 2022, Eric Dumazet wrote:
> It does.
>> I don't know how you measure this latency, but net/sched/sch_fq.c has
>> instrumentation,
> On enqueue I add now+extradelay and save that as enqueue timestamp.
> On dequeue I check that now>=timestamp, then process the packet,
> measuring now-timestamp as the queue delay. This is surprisingly
> higher than the configured extra delay.

Even when packets are eligible, the qdisc itself can be stopped if the 
NIC queue is full (or BQL throttles it).

net/sched/sch_fq.c does not use the skb tstamp, which could very well 
be in the past, but an internal variable (q->time_next_delayed_flow).

> I’ll add some printks as well, to see when I’m called next after
> such a watchdog scheduling.
>> Under high cpu pressure, it is possible the softirq is delayed,
>> because ksoftirqd might compete with user threads.
> Is it a good idea to renice these?

It depends on whether you are willing to starve user thread(s) under 
flood/attack; I guess you can try.
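If one does experiment with this, the ksoftirqd threads are per-CPU kernel threads, so the experiment would look something like the following (requires root; a sketch of the idea, not a recommendation, and the -10 value is arbitrary):

```shell
# Raise the scheduling priority of every ksoftirqd thread (one per CPU).
# Requires root. Illustrative only: this trades softirq latency against
# user threads, which is exactly the starvation risk mentioned above.
for pid in $(pgrep ksoftirqd); do
	renice -n -10 -p "$pid"
done
```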

> Thanks,
> //mirabilos
