Message-ID: <20220816093314.hqfnzangzamjdpkl@skbuf>
Date: Tue, 16 Aug 2022 09:33:14 +0000
From: Vladimir Oltean <vladimir.oltean@....com>
To: Ferenc Fejes <ferenc.fejes@...csson.com>
CC: "vinicius.gomes@...el.com" <vinicius.gomes@...el.com>,
"marton12050@...il.com" <marton12050@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"peti.antal99@...il.com" <peti.antal99@...il.com>
Subject: Re: igc: missing HW timestamps at TX
Hi Ferenc,
On Mon, Aug 15, 2022 at 06:47:31AM +0000, Ferenc Fejes wrote:
> I just played with those a little. Looks like --cpu-mask is the one
> that helps in my case. For example, I checked which CPU core
> igc_ptp_tx_work runs on:
>
> # bpftrace -e 'kprobe:igc_ptp_tx_work { printf("%d\n", cpu); exit(); }'
> Attaching 1 probe...
> 0
I think this print is slightly irrelevant in the grand scheme of
things, or at least not very stable. Because schedule_work() is
implemented as "queue_work(system_wq, work)", and queue_work() is
implemented as "queue_work_on(WORK_CPU_UNBOUND, wq, work)", the work
item associated with igc_ptp_tx_work() is not bound to any requested
CPU. So unless the prints are taken during the actual test, rather than
just once before it, the percpu kthread worker which executes it may
vary from one run to another. For an unbound work item, __queue_work()
selects the CPU based on the raw_smp_processor_id() of the caller (in
this case, the IRQ handler). So it will basically depend upon the tsync
interrupt affinity.
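(To see where the work item actually runs during the test, a map-based
one-liner along these lines should work better than a one-shot probe;
untested sketch, it prints a per-CPU execution count at exit:

# bpftrace -e 'kprobe:igc_ptp_tx_work { @runs[cpu] = count(); }'

The tsync interrupt affinity itself can be read from
/proc/irq/<nr>/smp_affinity, with <nr> taken from /proc/interrupts; the
exact vector name depends on the driver's MSI-X layout.)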
>
> Looks like it's running on core 0, so I ran isochron:
> taskset -c 0 isochron ... --cpu-mask $((1 << 0)) - no lost timestamps
> taskset -c 1 isochron ... --cpu-mask $((1 << 0)) - no lost timestamps
> taskset -c 0 isochron ... --cpu-mask $((1 << 1)) - losing timestamps
> taskset -c 1 isochron ... --cpu-mask $((1 << 1)) - losing timestamps
(...)
> Maybe this is what helps in my case? With the funccount tracer I
> checked that when the sender thread and igc_ptp_tx_work run on the
> same core, the worker is called exactly as many times as the number of
> packets I sent.
>
> However, if the worker runs on a different core, funccount shows some
> random number (less than the number of packets sent), and in that case
> I also lost timestamps.
Thanks.
Note that if igc_ptp_tx_work runs well on the same CPU (0) as the
isochron sender thread, but *not* that well on the other CPU,
I think a simple explanation (for now) might have to do with dynamic
frequency scaling of the CPUs (CONFIG_CPU_FREQ). If the CPU is kept busy
by the sender thread, the governor will increase the CPU frequency, the
tsync interrupt will be processed more quickly, and this will unclog the
"single skb in flight" limitation sooner. If the CPU is mostly idle and
woken up only from time to time by a tsync interrupt, then the "single
skb in flight" limitation will kick in more often, and the isochron
thread will have its TX timestamp requests silently dropped in the
meantime, until the idle CPU ramps up to execute its scheduled work item.
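As a side note, the drops shouldn't be completely invisible: assuming
igc exports the same tx_hwtstamp_skipped and tx_hwtstamp_timeouts
counters as igb does, watching them during the test should show the
skipped requests (eth0 here is a placeholder for your interface):

# ethtool -S eth0 | grep tx_hwtstamp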
To prove my point, you can try to compile a kernel with CONFIG_CPU_FREQ=n.
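A lighter-weight check, assuming the cpufreq sysfs interface is
available, is to pin the scaling governor to "performance" before the
test, which should rule frequency scaling in or out without a rebuild:

# echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor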
Makes sense?