Message-ID: <871rve5229.fsf@linux.intel.com>
Date: Mon, 14 Oct 2019 16:39:58 -0700
From: Vinicius Costa Gomes <vinicius.gomes@...el.com>
To: Murali Karicheri <m-karicheri2@...com>,
Vladimir Oltean <olteanv@...il.com>
Cc: "netdev\@vger.kernel.org" <netdev@...r.kernel.org>
Subject: Re: taprio testing - Any help?

Murali Karicheri <m-karicheri2@...com> writes:
>
> My expectation is as follows
>
> AAAAABBBBBCCCCCDDDDDEEEEE
>
> Where AAAAA is traffic from TC0, BBBBB is the UDP stream for port 10000,
> CCCCC the stream for port 20000, DDDDD for 30000, and EEEEE for 40000.
> Each can be a max of 4 msec. Is the expectation correct? At least that
> is my understanding.
Your expectation is correct.
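For reference, reconstructing from the qdisc dump you pasted below, the
setup was presumably something along these lines (a sketch only, the
exact command you used may differ; I am assuming software mode, i.e. no
'flags' argument):

  tc qdisc replace dev eth0 parent root handle 100 taprio \
      num_tc 5 \
      map 0 1 2 3 4 4 4 4 4 4 4 4 4 4 4 4 \
      queues 1@0 1@1 1@2 1@3 1@4 \
      base-time 1564768921123459533 \
      sched-entry S 0x01 4000000 \
      sched-entry S 0x02 4000000 \
      sched-entry S 0x04 4000000 \
      sched-entry S 0x08 4000000 \
      sched-entry S 0x10 4000000 \
      clockid CLOCK_TAI

With five 4 msec entries, the cycle-time of 20000000 ns in your dump is
what taprio computes automatically.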
>
> But what I see is alternating packets with port 10000/20000/30000/40000
> at the wireshark capture and it doesn't make sense to me. If you
> look at the timestamp, there is nothing showing the Gate is honored
> for Tx. Am I missing something?
Remember that taprio (in software mode) has no control after the packet
is delivered to the driver. So, even if taprio obeys your traffic
schedule perfectly, the driver/controller may decide to send packets
according to some other logic.
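Also note that a capture taken on the transmitting host happens before
the controller gets its hands on the packets, so the timestamps tell
you when the stack handed frames to the driver, not when they hit the
wire. To see the actual wire order, capture on the link partner,
ideally with hardware timestamps if the NIC there supports them.
Something like (assuming the capture NIC exposes a hardware clock):

  # list the timestamp sources the capture NIC offers
  tcpdump -i eth0 -J
  # capture using the NIC's hardware clock, nanosecond resolution
  tcpdump -i eth0 -j adapter_unsynced --time-stamp-precision=nano -w taprio.pcap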
>
> The tc stats shows packets are going through specific TC/Gate
>
> root@...7xx-evm:~# tc -d -p -s qdisc show dev eth0
> qdisc taprio 100: root refcnt 9 tc 5 map 0 1 2 3 4 4 4 4 4 4 4 4 4 4 4 4
> queues offset 0 count 1 offset 1 count 1 offset 2 count 1
> offset 3 count 1 offset 4 count 1
> clockid TAI offload 0 base-time 0 cycle-time 0 cycle-time-extension 0
> base-time 1564768921123459533 cycle-time 20000000 cycle-time-extension 0
> index 0 cmd S gatemask 0x1 interval 4000000
> index 1 cmd S gatemask 0x2 interval 4000000
> index 2 cmd S gatemask 0x4 interval 4000000
> index 3 cmd S gatemask 0x8 interval 4000000
> index 4 cmd S gatemask 0x10 interval 4000000
>
> Sent 80948029 bytes 53630 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo 0: parent 100:5 limit 1000p
> Sent 16184448 bytes 10704 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo 0: parent 100:4 limit 1000p
> Sent 16184448 bytes 10704 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo 0: parent 100:3 limit 1000p
> Sent 16184448 bytes 10704 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo 0: parent 100:2 limit 1000p
> Sent 16184448 bytes 10704 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo 0: parent 100:1 limit 1000p
> Sent 16210237 bytes 10814 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>
> Also my hardware queue stats shows frames going through correct queues.
> Am I missing something?
>
What I usually see in these cases is that the borders (from A to B,
for example) are usually messy, while the middle of each entry is
better behaved.
But there are things that could improve the behavior: reducing TX DMA
coalescing, reducing the number of packet buffers in use in the
controller, disabling power saving features, that kind of thing.
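As a rough sketch of those knobs (assuming eth0; which of them your
particular driver actually supports, I don't know):

  # reduce TX interrupt/DMA coalescing, if supported
  ethtool -C eth0 tx-usecs 0 tx-frames 1
  # shrink the TX ring so fewer packets sit in the controller
  ethtool -G eth0 tx 64
  # disable Energy Efficient Ethernet, if enabled
  ethtool --set-eee eth0 eee off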
If you are already doing something like this, then I would like to know
more, as that could indicate a problem.
[...]
> I am on a 4.19.y kernel with patches specific to taprio
> backported. Am I missing anything related to taprio? I will
> try on the latest master branch as well. But if you can point out
> anything, that would be helpful.
>
[...]
> lcpd/ti-linux-4.19.y) Merged TI feature connectivity into
> ti-linux-4.19.y
I can't think of anything else.
>
>>
>> Regards,
>> -Vladimir
>>
Cheers,
--
Vinicius