Message-ID: <879df38ab1fb6d8fb8f371bfd5e8c213@walle.cc>
Date: Thu, 06 May 2021 16:41:51 +0200
From: Michael Walle <michael@...le.cc>
To: Vladimir Oltean <olteanv@...il.com>
Cc: Vladimir Oltean <vladimir.oltean@....com>,
Xiaoliang Yang <xiaoliang.yang_1@....com>,
UNGLinuxDriver@...rochip.com, alexandre.belloni@...tlin.com,
allan.nielsen@...rochip.com,
Claudiu Manoil <claudiu.manoil@....com>, davem@...emloft.net,
idosch@...lanox.com, joergen.andreasen@...rochip.com,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
Po Liu <po.liu@....com>, vinicius.gomes@...el.com
Subject: Re: [net-next] net: dsa: felix: disable always guard band bit for TAS
config
On 2021-05-06 15:50, Vladimir Oltean wrote:
> On Thu, May 06, 2021 at 03:25:07PM +0200, Michael Walle wrote:
>> On 2021-05-04 23:33, Vladimir Oltean wrote:
>> > [ trimmed the CC list, as this is most likely spam for most people ]
>> >
>> > On Tue, May 04, 2021 at 10:23:11PM +0200, Michael Walle wrote:
>> > > On 2021-05-04 21:17, Vladimir Oltean wrote:
>> > > > On Tue, May 04, 2021 at 09:08:00PM +0200, Michael Walle wrote:
>> > > > > > > > > As explained in another mail in this thread, all queues are marked as
>> > > > > > > > > scheduled. So this is actually a no-op, correct? It doesn't matter if
>> > > > > > > > > it is set or not for now. Dunno why we even care about this bit then.
>> > > > > > > >
>> > > > > > > > It matters because ALWAYS_GUARD_BAND_SCH_Q reduces the available
>> > > > > > > > throughput when set.
>> > > > > > >
>> > > > > > > Ahh, I see now. All queues are "scheduled" but the guard band only
>> > > > > > > applies
>> > > > > > > for "non-scheduled" -> "scheduled" transitions. So the guard band is
>> > > > > > > never
>> > > > > > > applied, right? Is that really what we want?
>> > > > > >
>> > > > > > Xiaoliang explained that yes, this is what we want. If the end user
>> > > > > > wants a guard band they can explicitly add a "sched-entry 00" in the
>> > > > > > tc-taprio config.
>> > > > >
>> > > > > You're disabling the guard band, then. I figured, but isn't that
>> > > > > surprising for the user? Who else implements taprio? Do they do
>> > > > > it the same way? I mean, this behavior is passed right through to
>> > > > > userspace and has a direct impact on how it is configured. Of
>> > > > > course a user can add it manually, but I'm not sure that is what
>> > > > > we want here. At least it needs to be documented somewhere. Or
>> > > > > maybe it should be a switchable option.
>> > > > >
>> > > > > Consider the following:
>> > > > > sched-entry S 01 25000
>> > > > > sched-entry S fe 175000
>> > > > > basetime 0
>> > > > >
>> > > > > This doesn't guarantee that queue 0 is available at the beginning
>> > > > > of the cycle; in the worst case it takes up to
>> > > > > <begin of cycle> + ~12.5us until the frame makes it through (given
>> > > > > gigabit and 1518b frames).
>> > > > >
>> > > > > Btw. there are also other implementations which don't need a guard
>> > > > > band (because they are store-and-forward and count the remaining
>> > > > > bytes). So yes, using a guard band together with scheduling
>> > > > > degrades performance.
>> > > >
>> > > > What is surprising for the user, and I mentioned this already in another
>> > > > thread on this patch, is that the Felix switch overruns the time gate (a
>> > > > packet taking 2 us to transmit will start transmission even if there is
>> > > > only 1 us left of its time slot, delaying the packets from the next time
>> > > > slot by 1 us). I guess that this is why the ALWAYS_GUARD_BAND_SCH_Q bit
>> > > > exists, as a way to avoid these overruns, but it is a bit of a poor tool
>> > > > for that job. Anyway, right now we disable it and live with the
>> > > > overruns.
>> > >
>> > > We are talking about the same thing here. Why is that a poor tool?
>> >
>> > It is a poor tool because it revolves around the idea of "scheduled
>> > queues" and "non-scheduled queues".
>> >
>> > Consider the following tc-taprio schedule:
>> >
>> > sched-entry S 81 2000 # TC 7 and 0 open, all others closed
>> > sched-entry S 82 2000 # TC 7 and 1 open, all others closed
>> > sched-entry S 84 2000 # TC 7 and 2 open, all others closed
>> > sched-entry S 88 2000 # TC 7 and 3 open, all others closed
>> > sched-entry S 90 2000 # TC 7 and 4 open, all others closed
>> > sched-entry S a0 2000 # TC 7 and 5 open, all others closed
>> > sched-entry S c0 2000 # TC 7 and 6 open, all others closed
>> >
>> > Otherwise said, traffic class 7 should be able to send any time it
>> > wishes.
>>
>> What is the use case behind that? TC7 (with the highest priority)
>> may always take precedence over the other TCs, so what is the point
>> of having a dedicated window for them?
>
> Worst case latency is obviously better for an intermittent stream (not
> more than one packet in flight at a time) in TC7 than it is for any
> stream in TC6-TC0. But intermittent streams in TC6-TC0 also have their
> own worst case guarantees (assuming that 2000 ns is enough to fit one
> TC 7 frame and one frame from the TC6-TC0 range).
Oh, and I missed that: TC0-TC6 probably won't work because those gates
are too narrow (12.5us guard band), unless of course you set MAXSDU to
a smaller value. Which would IMHO be the correct thing to do here.
-michael