Date:   Mon, 08 May 2017 11:22:35 -0700
From:   Stefan Agner <stefan@...er.ch>
To:     Andy Duan <fugang.duan@....com>, Andrew Lunn <andrew@...n.ch>
Cc:     festevam@...il.com, netdev@...r.kernel.org,
        netdev-owner@...r.kernel.org
Subject: Re: FEC on i.MX 7 transmit queue timeout

On 2017-05-07 19:13, Andy Duan wrote:
> From: Andrew Lunn <andrew@...n.ch> Sent: Friday, May 05, 2017 8:24 PM
>>To: Andy Duan <fugang.duan@....com>
>>Cc: Stefan Agner <stefan@...er.ch>; festevam@...il.com;
>>netdev@...r.kernel.org; netdev-owner@...r.kernel.org
>>Subject: Re: FEC on i.MX 7 transmit queue timeout
>>
>>> No, it is not a workaround. As I said, queue1 and queue2 are AVB
>>> paths and have higher priority in transmission.
>>
>>Does this higher priority result in the low-priority queue being starved?
>>Is that why the timer goes off? What happens when somebody does use AVB?
>>Are we back to the same problem? This is what makes it sound like a
>>workaround, not a fix.
>>
>>     Andrew
> Yes, queue0 may be blocked by queue1 and queue2, and then the queue0
> watchdog may be triggered.
> If somebody uses AVB queue1 and queue2, only the remaining bandwidth
> is left for queue0. For example, on a 100Mbps link, if queue1 consumes
> 50Mbps and queue2 consumes 50Mbps for audio and video streaming, then
> queue0 (best effort) has 0 bandwidth, so the user cannot send any
> asynchronous frames (IP, TCP/UDP) on the network. Of course this is an
> extreme case.
> Essentially, asynchronous frames (IP) go to queue0 in the original
> design. To achieve this, just implement the .ndo_select_queue()
> callback in the driver, as in the fsl tree.

I guess you refer to this commit?

http://git.freescale.com/git/cgit.cgi/imx/linux-imx.git/commit/?h=imx_4.1.15_2.0.0_ga&id=b0d8fa989651baf847287f6888f4d7b723e2a207

It seems that by default a credit-based scheme is enabled
(ENETx_QOS[TX_SCHEME] = 000). The driver enables queue1/2 and assigns
each of them 50% of the bandwidth (IDLE_SLOPE_1/2 is set to 0x200,
which according to the register description of IDLE_SLOPE means a BW
fraction of 0.5!). This actually violates the note in register
ENETx_DMAnCFG:

"NOTE: For AVB applications, the BW fraction of class 1 and class 2
combined must not exceed .75."

Instead of using the credit-based scheme we could switch to round robin,
but I am not sure that is what we want.

What is the default criterion for selecting queues when .ndo_select_queue
is not provided? I guess it tries to balance individual streams/processes
for better SMP performance?


--
Stefan
