Message-ID: <3346a456-48f7-44e4-b9f4-c7f13032d820@huawei.com>
Date: Tue, 24 Sep 2024 10:06:04 +0800
From: Jijie Shao <shaojijie@...wei.com>
To: Miao Wang <shankerwangmiao@...il.com>
CC: <shaojijie@...wei.com>, <netdev@...r.kernel.org>, Shengqi Chen
	<harry-chen@...look.com>, Yuxiang Zhang <zz593141477@...il.com>, Jiajie Chen
	<jiegec@...com>, Mirror Admin Tuna <mirroradmin@...a.tsinghua.edu.cn>, Salil
 Mehta <salil.mehta@...wei.com>
Subject: Re: [BUG Report] hns3: tx_timeout on high memory pressure


On 2024/9/24 0:12, Miao Wang wrote:
>> On 2024/9/23 21:37, Jijie Shao <shaojijie@...wei.com> wrote:
>>
>>
>> our analysis. I wonder how I can verify the scheduling of NAPI.
>> You can use napi trace to verify it:
>> echo 1 > /sys/kernel/debug/tracing/events/napi/napi_poll/enable
>> cat /sys/kernel/debug/tracing/trace
> I managed to reproduce it. Attached are the dmesg and the trace log. It seems
> that in the trace log, napi_poll() keeps being called.
>
> My reproducing environment is a smart git http server, which is using nginx as
> the frontend, git-http-backend as the CGI server and fastcgiwrapper for
> connecting them, running on a Taishan server. The served git repo is linux.git.
> At the same time, run some programs taking up about 70% of the system memory.
> Using several other hosts as git clients, start as many git clone processes as
> possible on the client hosts, about 2000 in total, all pointing at the git
> server, letting the forked git processes on the server side consume all the
> remaining memory and cause an OOM on the server.
>
Hi Miao,
Thanks for the reproduction. I checked the dmesg and the napi trace log.
We can see the first tx timeout occurred at time [19555675.553853], and the
napi poll appears to keep being called the whole time. However, the trace
log covers all napi contexts, and we can only distinguish the napis by the
address of the napi struct.

Since we can't directly map a queue id to a napi poll, I just searched
several poll records for each of them.

napi struct ffff003ffffab350, running on core45: the interval between two consecutive calls near time [19555675.553853] is 11.8s
Line 1:            <...>-2530388 [045] ..s1 19555667.046219: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 2 budget 64
Line 1622:           <idle>-0       [045] ..s. 19555678.885859: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 3 budget 64
Line 1630:            <...>-240     [045] ..s. 19555680.934259: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 18 budget 64
Line 1664:            <...>-952404  [045] ..s1 19555682.647716: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 2 budget 64

napi struct ffff203fdfe83350, running on core88: the interval between consecutive calls near time [19555675.553853] is 0.1s~4s
Line 12:            <...>-1370850 [088] ..s1 19555667.066796: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 1 budget 64
Line 65:            <...>-2530388 [088] ..s1 19555669.357841: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 1 budget 64
Line 70:           <idle>-0       [088] ..s. 19555669.921949: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64
Line 913:            <...>-1627419 [088] ..s. 19555670.945995: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64
Line 1117:            <...>-1627580 [088] ..s1 19555674.376050: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 4 budget 64
Line 1118:            <...>-1627580 [088] ..s1 19555674.376113: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64
Line 1119:            <...>-1627580 [088] ..s1 19555674.376202: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 4 budget 64
Line 1120:            <...>-1627580 [088] ..s1 19555674.428469: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64
Line 1121:            <...>-1627189 [088] ..s. 19555674.469891: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64
Line 1131:            <...>-952227  [088] ..s1 19555675.760740: napi_poll: napi poll on napi struct ffff203fdfe83350 for device (no_device) work 2 budget 64

napi struct ffff202fffe7f350, running on core60: the interval between consecutive calls near time [19555675.553853] is 7s
Line 668:            <...>-952213  [060] ..s1 19555669.970071: napi_poll: napi poll on napi struct ffff202fffe7f350 for device (no_device) work 2 budget 64
Line 670:            <...>-952213  [060] ..s1 19555669.970094: napi_poll: napi poll on napi struct ffff202fffe7f350 for device (no_device) work 2 budget 64
Line 1154:            <...>-952202  [060] ..s1 19555676.156687: napi_poll: napi poll on napi struct ffff202fffe7f350 for device (no_device) work 2 budget 64
Line 1183:            <...>-952140  [060] ..s1 19555676.239415: napi_poll: napi poll on napi struct ffff202fffe7f350 for device (no_device) work 2 budget 64

We can see that some napi polls were not called for more than 5s, exceeding the
tx timeout interval. This may be caused by the CPU being busy, or by there being
no tx traffic for the queue during that time.
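The manual interval search above can be automated. Below is a rough sketch (not
part of the original analysis) that parses napi_poll trace lines, groups
timestamps by napi struct address, and reports the longest gap between
consecutive polls; the 5s threshold is an assumed tx watchdog interval, and the
sample lines are copied from the excerpts above:

```python
import re
from collections import defaultdict

# Sample napi_poll trace lines, copied from the log excerpts above.
SAMPLE = """\
<...>-2530388 [045] ..s1 19555667.046219: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 2 budget 64
<idle>-0       [045] ..s. 19555678.885859: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 3 budget 64
<...>-240     [045] ..s. 19555680.934259: napi_poll: napi poll on napi struct ffff003ffffab350 for device (no_device) work 18 budget 64
"""

# Capture the trace timestamp and the napi struct address from each line.
LINE_RE = re.compile(r'(\d+\.\d+): napi_poll: napi poll on napi struct ([0-9a-f]+)')

def max_poll_gaps(lines):
    """Return the largest interval (seconds) between consecutive napi_poll
    events for each napi struct address seen in the trace."""
    times = defaultdict(list)
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            times[m.group(2)].append(float(m.group(1)))
    gaps = {}
    for addr, stamps in times.items():
        stamps.sort()
        gaps[addr] = max((b - a for a, b in zip(stamps, stamps[1:])), default=0.0)
    return gaps

if __name__ == "__main__":
    for addr, gap in max_poll_gaps(SAMPLE.splitlines()).items():
        flag = "  <-- exceeds assumed 5s tx watchdog" if gap > 5.0 else ""
        print(f"napi struct {addr}: max gap {gap:.1f}s{flag}")
```

Run against the full trace (e.g. `cat /sys/kernel/debug/tracing/trace`), this
would flag any napi whose poll gap exceeds the watchdog interval, matching the
11.8s gap found by hand for ffff003ffffab350.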

Thanks,
Jijie Shao

