Message-ID: <56bbcfbd-149f-4f78-ae73-3bba3bbdd146@huawei.com>
Date: Mon, 23 Sep 2024 20:58:11 +0800
From: Jijie Shao <shaojijie@...wei.com>
To: Miao Wang <shankerwangmiao@...il.com>
CC: <shaojijie@...wei.com>, <netdev@...r.kernel.org>, 陈晟祺
<harry-chen@...look.com>, 张宇翔 <zz593141477@...il.com>,
陈嘉杰 <jiegec@...com>, Mirror Admin Tuna
<mirroradmin@...a.tsinghua.edu.cn>, Salil Mehta <salil.mehta@...wei.com>,
Yisen Zhuang <yisen.zhuang@...wei.com>
Subject: Re: [BUG Report] hns3: tx_timeout on high memory pressure

On 2024/9/23 0:38, Miao Wang wrote:
> It seems that the hns3 driver is trying to allocate 16 contiguous pages of
> memory when initializing, which can fail when the system is under high memory
> pressure.
>
> I have two questions about this:
>
> 1. Is it expected that the tx timeout is really related to the high memory
> pressure, or does the driver not work properly under such a condition?
>
> 2. Can allocating contiguous pages of memory at initialization be avoided?
> I previously met a similar problem in the veth driver, which was later fixed
> by commit 1ce7d306ea63 ("veth: try harder when allocating queue memory"),
> where the allocation was changed to kvcalloc() to reduce the
> possibility of allocation failure. I wonder if similar changes can be applied
> to hns3 when allocating memory regions for non-DMA usage.
>
Hi:
In dmesg, we can see:
tx_timeout count: 35, queue id: 1, SW_NTU: 0x346, SW_NTC: 0x334, napi state: 17
BD_NUM: 0x7f HW_HEAD: 0x346, HW_TAIL: 0x346, BD_ERR: 0x0, INT: 0x0
Because HW_HEAD == HW_TAIL, the hardware has sent all the packets.
The napi state is 17, so the TX interrupt was received and napi scheduling was triggered.
However, napi scheduling did not complete; maybe napi.poll() was not executed.
Is napi not being scheduled in time due to high CPU load in the environment?
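
For reference, the state value 17 (0x11) is the napi->state bitmap. A small
stand-alone decoding sketch, assuming the NAPI_STATE_* enum layout in
include/linux/netdevice.h (bit 0 is NAPI_STATE_SCHED; bit 4 is
NAPI_STATE_LISTED on recent kernels, NAPI_STATE_HASHED on older ones), so
only the SCHED bit indicates a scheduled but unfinished poll:

/* Illustration only, not kernel code: decodes "napi state: 17" from the
 * dump above. Bit positions assume the NAPI_STATE_* enum in
 * include/linux/netdevice.h: bit 0 = NAPI_STATE_SCHED, bit 4 =
 * NAPI_STATE_LISTED on recent kernels (NAPI_STATE_HASHED on older ones). */
#include <stdio.h>

int main(void)
{
	unsigned long state = 0x11;	/* 17 from the tx_timeout dump */

	/* SCHED set means a poll was scheduled and has not completed yet. */
	printf("NAPI_STATE_SCHED: %s\n",
	       (state & (1UL << 0)) ? "set" : "clear");
	/* Bit 4 only means the NAPI instance is registered, not that it ran. */
	printf("bit 4 (LISTED/HASHED): %s\n",
	       (state & (1UL << 4)) ? "set" : "clear");
	return 0;
}
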
To solve the memory allocation failure problem,
you can use kvcalloc() to avoid contiguous page allocation and
reduce the probability of failure under OOM.
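
A minimal sketch of that pattern follows; it is illustrative only, not an
actual hns3 patch. The struct and field names are placeholders, and only
CPU-only bookkeeping memory (not DMA buffers) should be converted this way:

/*
 * Illustrative sketch: replace a kcalloc()/kzalloc() of a large, non-DMA
 * bookkeeping array with kvcalloc(), which falls back to vmalloc() when
 * contiguous pages are scarce. Placeholder names, not the real hns3 code.
 */
#include <linux/slab.h>
#include <linux/mm.h>

struct demo_desc_cb {
	void *buf;			/* per-descriptor software state */
};

struct demo_ring {
	int desc_num;
	struct demo_desc_cb *desc_cb;	/* CPU-only, never handed to hardware */
};

static int demo_alloc_ring_cb(struct demo_ring *ring)
{
	/* kvcalloc() tries kmalloc() first and falls back to vmalloc(), so it
	 * does not fail just because high-order pages are unavailable. */
	ring->desc_cb = kvcalloc(ring->desc_num, sizeof(*ring->desc_cb),
				 GFP_KERNEL);
	return ring->desc_cb ? 0 : -ENOMEM;
}

static void demo_free_ring_cb(struct demo_ring *ring)
{
	/* kvfree() handles both kmalloc- and vmalloc-backed allocations. */
	kvfree(ring->desc_cb);
	ring->desc_cb = NULL;
}
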
Thanks,
Jijie Shao