Message-ID: <2d87cb86-3513-08dd-edb3-96a117b6da2c@linux.alibaba.com>
Date: Fri, 29 Apr 2022 12:58:08 +0800
From: Xunlei Pang <xlpang@...ux.alibaba.com>
To: maqiao <mqaio@...ux.alibaba.com>, luobin9@...wei.com,
davem@...emloft.net, kuba@...nel.org
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
pabeni@...hat.com, huangguangbin2@...wei.com,
keescook@...omium.org, gustavoars@...nel.org,
Xunlei Pang <xlpang@...ux.alibaba.com>
Subject: Re: [PATCH net-next] hinic: fix bug of wq out of bound access

On 2022/4/28 PM8:46, maqiao wrote:
> cc Paolo Abeni, Guangbin Huang, Kees Cook, Gustavo A. R. Silva
>
> On 2022/4/28 PM8:30, Qiao Ma wrote:
>> If the wq has only one page, we need to check whether the wqe rolls
>> over the end of the page by comparing end_idx with curr_idx, and then
>> copy the wqe to the shadow wqe to avoid an out-of-bound access.
>> This check is already done in hinic_get_wqe, but is missing from
>> hinic_read_wqe. This patch fixes it, and removes the unnecessary
>> MASKED_WQE_IDX(). (A standalone sketch of this wrap-over check
>> follows the diff below.)
>>
>> Fixes: 7dd29ee12865 ("hinic: add sriov feature support")
>> Signed-off-by: Qiao Ma <mqaio@...ux.alibaba.com>
>> ---
>> drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
>> index 5dc3743f8091..f04ac00e3e70 100644
>> --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
>> +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
>> @@ -771,7 +771,7 @@ struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
>>      /* If we only have one page, still need to get shadown wqe when
>>       * wqe rolling-over page
>>       */
>> -    if (curr_pg != end_pg || MASKED_WQE_IDX(wq, end_prod_idx) < *prod_idx) {
>> +    if (curr_pg != end_pg || end_prod_idx < *prod_idx) {
>>          void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
>>
>>          copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
>> @@ -841,7 +841,10 @@ struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
>>      *cons_idx = curr_cons_idx;
>>
>> -    if (curr_pg != end_pg) {
>> +    /* If we only have one page, still need to get shadown wqe when
>> +     * wqe rolling-over page
>> +     */
>> +    if (curr_pg != end_pg || end_cons_idx < curr_cons_idx) {
>>          void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
>>
>>          copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
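
For readers unfamiliar with the shadow-wqe trick being discussed, here is a
minimal, self-contained sketch of the wrap-over check. All names (demo_wq,
demo_read_wqe) and the sizes are hypothetical simplifications, not the hinic
driver's actual structures; it only illustrates why the extra
end_idx < start_idx comparison matters on a single-page queue.

/* Minimal sketch of shadow-wqe wrap-over handling; demo values only. */
#include <stdio.h>
#include <string.h>

#define WQEBB_SIZE   8   /* bytes per work-queue basic block (demo value) */
#define PAGE_WQEBBS  4   /* wqebbs per queue page (demo value)            */
#define NUM_PAGES    1   /* single-page queue: the case the patch fixes   */
#define Q_SIZE       (NUM_PAGES * PAGE_WQEBBS)

struct demo_wq {
        unsigned char pages[NUM_PAGES][PAGE_WQEBBS * WQEBB_SIZE];
        /* large enough to hold one wqe copied out contiguously */
        unsigned char shadow_wqe[2 * PAGE_WQEBBS * WQEBB_SIZE];
};

/* Copy num_wqebbs starting at idx into the shadow buffer, following the
 * ring around page boundaries so the caller sees one contiguous wqe. */
static void copy_wqe_to_shadow(struct demo_wq *wq, unsigned int idx,
                               unsigned int num_wqebbs)
{
        unsigned char *dst = wq->shadow_wqe;
        unsigned int i;

        for (i = 0; i < num_wqebbs; i++) {
                unsigned int cur = (idx + i) % Q_SIZE;
                unsigned int pg  = cur / PAGE_WQEBBS;
                unsigned int off = (cur % PAGE_WQEBBS) * WQEBB_SIZE;

                memcpy(dst, &wq->pages[pg][off], WQEBB_SIZE);
                dst += WQEBB_SIZE;
        }
}

static void *demo_read_wqe(struct demo_wq *wq, unsigned int cons_idx,
                           unsigned int num_wqebbs)
{
        unsigned int end_idx = (cons_idx + num_wqebbs - 1) % Q_SIZE;
        unsigned int curr_pg = cons_idx / PAGE_WQEBBS;
        unsigned int end_pg  = end_idx / PAGE_WQEBBS;

        /* Use the shadow copy when the wqe ends on a different page than
         * it starts on, OR when the masked end index wrapped below the
         * start index -- the single-page case the patch adds a check for. */
        if (curr_pg != end_pg || end_idx < cons_idx) {
                copy_wqe_to_shadow(wq, cons_idx, num_wqebbs);
                return wq->shadow_wqe;
        }
        return &wq->pages[curr_pg][(cons_idx % PAGE_WQEBBS) * WQEBB_SIZE];
}

int main(void)
{
        struct demo_wq wq;
        unsigned int i;

        for (i = 0; i < sizeof(wq.pages[0]); i++)
                wq.pages[0][i] = (unsigned char)i;

        /* A 2-wqebb wqe starting in the last slot wraps around: curr_pg
         * equals end_pg (there is only one page), but end_idx (0) is below
         * cons_idx (3), so reading in place would run past the page end. */
        unsigned char *wqe = demo_read_wqe(&wq, Q_SIZE - 1, 2);

        printf("shadow used: %s, first byte: %u\n",
               wqe == wq.shadow_wqe ? "yes" : "no", wqe[0]);
        return 0;
}

With these demo sizes the read returns the shadow buffer; without the
end_idx < cons_idx test, the single-page case would take the in-place path
and read WQEBB_SIZE bytes past the end of the page, which is exactly the
out-of-bound access the patch closes in hinic_read_wqe.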

This is a fundamental problem, and it caused a kernel panic as follows:
Unable to handle kernel paging request at virtual address ffff800041371000
Call trace:
hinic_sq_get_sges+0x50/0x84 [hinic]
free_tx_poll+0x84/0x2fc [hinic]
napi_poll+0xcc/0x270
net_rx_action+0xd8/0x280
__do_softirq+0x120/0x37c
__irq_exit_rcu+0x108/0x140
irq_exit+0x14/0x20
__handle_domain_irq+0x84/0xe0
gic_handle_irq+0x80/0x108
Reviewed-by: Xunlei Pang <xlpang@...ux.alibaba.com>