Date:   Thu, 28 Apr 2022 20:46:51 +0800
From:   maqiao <mqaio@...ux.alibaba.com>
To:     luobin9@...wei.com, davem@...emloft.net, kuba@...nel.org
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        pabeni@...hat.com, huangguangbin2@...wei.com,
        keescook@...omium.org, gustavoars@...nel.org
Subject: Re: [PATCH net-next] hinic: fix bug of wq out of bound access

cc Paolo Abeni, Guangbin Huang, Kees Cook, Gustavo A. R. Silva

On 2022/4/28 8:30 PM, Qiao Ma wrote:
> If the wq has only one page, we need to check whether the wqe rolls
> over the page by comparing end_idx and curr_idx, and then copy the wqe
> into the shadow wqe to avoid out-of-bound access.
> This is already done in hinic_get_wqe, but was missed in
> hinic_read_wqe. This patch fixes it and removes the unnecessary
> MASKED_WQE_IDX().
> 
> Fixes: 7dd29ee12865 ("hinic: add sriov feature support")
> Signed-off-by: Qiao Ma <mqaio@...ux.alibaba.com>
> ---
>   drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
> index 5dc3743f8091..f04ac00e3e70 100644
> --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
> +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
> @@ -771,7 +771,7 @@ struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
>   	/* If we only have one page, still need to get shadown wqe when
>   	 * wqe rolling-over page
>   	 */
> -	if (curr_pg != end_pg || MASKED_WQE_IDX(wq, end_prod_idx) < *prod_idx) {
> +	if (curr_pg != end_pg || end_prod_idx < *prod_idx) {
>   		void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
>   
>   		copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
> @@ -841,7 +841,10 @@ struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
>   
>   	*cons_idx = curr_cons_idx;
>   
> -	if (curr_pg != end_pg) {
> +	/* If we only have one page, we still need to get the shadow wqe
> +	 * when the wqe rolls over the page
> +	 */
> +	if (curr_pg != end_pg || end_cons_idx < curr_cons_idx) {
>   		void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
>   
>   		copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
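
For reviewers who do not have the driver handy, below is a minimal,
self-contained sketch of the roll-over condition this patch adds to the
read path. It is only an illustration: the ring/shadow buffers, MASK()
and copy_to_shadow() are simplified stand-ins, not the real hinic
structures or helpers.

/*
 * Toy model of a wrapping work queue: a WQE that starts near the end
 * of the ring wraps to index 0, so its masked end index becomes
 * smaller than its start index.  Such a WQE is not contiguous in
 * memory and must be copied into a shadow buffer before it can be
 * read as one flat structure.
 */
#include <stdio.h>
#include <string.h>

#define Q_DEPTH		8	/* WQEBBs in the ring (power of 2) */
#define WQEBB_SIZE	4	/* bytes per WQE building block (toy value) */
#define MASK(idx)	((idx) & (Q_DEPTH - 1))

static unsigned char ring[Q_DEPTH][WQEBB_SIZE];		/* wrapping ring */
static unsigned char shadow[Q_DEPTH * WQEBB_SIZE];	/* contiguous copy */

/* Copy num_wqebbs blocks starting at idx, following the wrap-around. */
static void copy_to_shadow(unsigned int idx, unsigned int num_wqebbs)
{
	unsigned int i;

	for (i = 0; i < num_wqebbs; i++)
		memcpy(&shadow[i * WQEBB_SIZE], ring[MASK(idx + i)],
		       WQEBB_SIZE);
}

int main(void)
{
	unsigned int curr_idx = 6, num_wqebbs = 4;	/* starts near the end */
	unsigned int end_idx = MASK(curr_idx + num_wqebbs - 1);

	/* The check added to the read path: a masked end index smaller
	 * than the start index means the WQE wrapped around.
	 */
	if (end_idx < curr_idx) {
		printf("WQE wraps (start %u, end %u): use shadow copy\n",
		       curr_idx, end_idx);
		copy_to_shadow(curr_idx, num_wqebbs);
	} else {
		printf("WQE is contiguous (start %u, end %u)\n",
		       curr_idx, end_idx);
	}
	return 0;
}

Built as an ordinary userspace program, this prints that a 4-block WQE
starting at index 6 of an 8-deep ring wraps (masked end index 1 < start
index 6), which is the case hinic_read_wqe previously handled only when
the WQE also crossed into a different page.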
