Date:   Sun, 30 May 2021 04:36:59 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Eli Cohen <elic@...dia.com>
Cc:     jasowang@...hat.com, virtualization@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] vdpa/mlx5: Fix possible failure in umem size
 calculation

On Sun, May 30, 2021 at 11:27:48AM +0300, Eli Cohen wrote:
> On Sun, May 30, 2021 at 04:21:07AM -0400, Michael S. Tsirkin wrote:
> > On Sun, May 30, 2021 at 11:17:21AM +0300, Eli Cohen wrote:
> > > On Sun, May 30, 2021 at 04:05:57AM -0400, Michael S. Tsirkin wrote:
> > > > On Sun, May 30, 2021 at 09:32:14AM +0300, Eli Cohen wrote:
> > > > > umem size is a 32-bit unsigned value, so assigning it to an int could
> > > > > cause false failures. Set the calculated value inside the function and
> > > > > rename the function to reflect the fact that it updates the size.
> > > > > 
> > > > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > > > > Signed-off-by: Eli Cohen <elic@...dia.com>
> > > > 
> > > > could you clarify the impact of the bug please?
> > > > 
> > > 
> > > This was found by code review. I did not see it causing trouble because
> > > umem sizes are small enough to fit in an int. Nevertheless it's a bug that
> > > deserves a fix.
> > 
> > ok pls include this info in the commit log.
> > 
> 
> Not sure what info you want me to include. Seems to me that the
> changelog already provides the required information.


that this was found by code review and has no practical impact.

> > > > 
> > > > > ---
> > > > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 15 +++++----------
> > > > >  1 file changed, 5 insertions(+), 10 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > index 53312f0460ad..fdf3e74bffbd 100644
> > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > @@ -610,8 +610,8 @@ static void cq_destroy(struct mlx5_vdpa_net *ndev, u16 idx)
> > > > >  	mlx5_db_free(ndev->mvdev.mdev, &vcq->db);
> > > > >  }
> > > > >  
> > > > > -static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
> > > > > -		     struct mlx5_vdpa_umem **umemp)
> > > > > +static void set_umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
> > > > > +			  struct mlx5_vdpa_umem **umemp)
> > > > >  {
> > > > >  	struct mlx5_core_dev *mdev = ndev->mvdev.mdev;
> > > > >  	int p_a;
> > > > > @@ -634,7 +634,7 @@ static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq
> > > > >  		*umemp = &mvq->umem3;
> > > > >  		break;
> > > > >  	}
> > > > > -	return p_a * mvq->num_ent + p_b;
> > > > > +	(*umemp)->size = p_a * mvq->num_ent + p_b;
> > > > >  }
> > > > >  
> > > > >  static void umem_frag_buf_free(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_umem *umem)
> > > > > @@ -650,15 +650,10 @@ static int create_umem(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
> > > > >  	void *in;
> > > > >  	int err;
> > > > >  	__be64 *pas;
> > > > > -	int size;
> > > > >  	struct mlx5_vdpa_umem *umem;
> > > > >  
> > > > > -	size = umem_size(ndev, mvq, num, &umem);
> > > > > -	if (size < 0)
> > > > > -		return size;
> > > > > -
> > > > > -	umem->size = size;
> > > > > -	err = umem_frag_buf_alloc(ndev, umem, size);
> > > > > +	set_umem_size(ndev, mvq, num, &umem);
> > > > > +	err = umem_frag_buf_alloc(ndev, umem, umem->size);
> > > > >  	if (err)
> > > > >  		return err;
> > > > >  
> > > > > -- 
> > > > > 2.31.1
> > > > 
> > 
