Message-ID: <20190625200404.GA17378@mellanox.com>
Date:   Tue, 25 Jun 2019 20:04:08 +0000
From:   Jason Gunthorpe <jgg@...lanox.com>
To:     Michal Kalderon <michal.kalderon@...vell.com>
CC:     "ariel.elior@...vell.com" <ariel.elior@...vell.com>,
        "dledford@...hat.com" <dledford@...hat.com>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH v4 rdma-next 2/3] RDMA/qedr: Add doorbell overflow
 recovery support

On Mon, Jun 24, 2019 at 01:28:08PM +0300, Michal Kalderon wrote:

> +/* Map the kernel doorbell recovery memory entry */
> +int qedr_mmap_db_rec(struct vm_area_struct *vma)
> +{
> +	unsigned long len = vma->vm_end - vma->vm_start;
> +
> +	return remap_pfn_range(vma, vma->vm_start,
> +			       vma->vm_pgoff,
> +			       len, vma->vm_page_prot);
> +}
> +
>  int qedr_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  {
>  	struct qedr_ucontext *ucontext = get_qedr_ucontext(context);
> @@ -390,6 +446,8 @@ int qedr_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  	unsigned long phys_addr = vma->vm_pgoff << PAGE_SHIFT;
>  	unsigned long len = (vma->vm_end - vma->vm_start);
>  	unsigned long dpi_start;
> +	struct qedr_mm *mm;
> +	int rc;
>  
>  	dpi_start = dev->db_phys_addr + (ucontext->dpi * ucontext->dpi_size);
>  
> @@ -405,29 +463,28 @@ int qedr_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
>  		return -EINVAL;
>  	}
>  
> -	if (!qedr_search_mmap(ucontext, phys_addr, len)) {
> -		DP_ERR(dev, "failed mmap, vm_pgoff=0x%lx is not authorized\n",
> +	mm = qedr_remove_mmap(ucontext, phys_addr, len);
> +	if (!mm) {
> +		DP_ERR(dev, "failed to remove mmap, vm_pgoff=0x%lx\n",
>  		       vma->vm_pgoff);
>  		return -EINVAL;
>  	

This is so gross; please follow the pattern other drivers use for
managing the mmap cookie.

In fact I am sick of seeing drivers wrongly re-implement this, so you
now get the job of making some proper core helpers to manage mmap
cookies for drivers.

The EFA driver is probably the best example; I suggest you move that
code to a common file in ib-core and use it here instead of redoing
yet another broken version.

siw has another copy of basically the same thing.
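
Roughly what I have in mind for the core helpers - this is just a
sketch, the struct layout, names and signatures here are made up, not
an existing ib-core API:

struct rdma_user_mmap_entry {
	struct ib_ucontext *ucontext;
	u64 address;       /* kernel side of the mapping */
	u64 length;
	u8  mmap_flag;     /* driver-specific mapping type */
	u64 key;           /* handed to userspace as the mmap offset */
	struct list_head list;
};

/* Driver registers a region and gets back the key that userspace will
 * pass as vm_pgoff; the core owns the per-ucontext lookup table.
 */
u64 rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext, u64 address,
				u64 length, u8 mmap_flag);

/* In the driver's .mmap, the core validates vm_pgoff and the requested
 * length against what was registered and returns the entry, or NULL.
 */
struct rdma_user_mmap_entry *
rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
			 struct vm_area_struct *vma);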

> +static int qedr_init_user_db_rec(struct ib_udata *udata,
> +				 struct qedr_dev *dev, struct qedr_userq *q,
> +				 bool requires_db_rec)
> +{
> +	struct qedr_ucontext *uctx =
> +		rdma_udata_to_drv_context(udata, struct qedr_ucontext,
> +					  ibucontext);
> +
> +	/* Aborting for non doorbell userqueue (SRQ) or non-supporting lib */
> +	if (requires_db_rec == 0 || !uctx->db_rec)
> +		return 0;
> +
> +	/* Allocate a page for doorbell recovery, add to mmap ) */
> +	q->db_rec_data = (void *)get_zeroed_page(GFP_KERNEL);

Pages obtained by get_zeroed_page() should not be inserted with
remap_pfn_range(); those cases need to use vm_insert_page() instead.
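
I.e. for that page the mmap side needs to look more like this (just a
sketch, assuming the recovery entry stays a single zeroed kernel page):

static int qedr_mmap_db_rec(struct qedr_user_db_rec *db_rec_data,
			    struct vm_area_struct *vma)
{
	/* db_rec_data came from get_zeroed_page(), so it is a normal
	 * kernel page - map it with vm_insert_page(), not
	 * remap_pfn_range().
	 */
	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
		return -EINVAL;

	return vm_insert_page(vma, vma->vm_start,
			      virt_to_page(db_rec_data));
}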

>  struct qedr_alloc_ucontext_resp {
>  	__aligned_u64 db_pa;
> @@ -74,6 +83,7 @@ struct qedr_create_cq_uresp {
>  	__u32 db_offset;
>  	__u16 icid;
>  	__u16 reserved;
> +	__u64 db_rec_addr;
>  };

All uapi u64s need to be __aligned_u64 in this file.
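
I.e. db_rec_addr here should be

	__aligned_u64 db_rec_addr;

not __u64.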

> +/* doorbell recovery entry allocated and populated by userspace doorbelling
> + * entities and mapped to kernel. Kernel uses this to register doorbell
> + * information with doorbell drop recovery mechanism.
> + */
> +struct qedr_user_db_rec {
> +	__aligned_u64 db_data; /* doorbell data */
> +};

like this one :\

Jason
