Message-ID: <d2ccef1e-2bea-4596-8787-8d2491ce0278@acm.org>
Date:   Fri, 3 Nov 2023 08:04:15 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     Li Zhijian <lizhijian@...itsu.com>, zyjzyj2000@...il.com,
        jgg@...pe.ca, leon@...nel.org, linux-rdma@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, rpearsonhpe@...il.com,
        matsuda-daisuke@...itsu.com, yi.zhang@...hat.com
Subject: Re: [PATCH RFC V2 6/6] RDMA/rxe: Support PAGE_SIZE aligned MR


On 11/3/23 02:55, Li Zhijian wrote:
> -	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
> +	for_each_sg(sgl, sg, sg_nents, i) {
> +		u64 dma_addr = sg_dma_address(sg) + sg_offset;
> +		unsigned int dma_len = sg_dma_len(sg) - sg_offset;
> +		u64 end_dma_addr = dma_addr + dma_len;
> +		u64 page_addr = dma_addr & PAGE_MASK;
> +
> +		if (sg_dma_len(sg) == 0) {
> +			rxe_dbg_mr(mr, "empty SGE\n");
> +			return -EINVAL;
> +		}
> +		do {
> +			int ret = rxe_store_page(mr, page_addr);
> +			if (ret)
> +				return ret;
> +
> +			page_addr += PAGE_SIZE;
> +		} while (page_addr < end_dma_addr);
> +		sg_offset = 0;
> +	}
> +
> +	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset_p, rxe_set_page);
>   }

Is this change necessary? ib_sg_to_pages() already contains a loop
that splits SG entries larger than mr->page_size into mr->page_size
chunks.
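For reference, the splitting loop in ib_sg_to_pages() looks roughly
like this (a simplified sketch; the actual code in
drivers/infiniband/core/verbs.c also coalesces adjacent SG elements
and reports a partial mapping when set_page() fails):

	struct scatterlist *sg;
	unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
	int i;

	for_each_sg(sgl, sg, sg_nents, i) {
		u64 dma_addr = sg_dma_address(sg) + sg_offset;
		unsigned int dma_len = sg_dma_len(sg) - sg_offset;
		u64 end_dma_addr = dma_addr + dma_len;
		/* Round down to an mr->page_size boundary. */
		u64 page_addr = dma_addr & ~((u64)mr->page_size - 1);

		do {
			/* One set_page() callback per mr->page_size chunk. */
			int ret = set_page(mr, page_addr);

			if (ret)
				return ret;
			page_addr += mr->page_size;
		} while (page_addr < end_dma_addr);

		sg_offset = 0;
	}

If rxe sets mr->page_size to PAGE_SIZE before calling
ib_sg_to_pages(), that loop should already emit one PAGE_SIZE-aligned
address per page.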

Bart.
