Message-ID: <f83087dfb41043648825c382ce6efa61@intel.com>
Date: Thu, 10 Sep 2020 09:24:51 +0000
From: "Loftus, Ciara" <ciara.loftus@...el.com>
To: Björn Töpel <bjorn.topel@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>
CC: "Topel, Bjorn" <bjorn.topel@...el.com>,
"maximmi@...dia.com" <maximmi@...dia.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
"jonathan.lemon@...il.com" <jonathan.lemon@...il.com>
Subject: RE: [PATCH bpf] xsk: fix number of pinned pages/umem size discrepancy
>
> From: Björn Töpel <bjorn.topel@...el.com>
>
> For AF_XDP sockets, there was a discrepancy between the number of
> pinned pages and the size of the umem region.
>
> The size of the umem region is used to validate the AF_XDP descriptor
> addresses. The logic that pinned the pages covered by the region only
> took whole pages into consideration, creating a mismatch between the
> size and pinned pages. A user could then pass AF_XDP addresses outside
> the range of pinned pages, but still within the size of the region,
> crashing the kernel.
>
> This change correctly calculates the number of pages to be
> pinned. Further, the size check for the aligned mode is
> simplified. Now the code simply checks if the size is divisible by the
> chunk size.
>
> Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> Reported-by: Ciara Loftus <ciara.loftus@...el.com>
> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
Thanks for the patch, Björn.
Tested-by: Ciara Loftus <ciara.loftus@...el.com>
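
For reference, here is a quick userspace sketch of the page-count
arithmetic described in the commit message (hypothetical values, not
the kernel code itself; the kernel uses div_u64_rem() plus an
increment, which rounds the same way):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

int main(void)
{
        /* Hypothetical umem size of 2.5 pages (10240 bytes). */
        uint64_t size = 10240;

        /* Old calculation: truncates, so only 2 pages get pinned
         * even though addresses up to 10240 pass validation.
         */
        uint64_t npgs_old = size >> PAGE_SHIFT;

        /* Fixed calculation: rounds up, pinning 3 pages so the
         * tail of the region is backed as well.
         */
        uint64_t npgs_new = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

        printf("old npgs=%llu, new npgs=%llu\n",
               (unsigned long long)npgs_old,
               (unsigned long long)npgs_new);
        return 0;
}

With these values the old code pins 2 pages while descriptor
addresses in [8192, 10240) still pass the size check, which is
exactly the mismatch this patch closes.
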
> ---
> net/xdp/xdp_umem.c | 17 ++++++++---------
> 1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
> index e97db37354e4..b010bfde0149 100644
> --- a/net/xdp/xdp_umem.c
> +++ b/net/xdp/xdp_umem.c
> @@ -303,10 +303,10 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
> 
>  static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
>  {
> +        u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
>          bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
> -        u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
>          u64 npgs, addr = mr->addr, size = mr->len;
> -        unsigned int chunks, chunks_per_page;
> +        unsigned int chunks, chunks_rem;
>          int err;
> 
>          if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
> @@ -336,19 +336,18 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
>          if ((addr + size) < addr)
>                  return -EINVAL;
> 
> -        npgs = size >> PAGE_SHIFT;
> +        npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
> +        if (npgs_rem)
> +                npgs++;
>          if (npgs > U32_MAX)
>                  return -EINVAL;
> 
> -        chunks = (unsigned int)div_u64(size, chunk_size);
> +        chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
>          if (chunks == 0)
>                  return -EINVAL;
> 
> -        if (!unaligned_chunks) {
> -                chunks_per_page = PAGE_SIZE / chunk_size;
> -                if (chunks < chunks_per_page || chunks % chunks_per_page)
> -                        return -EINVAL;
> -        }
> +        if (!unaligned_chunks && chunks_rem)
> +                return -EINVAL;
> 
>          if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
>                  return -EINVAL;
> 
> base-commit: 746f534a4809e07f427f7d13d10f3a6a9641e5c3
> --
> 2.25.1
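
One note on the simplified aligned-mode check, in case it helps other
reviewers: taking chunk_size = 2048 and size = 10240 as a worked
example, chunks = 5 and chunks_rem = 0, so registration now succeeds
(with 3 pages pinned), whereas the old chunks_per_page test would have
rejected it (5 % 2 != 0). If I read the surrounding checks right,
aligned mode already requires chunk_size to be a power of two no
larger than PAGE_SIZE, so chunks never straddle a page boundary and
divisibility by chunk_size is enough to keep every chunk inside the
pinned region.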