Message-ID: <9f62247b-ae36-49d9-9ccc-6ea5a238e147@grimberg.me>
Date: Sun, 2 Jun 2024 10:48:12 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Jakub Kicinski <kuba@...nel.org>, Ofir Gal <ofir.gal@...umez.com>
Cc: davem@...emloft.net, linux-block@...r.kernel.org,
linux-nvme@...ts.infradead.org, netdev@...r.kernel.org,
ceph-devel@...r.kernel.org, dhowells@...hat.com, edumazet@...gle.com,
pabeni@...hat.com, kbusch@...nel.org, axboe@...nel.dk, hch@....de,
philipp.reisner@...bit.com, lars.ellenberg@...bit.com,
christoph.boehmwalder@...bit.com, idryomov@...il.com, xiubli@...hat.com
Subject: Re: [PATCH v2 0/4] bugfix: Introduce sendpages_ok() to check
sendpage_ok() on contiguous pages
On 02/06/2024 1:34, Jakub Kicinski wrote:
> On Thu, 30 May 2024 17:24:10 +0300 Ofir Gal wrote:
>> skbuff: before sendpage_ok - i: 0. page: 0x654eccd7 (pfn: 120755)
>> skbuff: before sendpage_ok - i: 1. page: 0x1666a4da (pfn: 120756)
>> skbuff: before sendpage_ok - i: 2. page: 0x54f9f140 (pfn: 120757)
> noob question, how do you get 3 contiguous pages, the third of which
> is slab? is_slab doesn't mean what I think it does, or we got extremely
> lucky with kmalloc?
>
The contiguous range according to the trace is 256K; the third page was
simply the first !sendpage_ok() page the check ran into.
I asked the same thing. nvme-tcp gets a bio and sets up its own iov_iter
on the bio bvec for sending it over the wire. The test that reproduces this
creates a raid1 md device, which probably has at least some effect on how
we got this buffer.
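(For reference, the send-side setup is conceptually something like the
sketch below; this is not the actual nvme-tcp code, just the idea, and the
function name is made up:

	#include <linux/bio.h>
	#include <linux/net.h>
	#include <linux/uio.h>

	/* Hand a bio's data to the socket by pointing an ITER_BVEC
	 * iov_iter directly at the bio's bvec array. */
	static int send_bio_over_socket(struct socket *sock, struct bio *bio)
	{
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bio->bi_io_vec,
			      bio->bi_vcnt, bio->bi_iter.bi_size);
		return sock_sendmsg(sock, &msg);
	}

so whatever pages md/raid1 put in the bio end up referenced directly by
the socket on the MSG_SPLICE_PAGES path.)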
With the recent multipage bvecs work from Ming, nvme-tcp bvec entries will
often point to contiguous ranges that are larger than PAGE_SIZE. I didn't
look into the implementation of skb_splice_from_iter, but I think it's not
very efficient to extract a contiguous range into a PAGE_SIZE-granular
vector...
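FWIW, the helper from $SUBJECT is, as I understand it, roughly the
following (my own sketch of the idea, not necessarily the exact patch
code):

	/* Apply the existing per-page sendpage_ok() check (!PageSlab &&
	 * page_count >= 1) to every page that the [offset, offset + len)
	 * range touches. */
	static inline bool sendpages_ok(struct page *page, size_t len,
					size_t offset)
	{
		struct page *p = page + (offset >> PAGE_SHIFT);
		size_t npages = DIV_ROUND_UP((offset & ~PAGE_MASK) + len,
					     PAGE_SIZE);
		size_t i;

		for (i = 0; i < npages; i++)
			if (!sendpage_ok(p + i))
				return false;

		return true;
	}

and callers like nvme-tcp would presumably use it on a whole multipage
bvec entry before deciding to set MSG_SPLICE_PAGES, instead of checking
only the first page with sendpage_ok().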