Message-Id: <20200821.141400.594703865403700191.davem@davemloft.net>
Date: Fri, 21 Aug 2020 14:14:00 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: hch@....de
Cc: kuba@...nel.org, colyli@...e.de, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net: bypass ->sendpage for slab pages

From: Christoph Hellwig <hch@....de>
Date: Thu, 20 Aug 2020 06:37:44 +0200

> If you look at who uses sendpage outside the networking layer itself
> you see that it is basically block driver and file systems. These
> have no way to control what memory they get passed and have to deal
> with everything someone throws at them.

I see nvme doing virt_to_page() on several things when it calls into
kernel_sendpage().

This is the kind of stuff I want cleaned up, and which your patch
will neither trap nor address.

In nvme it sometimes seems to check for sendpage validity:
        /* can't zcopy slab pages */
        if (unlikely(PageSlab(page))) {
                ret = sock_no_sendpage(queue->sock, page, offset, len,
                                flags);
        } else {
                ret = kernel_sendpage(queue->sock, page, offset, len,
                                flags);
        }

Yet elsewhere it does not, and just blindly calls:
        ret = kernel_sendpage(queue->sock, virt_to_page(pdu),
                        offset_in_page(pdu) + req->offset, len, flags);

This pdu seems to come from a page frag allocation.
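
For reference, that allocation pattern looks roughly like the sketch
below (simplified, with illustrative names like pdu_size -- not the
exact driver code):

        struct page_frag_cache pf_cache;        /* per-queue frag cache */
        void *pdu;

        /*
         * Frags are carved out of whole, refcounted pages, so
         * virt_to_page()/offset_in_page() on the result is at least
         * legal, unlike doing the same on a kmalloc()ed slab object.
         * But nothing here enforces that the buffer stays safe to
         * zero-copy.
         */
        pdu = page_frag_alloc(&pf_cache, pdu_size,
                              GFP_KERNEL | __GFP_ZERO);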

That's the target side. On the host side:

        ret = kernel_sendpage(cmd->queue->sock, page, cmd->offset,
                        left, flags);

No page slab check or anything like that.
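
If this test is going to live in the callers, it should at least be
centralized so it cannot be forgotten; a hypothetical helper (not
something in the tree today) could look like:

        /* sketch: one place to decide whether zero-copy sendpage is safe */
        static inline bool sendpage_ok(struct page *page)
        {
                return !PageSlab(page) && page_count(page) >= 1;
        }

and every call site would become:

        if (sendpage_ok(page))
                ret = kernel_sendpage(sock, page, offset, len, flags);
        else
                ret = sock_no_sendpage(sock, page, offset, len, flags);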

I'm hesitant to put in the kernel_sendpage() patch, because it provides
a disincentive to fix up code like this.
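
For reference, my reading of the proposed change is roughly this shape
(a sketch, not the exact patch):

        int kernel_sendpage(struct socket *sock, struct page *page,
                            int offset, size_t size, int flags)
        {
                if (sock->ops->sendpage) {
                        /* fall back to a copying sendmsg for slab pages */
                        if (unlikely(PageSlab(page)))
                                return sock_no_sendpage(sock, page, offset,
                                                        size, flags);
                        return sock->ops->sendpage(sock, page, offset,
                                                   size, flags);
                }
                return sock_no_sendpage(sock, page, offset, size, flags);
        }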