Message-ID: <4B018703.3080507@redhat.com>
Date: Mon, 16 Nov 2009 19:08:19 +0200
From: Avi Kivity <avi@...hat.com>
To: David Miller <davem@...emloft.net>
CC: gregory.haskins@...il.com, herbert@...dor.apana.org.au,
ghaskins@...ell.com, mst@...hat.com,
alacrityvm-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [RFC PATCH] net: add dataref destructor to sk_buff
On 11/14/2009 05:04 AM, David Miller wrote:
> From: Gregory Haskins <gregory.haskins@...il.com>
> Date: Fri, 13 Nov 2009 20:33:35 -0500
>
>
>> Well, not with respect to the overall protocol, of course not. But with
>> respect to the buffer in question, it _has_ to be. Or am I missing
>> something?
>>
> sendfile() absolutely, and positively, is not.
>
> Any entity can write to the pages being sent via sendfile(), at will,
> and those writes will show up in the packet stream if they occur
> before the NIC DMAs the memory backed by those pages into its
> buffer.
>
> There is zero data synchronization whatsoever: we don't lock the
> pages, we don't block their usage while they are queued up in the
> socket send queue, nothing like that.
>
>
But it must maintain a reference count on the page being DMAed and drop
it only after the DMA is complete. Otherwise we risk the page being
recycled and arbitrary memory sent out on the wire; an application can
trivially cause this by truncate()ing a file that is mid-sendfile().
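Roughly, the refcounting looks like the sketch below. This is an
illustration only, modelled on what do_tcp_sendpages() and
skb_release_data() do today; the function names here are made up, and
error handling and the real call sites are elided:

#include <linux/mm.h>
#include <linux/skbuff.h>

/* Pin the page when attaching it to the skb as a fragment, so it
 * cannot be recycled while the packet is in flight. */
static void attach_page_to_skb(struct sk_buff *skb, struct page *page,
			       int offset, int size)
{
	int i = skb_shinfo(skb)->nr_frags;

	get_page(page);
	skb_fill_page_desc(skb, i, page, offset, size);
	skb->len      += size;
	skb->data_len += size;
	skb->truesize += size;
}

/* Runs only when the last reference to the skb data goes away,
 * i.e. after the NIC has finished DMAing the fragments. */
static void release_skb_pages(struct sk_buff *skb)
{
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
		put_page(skb_shinfo(skb)->frags[i].page);
}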
> The user returns long before it ever hits the wire, and there is zero
> "notification" to the user that the pages in question for the
> sendfile() request are no longer in use.
>
The put_page() is exactly that notification, except that it doesn't
reach the caller. Gregory's patch (and the earlier shared-info
destructor patches) is an attempt to make it reach the caller, IIUC.
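To make the idea concrete, here is a self-contained toy model of such
a destructor (compilable with cc -std=c11; every name in it is invented
for illustration and is not taken from Gregory's actual patch):

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for skb_shared_info: a dataref plus the proposed hook. */
struct toy_shinfo {
	atomic_int dataref;
	void (*destructor)(void *arg);	/* hypothetical notification hook */
	void *destructor_arg;
};

/* Called wherever the stack would drop a dataref today; when the
 * count hits zero, the producer of the pages finally hears about it. */
static void toy_put_dataref(struct toy_shinfo *si)
{
	if (atomic_fetch_sub(&si->dataref, 1) == 1 && si->destructor)
		si->destructor(si->destructor_arg);
}

static void pages_done(void *arg)
{
	printf("producer notified: %s pages are reusable\n",
	       (const char *)arg);
}

int main(void)
{
	struct toy_shinfo si = {
		.destructor	= pages_done,
		.destructor_arg	= "sendfile",
	};

	atomic_init(&si.dataref, 2);	/* e.g. original skb + one clone */
	toy_put_dataref(&si);		/* clone freed: no notification yet */
	toy_put_dataref(&si);		/* last ref gone: destructor fires */
	return 0;
}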
--
error compiling committee.c: too many arguments to function