Message-ID: <CALx6S37kxOgZBE+8jvuLQDXSM+C86DSstfkhX4NjUpiT+-g6eQ@mail.gmail.com>
Date: Thu, 3 Aug 2017 14:07:28 -0700
From: Tom Herbert <tom@...bertland.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Tom Herbert <tom@...ntonium.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Rohit Seth <rohit@...ntonium.net>
Subject: Re: [PATCH net-next 1/3] proto_ops: Add locked held versions of
sendmsg and sendpage
On Thu, Aug 3, 2017 at 1:21 PM, John Fastabend <john.fastabend@...il.com> wrote:
> On 07/28/2017 04:22 PM, Tom Herbert wrote:
>> Add new proto_ops sendmsg_locked and sendpage_locked that can be
>> called when the socket lock is already held. Correspondingly, add
>> kernel_sendmsg_locked and kernel_sendpage_locked as front end
>> functions.
>>
>> These functions will be used in zero proxy so that we can take
>> the socket lock in a ULP sendmsg/sendpage and then directly call the
>> backend transport proto_ops functions.
>>
>
> [...]
>
>>
>> +int kernel_sendpage_locked(struct sock *sk, struct page *page, int offset,
>> +			   size_t size, int flags)
>> +{
>> +	struct socket *sock = sk->sk_socket;
>> +
>> +	if (sock->ops->sendpage_locked)
>> +		return sock->ops->sendpage_locked(sk, page, offset, size,
>> +						  flags);
>> +
>> +	return sock_no_sendpage_locked(sk, page, offset, size, flags);
>> +}
>
> How about just returning EOPNOTSUPP here and forcing implementations to do
> both sendmsg and sendpage? The only implementation of these callbacks already
> does this, and any other socket would just wind its way through a few layers
> of calls before returning EOPNOTSUPP.
>
Seems reasonable, but we should probably make the same change to
kernel_sendpage to be consistent.
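
Something along those lines, perhaps (a rough, untested sketch; it assumes
returning -EOPNOTSUPP directly is acceptable for callers when the transport
doesn't provide a sendpage_locked op):

int kernel_sendpage_locked(struct sock *sk, struct page *page, int offset,
			   size_t size, int flags)
{
	struct socket *sock = sk->sk_socket;

	/* No fallback path: the transport must supply its own locked variant. */
	if (!sock->ops->sendpage_locked)
		return -EOPNOTSUPP;

	return sock->ops->sendpage_locked(sk, page, offset, size, flags);
}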
Tom
> .John