Message-ID: <CALrKORrZ3Kcuqc1RajQKkZcot0yiswh4VR_WuXHqfRTjn9oGQQ@mail.gmail.com>
Date:	Tue, 21 Jan 2014 18:08:05 +0800
From:	shaobingqing <shaobingqing@...tor.com.cn>
To:	Trond Myklebust <trond.myklebust@...marydata.com>
Cc:	bfields@...hat.com, davem@...emloft.net, linux-nfs@...r.kernel.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] SUNRPC: Allow one callback request to be received from
 two sk_buff

2014/1/21 Trond Myklebust <trond.myklebust@...marydata.com>:
> On Mon, 2014-01-20 at 14:59 +0800, shaobingqing wrote:
>> In the current code, only one struct rpc_rqst is preallocated. If a
>> callback request is received in two sk_buffs, xprt_alloc_bc_request
>> is executed twice with the same transport->tcp_xid. The first time,
>> xprt_alloc_bc_request allocates the single struct rpc_rqst and the
>> TCP_RCV_COPY_DATA bit of transport->tcp_flags is left set. The second
>> time, xprt_alloc_bc_request cannot allocate another struct rpc_rqst
>> and returns NULL, so xprt_force_disconnect is called. I think one
>> callback request should be allowed to be received from two sk_buffs.
>>
>> Signed-off-by: shaobingqing <shaobingqing@...tor.com.cn>
>> ---
>>  net/sunrpc/xprtsock.c |   11 +++++++++--
>>  1 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index ee03d35..606950d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -1271,8 +1271,13 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>       struct sock_xprt *transport =
>>                               container_of(xprt, struct sock_xprt, xprt);
>>       struct rpc_rqst *req;
>> +     static struct rpc_rqst *req_partial;
>> +
>> +     if (req_partial == NULL)
>> +             req = xprt_alloc_bc_request(xprt);
>> +     else if (req_partial->rq_xid == transport->tcp_xid)
>> +             req = req_partial;
>
> What happens here if req_partial->rq_xid != transport->tcp_xid? AFAICS,
> req will be undefined. Either way, you cannot use a static variable for
> storage here: that isn't re-entrant.

Because the metadata server has only one slot for backchannel requests,
req_partial->rq_xid == transport->tcp_xid will normally hold when a
callback request is simply split across two sk_buffs. But
req_partial->rq_xid != transport->tcp_xid may also happen in some
special cases, for example if a retransmission occurs?
If one callback request is split across two sk_buffs,
xs_tcp_read_callback will be executed twice. req_partial should be a
static variable because the second execution of xs_tcp_read_callback
must reuse the rpc_rqst allocated by the first execution, which holds
the data already copied from the first sk_buff.
I think the code should perhaps be modified as below:

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 606950d..02dbb82 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1273,10 +1273,14 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
        struct rpc_rqst *req;
        static struct rpc_rqst *req_partial;

-       if (req_partial == NULL)
+       if (req_partial == NULL) {
                req = xprt_alloc_bc_request(xprt);
-       else if (req_partial->rq_xid == transport->tcp_xid)
+       } else if (req_partial->rq_xid == transport->tcp_xid) {
                req = req_partial;
+       } else {
+               xprt_free_bc_request(req_partial);
+               req = xprt_alloc_bc_request(xprt);
+       }

        if (req == NULL) {
                printk(KERN_WARNING "Callback slot table overflowed\n");
@@ -1303,8 +1307,9 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
                list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
                spin_unlock(&bc_serv->sv_cb_lock);
                wake_up(&bc_serv->sv_cb_waitq);
-       } else
+       } else {
                req_partial = req;
+       }

        req->rq_private_buf.len = transport->tcp_copied;
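
For clarity, with both changes applied xs_tcp_read_callback() would end
up looking roughly like the sketch below. Only the req_partial handling
comes from the diffs above (plus clearing req_partial after freeing it,
to avoid leaving a dangling pointer); the surrounding context lines are
approximate and may differ slightly from the actual tree:

static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
				       struct xdr_skb_reader *desc)
{
	struct sock_xprt *transport =
				container_of(xprt, struct sock_xprt, xprt);
	struct rpc_rqst *req;
	static struct rpc_rqst *req_partial;

	if (req_partial == NULL) {
		/* First sk_buff of a new callback request. */
		req = xprt_alloc_bc_request(xprt);
	} else if (req_partial->rq_xid == transport->tcp_xid) {
		/* Continuation of the request started in an earlier sk_buff. */
		req = req_partial;
	} else {
		/* Unexpected XID (e.g. a retransmission): drop the stale
		 * partial request and start over. */
		xprt_free_bc_request(req_partial);
		req_partial = NULL;
		req = xprt_alloc_bc_request(xprt);
	}

	if (req == NULL) {
		printk(KERN_WARNING "Callback slot table overflowed\n");
		xprt_force_disconnect(xprt);
		return -1;
	}

	req->rq_xid = transport->tcp_xid;
	xs_tcp_read_common(xprt, desc, req);

	if (!(transport->tcp_flags & TCP_RCV_COPY_DATA)) {
		struct svc_serv *bc_serv = xprt->bc_serv;

		/* The request is complete: hand it to the callback service. */
		req_partial = NULL;
		spin_lock(&bc_serv->sv_cb_lock);
		list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
		spin_unlock(&bc_serv->sv_cb_lock);
		wake_up(&bc_serv->sv_cb_waitq);
	} else {
		/* More data is expected in a later sk_buff. */
		req_partial = req;
	}

	req->rq_private_buf.len = transport->tcp_copied;

	return 0;
}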


>
>> -     req = xprt_alloc_bc_request(xprt);
>>       if (req == NULL) {
>>               printk(KERN_WARNING "Callback slot table overflowed\n");
>>               xprt_force_disconnect(xprt);
>> @@ -1285,6 +1290,7 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>
>>       if (!(transport->tcp_flags & TCP_RCV_COPY_DATA)) {
>>               struct svc_serv *bc_serv = xprt->bc_serv;
>> +             req_partial = NULL;
>>
>>               /*
>>                * Add callback request to callback list.  The callback
>> @@ -1297,7 +1303,8 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>               list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
>>               spin_unlock(&bc_serv->sv_cb_lock);
>>               wake_up(&bc_serv->sv_cb_waitq);
>> -     }
>> +     } else
>> +             req_partial = req;
>>
>>       req->rq_private_buf.len = transport->tcp_copied;
>>
>
>
> --
> Trond Myklebust
> Linux NFS client maintainer
>
