Message-ID: <4ce6c13946803700d235082b9c52460ed38dab1e.1588242081.git.ashwinh@vmware.com>
Date: Wed, 6 May 2020 19:50:53 +0530
From: ashwin-h <ashwinh@...are.com>
To: <vyasevich@...il.com>, <nhorman@...driver.com>
CC: <davem@...emloft.net>, <linux-sctp@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<srivatsab@...are.com>, <srivatsa@...il.mit.edu>,
<rostedt@...dmis.org>, <srostedt@...are.com>,
<gregkh@...uxfoundation.org>, <ashwin.hiranniah@...il.com>,
Xin Long <lucien.xin@...il.com>, Ashwin H <ashwinh@...are.com>
Subject: [PATCH 1/2] sctp: implement memory accounting on tx path
From: Xin Long <lucien.xin@...il.com>
commit 1033990ac5b2ab6cee93734cb6d301aa3a35bcaa upstream.
Currently, when sending packets, sk_mem_charge() and sk_mem_uncharge() are
already used to adjust sk_forward_alloc. We just need to call
sk_wmem_schedule() to check whether the allocated memory should be raised,
and call sk_mem_reclaim() to check whether it should be reduced when the
socket is under memory pressure.

If sk_wmem_schedule() returns false, meaning no more memory may be
allocated, the sender blocks and waits for memory to become available.

Note that, unlike TCP, SCTP's sctp_wait_for_sndbuf() happens before
allocating any skb, so the memory accounting check is likewise done with
the whole msg_len up front.
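
For readers unfamiliar with the socket memory-accounting helpers, the
following is a minimal userspace sketch of the pattern this patch applies:
charge the whole msg_len up front via a schedule step, report failure when
the limit disallows it (the real code would then block in
sctp_wait_for_sndbuf()), and hand unused pages back under pressure. The
struct, constants and helper names below are simplified stand-ins for the
kernel's sk_wmem_schedule()/sk_mem_reclaim(), not the real implementations.

/*
 * Userspace sketch of the tx-path memory accounting pattern.
 * struct sock_acct, wmem_schedule() and mem_reclaim() are hypothetical
 * stand-ins, not kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096
#define MEM_LIMIT	(16 * PAGE_SIZE)	/* stand-in for the protocol memory limit */

struct sock_acct {
	long forward_alloc;	/* bytes charged but not yet consumed */
	long allocated;		/* bytes charged against the protocol limit */
};

/* Charge whole pages until at least @size bytes are forward-allocated. */
static bool wmem_schedule(struct sock_acct *sk, int size)
{
	while (sk->forward_alloc < size) {
		if (sk->allocated + PAGE_SIZE > MEM_LIMIT)
			return false;	/* no memory allowed: caller must wait */
		sk->allocated += PAGE_SIZE;
		sk->forward_alloc += PAGE_SIZE;
	}
	return true;
}

/* Return unused whole pages to the protocol limit (cf. sk_mem_reclaim()). */
static void mem_reclaim(struct sock_acct *sk)
{
	long pages = sk->forward_alloc / PAGE_SIZE;

	sk->allocated -= pages * PAGE_SIZE;
	sk->forward_alloc -= pages * PAGE_SIZE;
}

int main(void)
{
	struct sock_acct sk = { 0, 0 };
	int msg_len = 3000;

	/* As in sctp_sendmsg_to_asoc(): check the whole msg_len before any
	 * skb is allocated; on failure the sender would block and retry. */
	if (!wmem_schedule(&sk, msg_len))
		printf("would block waiting for memory\n");
	else
		printf("charged: forward_alloc=%ld allocated=%ld\n",
		       sk.forward_alloc, sk.allocated);

	mem_reclaim(&sk);
	return 0;
}

The sketch mirrors the two checks the diff below adds: reclaim under
memory pressure, then only proceed once both the association write space
and the scheduled memory cover the full message.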
Reported-by: Matteo Croce <mcroce@...hat.com>
Tested-by: Matteo Croce <mcroce@...hat.com>
Acked-by: Neil Horman <nhorman@...driver.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
Signed-off-by: Xin Long <lucien.xin@...il.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Ashwin H <ashwinh@...are.com>
---
net/sctp/socket.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index c93be3b..df4a7d7 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -1931,7 +1931,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
 	if (sctp_wspace(asoc) < (int)msg_len)
 		sctp_prsctp_prune(asoc, sinfo, msg_len - sctp_wspace(asoc));
 
-	if (sctp_wspace(asoc) <= 0) {
+	if (sk_under_memory_pressure(sk))
+		sk_mem_reclaim(sk);
+
+	if (sctp_wspace(asoc) <= 0 || !sk_wmem_schedule(sk, msg_len)) {
 		timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
 		err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
 		if (err)
@@ -8515,7 +8518,10 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
 			goto do_error;
 		if (signal_pending(current))
 			goto do_interrupted;
-		if ((int)msg_len <= sctp_wspace(asoc))
+		if (sk_under_memory_pressure(sk))
+			sk_mem_reclaim(sk);
+		if ((int)msg_len <= sctp_wspace(asoc) &&
+		    sk_wmem_schedule(sk, msg_len))
 			break;
 
 		/* Let another process have a go. Since we are going
--
2.7.4