Message-ID: <396556a20806040909q7e5eb8abi7cbc8b5ed11ed54e@mail.gmail.com>
Date: Wed, 4 Jun 2008 09:09:29 -0700
From: "Adam Langley" <agl@...erialviolet.org>
To: "Maxim Levitsky" <maximlevitsky@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: How can I reset TCP sockets after a long suspend/resume cycle

On Wed, Jun 4, 2008 at 8:34 AM, Maxim Levitsky <maximlevitsky@...il.com> wrote:
>> Is there a way to close all TCP sockets before/after suspend to ram?
As with most things, you should consider how this can be done from
userspace first.
You can find the processes with TCP connections open by walking
/proc/*/fd and readlink()ing the dents therein. Then you can match the
inode numbers up with the inode column of /proc/net/tcp to tell which
fds are TCP sockets, and whether a given connection has a remote peer.
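
An untested sketch of that enumeration (helper names are mine, error
handling mostly elided):

/* Report pids whose fds are TCP sockets, by matching the
 * "socket:[<inode>]" readlink() targets under /proc/<pid>/fd
 * against the inode column of /proc/net/tcp. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return 1 if the socket inode appears in /proc/net/tcp.
 * (Reopening the file per fd is wasteful; fine for a sketch.) */
static int is_tcp_inode(unsigned long inode)
{
	FILE *f = fopen("/proc/net/tcp", "r");
	char line[512];
	int found = 0;

	if (!f)
		return 0;
	fgets(line, sizeof(line), f);	/* skip the header line */
	while (!found && fgets(line, sizeof(line), f)) {
		unsigned long ino;

		/* sl local rem st tx:rx tr:when retrnsmt uid timeout inode */
		if (sscanf(line, "%*d: %*s %*s %*x %*x:%*x %*x:%*x %*x %*d %*d %lu",
			   &ino) == 1 && ino == inode)
			found = 1;
	}
	fclose(f);
	return found;
}

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *p;

	while (proc && (p = readdir(proc))) {
		char fddir[64], path[320], target[64];
		DIR *fds;
		struct dirent *d;

		if (atoi(p->d_name) <= 0)
			continue;		/* not a pid directory */
		snprintf(fddir, sizeof(fddir), "/proc/%s/fd", p->d_name);
		fds = opendir(fddir);
		if (!fds)
			continue;		/* raced with exit, or EPERM */
		while ((d = readdir(fds))) {
			unsigned long ino;
			ssize_t n;

			snprintf(path, sizeof(path), "%s/%s", fddir, d->d_name);
			n = readlink(path, target, sizeof(target) - 1);
			if (n <= 0)
				continue;
			target[n] = '\0';
			if (sscanf(target, "socket:[%lu]", &ino) == 1 &&
			    is_tcp_inode(ino))
				printf("pid %s fd %s inode %lu\n",
				       p->d_name, d->d_name, ino);
		}
		closedir(fds);
	}
	if (proc)
		closedir(proc);
	return 0;
}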
Now you want to kill those connections somehow. You could imagine
doing it by injecting RST packets back into the kernel, but for that
you would need to know the SEQ/ACK numbers for the connection. Since
that's sensitive information, /proc/net/tcp doesn't carry it. It would
have to be CAP_NET_ADMIN (read: root user) only and changing the
formats of proc files based on the reading user is a no-no. So that
would require another proc file; I've no idea how well that patch
would be received.
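
For what it's worth, the injection half is the easy part; it's getting
the SEQ that's the problem. A rough, untested sketch, assuming you
somehow had a valid sequence number in hand (function name is mine;
needs CAP_NET_RAW; addresses in network byte order, ports in host
order):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Internet checksum over the TCP pseudo-header + TCP header. */
static uint16_t csum(const void *buf, size_t len)
{
	const uint16_t *p = buf;
	uint32_t sum = 0;

	for (; len > 1; len -= 2)
		sum += *p++;
	if (len)
		sum += *(const uint8_t *)p;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return ~sum;
}

int send_rst(uint32_t saddr, uint16_t sport,
	     uint32_t daddr, uint16_t dport, uint32_t seq)
{
	struct {
		uint32_t saddr, daddr;	/* checksum pseudo-header */
		uint8_t zero, proto;
		uint16_t len;
		struct tcphdr th;
	} __attribute__((packed)) b;
	struct sockaddr_in src, dst;
	int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

	if (s < 0)
		return -1;

	/* Bind so the kernel's chosen source matches the pseudo-header. */
	memset(&src, 0, sizeof(src));
	src.sin_family = AF_INET;
	src.sin_addr.s_addr = saddr;
	bind(s, (struct sockaddr *)&src, sizeof(src));

	memset(&b, 0, sizeof(b));
	b.saddr = saddr;
	b.daddr = daddr;
	b.proto = IPPROTO_TCP;
	b.len = htons(sizeof(b.th));
	b.th.source = htons(sport);
	b.th.dest = htons(dport);
	b.th.seq = htonl(seq);		/* must be in-window */
	b.th.doff = sizeof(b.th) / 4;
	b.th.rst = 1;
	b.th.check = csum(&b, sizeof(b));	/* kernel won't fill this in */

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_addr.s_addr = daddr;

	sendto(s, &b.th, sizeof(b.th), 0,
	       (struct sockaddr *)&dst, sizeof(dst));
	close(s);
	return 0;
}

Note this tears down the *remote* end; to fool the local stack you
would have to forge the remote's address as the source, which means
building the IP header yourself with IP_HDRINCL.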
Another option would be to close the TCP connections from within the
processes which have them. You could enumerate the processes, ptrace
attach each one, wait() for SIGSTOP, get the current instr pointer and
patch in some code to close the fds then unpatch the process and let
it continue. That would be architecture specific, of course.
When the process comes to reading/selecting those fds again it would
get an EBADF error (or, had you used shutdown() rather than close(), a
0-byte read) and act as if the connections had been closed.
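
A very rough x86-64-only sketch of that ptrace dance (function name is
mine; no error handling, single-threaded tracee assumed, and real code
would have to cope with interrupted syscalls, threads, etc.):

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

/* Make pid execute close(fd) by patching a syscall instruction in at
 * its stopped instruction pointer, then restoring everything.
 * Returns close()'s raw result (0, or -errno). */
int remote_close(pid_t pid, int fd)
{
	struct user_regs_struct saved, regs;
	long saved_word;

	ptrace(PTRACE_ATTACH, pid, 0, 0);
	waitpid(pid, 0, 0);			/* wait for the stop */

	ptrace(PTRACE_GETREGS, pid, 0, &saved);
	regs = saved;

	/* Patch "syscall" (0x0f 0x05) in at the current rip. */
	saved_word = ptrace(PTRACE_PEEKTEXT, pid, (void *)saved.rip, 0);
	ptrace(PTRACE_POKETEXT, pid, (void *)saved.rip,
	       (void *)((saved_word & ~0xffffL) | 0x050f));

	regs.rax = SYS_close;			/* syscall number */
	regs.rdi = fd;				/* first argument */
	ptrace(PTRACE_SETREGS, pid, 0, &regs);

	ptrace(PTRACE_SINGLESTEP, pid, 0, 0);	/* execute the syscall */
	waitpid(pid, 0, 0);

	ptrace(PTRACE_GETREGS, pid, 0, &regs);	/* rax holds the result */

	/* Undo the patch and put the registers back. */
	ptrace(PTRACE_POKETEXT, pid, (void *)saved.rip, (void *)saved_word);
	ptrace(PTRACE_SETREGS, pid, 0, &saved);
	ptrace(PTRACE_DETACH, pid, 0, 0);

	return (int)regs.rax;
}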
I'll admit that neither solution is terribly wonderful.
AGL
--
Adam Langley agl@...erialviolet.org http://www.imperialviolet.org