Message-ID: <xhsmhjzmxg40f.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Wed, 21 Feb 2024 17:45:36 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: dccp@...r.kernel.org, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
    linux-rt-users@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
    Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
    mleitner@...hat.com, David Ahern <dsahern@...nel.org>,
    Juri Lelli <juri.lelli@...hat.com>, Tomas Glozar <tglozar@...hat.com>,
    Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
    Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v3 1/1] tcp/dccp: Un-pin tw_timer
On 20/02/24 18:42, Eric Dumazet wrote:
> On Tue, Feb 20, 2024 at 6:38 PM Valentin Schneider <vschneid@...hat.com> wrote:
>> Hm so that would indeed prevent a concurrent inet_twsk_schedule() from
>> re-arming the timer, but in case the calls are interleaved like so:
>>
>>   tcp_time_wait()
>>     inet_twsk_hashdance()
>>                                   inet_twsk_deschedule_put()
>>                                     timer_shutdown_sync()
>>     inet_twsk_schedule()
>>
>> inet_twsk_hashdance() will have set up the refcounts, including the
>> reference held for the timer, but the timer won't get armed, so the
>> timewait will never be torn down via the timer callback
>> (inet_twsk_kill()) - the patch in its current form relies on the timer
>> being re-armed for that.
>>
>> I don't know if there's a cleaner way to do this, but we could catch that
>> in inet_twsk_schedule() and issue the inet_twsk_kill() directly if we can
>> tell the timer has been shut down:
>> ---
>> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
>> index 61a053fbd329c..c272da5046bb4 100644
>> --- a/net/ipv4/inet_timewait_sock.c
>> +++ b/net/ipv4/inet_timewait_sock.c
>> @@ -227,7 +227,7 @@ void inet_twsk_deschedule_put(struct inet_timewait_sock *tw)
>>          * have already gone through {tcp,dcpp}_time_wait(), and we can safely
>>          * call inet_twsk_kill().
>>          */
>> -        if (del_timer_sync(&tw->tw_timer))
>> +        if (timer_shutdown_sync(&tw->tw_timer))
>>                  inet_twsk_kill(tw);
>>          inet_twsk_put(tw);
>>  }
>> @@ -267,6 +267,10 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
>>                                               LINUX_MIB_TIMEWAITED);
>>                  BUG_ON(mod_timer(&tw->tw_timer, jiffies + timeo));
>
> Would not a shutdown timer return a wrong mod_timer() value here?
>
> Instead of BUG_ON(), simply release the refcount?
>
Unfortunately, mod_timer() on a shut-down timer returns the same value as
on a plain inactive one:

 * Return:
 * * %0 - The timer was inactive and started or was in shutdown
 *        state and the operation was discarded

And now that you've pointed this out, I realize it's racy to check the
state of the timer after the mod_timer():
  BUG_ON(mod_timer(&tw->tw_timer, jiffies + timeo));
                                        inet_twsk_deschedule_put()
                                          timer_shutdown_sync()
                                          inet_twsk_kill()
  if (!tw->tw_timer.function)
      inet_twsk_kill()
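
To be explicit, the check I mean would live in __inet_twsk_schedule() and
look roughly like the below (shown only to illustrate the race, not
something I'm actually proposing):

        BUG_ON(mod_timer(&tw->tw_timer, jiffies + timeo));
        refcount_inc(&tw->tw_dr->tw_refcount);
        /*
         * Racy: inet_twsk_deschedule_put() can run timer_shutdown_sync()
         * and inet_twsk_kill() between the mod_timer() above and this
         * check, so we would end up calling inet_twsk_kill() twice.
         */
        if (!tw->tw_timer.function)
                inet_twsk_kill(tw);
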
I've looked into messing about with the return values of mod_timer() so a
caller can tell the timer has been shut down, but the only justification
for such a change would be this one call site, where we'd be relying on the
timer_base lock to serialize inet_twsk_schedule() vs
inet_twsk_deschedule_put().
AFAICT the alternative is adding local serialization like so, which I'm not
the biggest fan of but couldn't think of a neater approach:
---
diff --git a/include/net/inet_timewait_sock.h b/include/net/inet_timewait_sock.h
index f28da08a37b4e..39bb0c148d4ee 100644
--- a/include/net/inet_timewait_sock.h
+++ b/include/net/inet_timewait_sock.h
@@ -75,6 +75,7 @@ struct inet_timewait_sock {
         struct timer_list        tw_timer;
         struct inet_bind_bucket  *tw_tb;
         struct inet_bind2_bucket *tw_tb2;
+        struct spinlock          tw_timer_lock;
 };
 
 #define tw_tclass tw_tos
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 61a053fbd329c..2471516f9c61d 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -193,6 +193,7 @@ struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk,
                 atomic64_set(&tw->tw_cookie, atomic64_read(&sk->sk_cookie));
                 twsk_net_set(tw, sock_net(sk));
                 timer_setup(&tw->tw_timer, tw_timer_handler, 0);
+                spin_lock_init(&tw->tw_timer_lock);
                 /*
                  * Because we use RCU lookups, we should not set tw_refcnt
                  * to a non null value before everything is setup for this
@@ -227,8 +228,11 @@ void inet_twsk_deschedule_put(struct inet_timewait_sock *tw)
          * have already gone through {tcp,dcpp}_time_wait(), and we can safely
          * call inet_twsk_kill().
          */
-        if (del_timer_sync(&tw->tw_timer))
+        spin_lock(&tw->tw_timer_lock);
+        if (timer_shutdown_sync(&tw->tw_timer))
                 inet_twsk_kill(tw);
+        spin_unlock(&tw->tw_timer_lock);
+
         inet_twsk_put(tw);
 }
 EXPORT_SYMBOL(inet_twsk_deschedule_put);
@@ -262,11 +266,25 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
         if (!rearm) {
                 bool kill = timeo <= 4*HZ;
+                bool pending;
 
                 __NET_INC_STATS(twsk_net(tw), kill ? LINUX_MIB_TIMEWAITKILLED :
                                                      LINUX_MIB_TIMEWAITED);
+                spin_lock(&tw->tw_timer_lock);
                 BUG_ON(mod_timer(&tw->tw_timer, jiffies + timeo));
+                pending = timer_pending(&tw->tw_timer);
                 refcount_inc(&tw->tw_dr->tw_refcount);
+
+                /*
+                 * If the timer didn't become pending under tw_timer_lock, then
+                 * it means it has been shutdown by inet_twsk_deschedule_put()
+                 * prior to this invocation. All that remains is to clean up the
+                 * timewait.
+                 */
+                if (!pending)
+                        inet_twsk_kill(tw);
+
+                spin_unlock(&tw->tw_timer_lock);
         } else {
                 mod_timer_pending(&tw->tw_timer, jiffies + timeo);
         }
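
To spell out the intent (illustration only, not part of the diff): if
inet_twsk_deschedule_put() grabs tw_timer_lock first, its
timer_shutdown_sync() finds an idle timer, returns false and skips
inet_twsk_kill(); the late __inet_twsk_schedule() then takes the lock, its
mod_timer() is discarded since the timer has been shut down,
timer_pending() reads false and it does the inet_twsk_kill() itself:

  inet_twsk_deschedule_put()            __inet_twsk_schedule()
    spin_lock(&tw->tw_timer_lock)
    timer_shutdown_sync()   // idle: returns false, no kill
    spin_unlock(&tw->tw_timer_lock)
                                          spin_lock(&tw->tw_timer_lock)
                                          mod_timer()       // discarded
                                          !timer_pending()  // -> inet_twsk_kill()
                                          spin_unlock(&tw->tw_timer_lock)

If __inet_twsk_schedule() wins the lock instead, the timer does become
pending, so the later timer_shutdown_sync() returns true and
inet_twsk_deschedule_put() does the inet_twsk_kill() as it does today.
Either way exactly one inet_twsk_kill() runs.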