Message-ID: <MW2PR2101MB0892BE7BAA9D2E57764AD3A3BF519@MW2PR2101MB0892.namprd21.prod.outlook.com>
Date: Thu, 13 May 2021 20:40:30 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Phil Sutter <phil@....cc>, Pablo Neira Ayuso <pablo@...filter.org>
CC: "'netfilter-devel@...r.kernel.org'" <netfilter-devel@...r.kernel.org>,
"'netdev@...r.kernel.org'" <netdev@...r.kernel.org>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: RE: netfilter: iptables-restore: setsockopt(3, SOL_IP,
IPT_SO_SET_REPLACE, "security...", ...) return -EAGAIN
> From: n0-1@...yte.nwl.cc <n0-1@...yte.nwl.cc> On Behalf Of Phil Sutter
> Sent: Thursday, May 13, 2021 10:08 AM
> >
> > There's -w and -W to serialize ruleset updates. You could follow a
> > similar approach from userspace if you don't use the iptables userspace
> > binary.
>
> My guess is the xtables lock is not effective here, so waiting for it
> probably won't help.
Here iptables-restore v1.6.0 is used, and it does not support the -w and -W
options. :-)
Newer iptables-restore versions, e.g. 1.8.4-3ubuntu2, do support the -w/-W
options.
> Dexuan, concurrent access is avoided in user space using a file-based
> lock. So if multiple iptables(-restore) processes run in different
> mount-namespaces, they might miss the other's /run/xtables.lock. Another
> possibility is that libiptc is used directly instead of calling iptables,
> but that's more a shot in the dark - I don't know whether libiptc
> obtains the xtables lock at all.
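
If a custom tool does use libiptc directly, it could take the same advisory
lock itself. Below is a minimal sketch, assuming the conventional
/run/xtables.lock path and the flock()-based scheme that iptables uses (as
far as I can tell from xshared.c); error handling is trimmed, so treat it
as an illustration rather than a drop-in:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  /* Take the advisory lock that serializes xtables ruleset updates.
   * The path is the conventional one iptables uses; adjust if your
   * distro differs.  Returns the lock fd on success (close() it to
   * release the lock), or -1 on error. */
  static int xtables_lock_fd(void)
  {
          int fd = open("/run/xtables.lock", O_CREAT, 0600);

          if (fd < 0) {
                  perror("open /run/xtables.lock");
                  return -1;
          }
          if (flock(fd, LOCK_EX) < 0) {  /* block until the lock is free */
                  perror("flock");
                  close(fd);
                  return -1;
          }
          return fd;
  }

As Phil notes, this only helps if every updater sees the same lock file,
i.e. runs in the same mount namespace.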
>
> > > I think we need a real fix.
> >
> > iptables-nft already fixes this.
>
> nftables (and therefore iptables-nft) implements transactional logic in
> the kernel; user space automatically retries if a transaction's commit
> fails.
>
> Cheers, Phil
Good to know. Thanks for the explanation!
It sounds like I need to either migrate to iptables-nft/nft or add a retry
workaround (if the legacy iptables-restore fails due to EAGAIN).
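For the retry route, something along these lines should do; the rules file
path, retry count, and delay are made-up example values:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          int i;

          /* Re-run the whole restore a few times before giving up. */
          for (i = 0; i < 5; i++) {
                  if (system("iptables-restore < /etc/iptables/rules.v4") == 0)
                          return 0;
                  usleep(200 * 1000);  /* wait 200 ms between attempts */
          }
          fprintf(stderr, "iptables-restore still failing, giving up\n");
          return 1;
  }
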
Thanks,
-- Dexuan