Message-ID: <a51d867a-3ca9-fd36-528a-353aa6c42f42@thelounge.net>
Date: Fri, 5 Feb 2021 15:42:53 +0100
From: Reindl Harald <h.reindl@...lounge.net>
To: Jozsef Kadlecsik <kadlec@...filter.org>
Cc: Pablo Neira Ayuso <pablo@...filter.org>,
netfilter-devel@...r.kernel.org, davem@...emloft.net,
netdev@...r.kernel.org, kuba@...nel.org
Subject: Re: [PATCH net 1/4] netfilter: xt_recent: Fix attempt to update
deleted entry
On 05.02.21 at 14:54, Jozsef Kadlecsik wrote:
> Hi Harald,
>
> On Fri, 5 Feb 2021, Reindl Harald wrote:
>
>> "Reap only entries which won't be updated" sounds for me like the could
>> be some optimization: i mean when you first update and then check what
>> can be reaped the recently updated entry would not match to begin with
>
> When the entry is new and the given recent table is full we cannot update
> (add) it, unless old entries are deleted (reaped) first. So it'd require
> additional checks to be introduced to reverse the order of the two
> operations.
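
Ah, I see - so conceptually something like this rough, made-up userspace
sketch (not the actual xt_recent code; all names, sizes and the window
constant here are invented):

/*
 * Illustrative only: when a new address arrives and the table is
 * already full, expired entries have to be reaped first, otherwise
 * there is no room to add the new one.
 */
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define TABLE_SIZE   4          /* hypothetical per-table entry limit */
#define REAP_SECONDS 60         /* hypothetical --seconds window */

struct entry {
	unsigned int addr;      /* simplified: one IPv4 address */
	time_t last_seen;
	bool in_use;
};

static struct entry table[TABLE_SIZE];

/* Drop every entry whose last hit is outside the reap window. */
static void reap_expired(time_t now)
{
	for (size_t i = 0; i < TABLE_SIZE; i++)
		if (table[i].in_use && now - table[i].last_seen > REAP_SECONDS)
			table[i].in_use = false;
}

/* Update an existing entry, or reap first and then add a new one. */
bool update_or_add(unsigned int addr, time_t now)
{
	size_t free_slot = TABLE_SIZE;

	for (size_t i = 0; i < TABLE_SIZE; i++) {
		if (table[i].in_use && table[i].addr == addr) {
			table[i].last_seen = now;  /* existing entry: just update */
			return true;
		}
		if (!table[i].in_use)
			free_slot = i;
	}

	if (free_slot == TABLE_SIZE) {             /* table full: reap first ... */
		reap_expired(now);
		for (size_t i = 0; i < TABLE_SIZE; i++)
			if (!table[i].in_use)
				free_slot = i;
	}

	if (free_slot == TABLE_SIZE)               /* ... still full: cannot add */
		return false;

	table[free_slot].addr = addr;              /* ... then add the new entry */
	table[free_slot].last_seen = now;
	table[free_slot].in_use = true;
	return true;
}
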
Well, the most important thing is that the firewall VM stops
kernel-panicking. I built that beast in autumn 2018, and until April 2019
I went through hell with random crashes all the time (connlimit regression,
driver issues, VMware issues, and the one where I removed --reap from the
most-hit rule along with some other changes; it crashed 5 or 10 times a
day and then not at all for 3 days, so I never figured out what the game
changer was).
On the other hand, if you can't reap old entries because everything is
fresh (a real DDoS), you can't update / add the new one anyway.
What makes me think about the rules without --reap: how is it handled in
that case? I mean, there must be some LRU logic present anyway, given that
--reap is not enabled by default (otherwise that bug would not have hit me
so randomly for so long).
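
Just to make clear what I imagine happens there - a purely illustrative
userspace sketch with invented names, not the real module code: a full
table would simply recycle the least recently seen entry instead of
reaping by age.

/*
 * Illustrative only: add a new address; if the table is full,
 * overwrite the oldest (least recently seen) entry.
 */
#include <stddef.h>
#include <time.h>

#define TABLE_SIZE 4              /* hypothetical per-table limit */

struct lru_entry {
	unsigned int addr;
	time_t last_seen;
};

static struct lru_entry lru_table[TABLE_SIZE];
static size_t lru_used;

void lru_add(unsigned int addr, time_t now)
{
	size_t victim = 0;

	if (lru_used < TABLE_SIZE) {                  /* free slot available */
		victim = lru_used++;
	} else {
		for (size_t i = 1; i < TABLE_SIZE; i++)   /* pick the LRU entry */
			if (lru_table[i].last_seen < lru_table[victim].last_seen)
				victim = i;
	}

	lru_table[victim].addr = addr;
	lru_table[victim].last_seen = now;
}
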
My first xt_recent rule at the top intentionally doesn't have --reap,
because it's the DDoS rule counting total connections to any machine per
two seconds; my guess was that --reap doesn't come for free, and the
roughly 200 MB RAM overhead is OK there. For the other 12 rules, which
aren't hit that much, the VM would otherwise consume 1.5 GB RAM after a
few days instead of 240 MB - but they were obviously the trigger for the
random crashes.
How does that one keep working after "it's full" so it still tracks recent
attackers, instead of just consuming memory and no longer working properly?