Date:   Mon, 10 May 2021 22:05:59 -0700
From:   Joe Stringer <joe@...ium.io>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>,
        Xiongchun Duan <duanxiongchun@...edance.com>,
        Dongdong Wang <wangdongdong.6@...edance.com>,
        Muchun Song <songmuchun@...edance.com>,
        Cong Wang <cong.wang@...edance.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Pedro Tammela <pctammela@...atatu.com>,
        Jamal Hadi Salim <jhs@...atatu.com>
Subject: Re: [RFC Patch bpf-next] bpf: introduce bpf timer

Hi Cong,

On Sat, May 8, 2021 at 10:39 PM Cong Wang <xiyou.wangcong@...il.com> wrote:
>
> On Tue, Apr 27, 2021 at 11:34 AM Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> >
> > On Tue, Apr 27, 2021 at 9:36 AM Cong Wang <xiyou.wangcong@...il.com> wrote:
> > >
> > > We're not going back to why user-space cleanup is inefficient
> > > again, are we? ;)
> >
> > I remain unconvinced that cilium conntrack _needs_ timer apis.
> > It works fine in production and I don't hear any complaints
> > from cilium users. So 'user space cleanup inefficiencies' is
> > very subjective and cannot be the reason to add timer apis.
>
> I am pretty sure I showed the original report to you when I sent the
> timeout hashmap patch; in case you forgot, here it is again:
> https://github.com/cilium/cilium/issues/5048
>
> and let me quote the original report here:
>
> "The current implementation (as of v1.2) for managing the contents of
> the datapath connection tracking map leaves something to be desired:
> Once per minute, the userspace cilium-agent makes a series of calls to
> the bpf() syscall to fetch all of the entries in the map to determine
> whether they should be deleted. For each entry in the map, 2-3 calls
> must be made: One to fetch the next key, one to fetch the value, and
> perhaps one to delete the entry. The maximum size of the map is 1
> million entries, and if the current count approaches this size then
> the garbage collection goroutine may spend a significant number of CPU
> cycles iterating and deleting elements from the conntrack map."
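
For reference, that per-entry pattern corresponds roughly to the
following libbpf loop (a minimal sketch; the key/value layouts and the
expiry check are placeholders, not the actual Cilium code):

#include <bpf/bpf.h>
#include <linux/types.h>
#include <stdbool.h>
#include <time.h>

/* Placeholder layouts; the real CT map layout is different. */
struct ct_key   { __u32 saddr, daddr; __u16 sport, dport; __u8 proto; };
struct ct_value { __u64 expires_ns; };

static bool expired(const struct ct_value *val, __u64 now_ns)
{
	return val->expires_ns <= now_ns;
}

static void gc_conntrack(int ct_map_fd)
{
	struct ct_key cur, next;
	struct ct_value val;
	struct timespec ts;
	void *prev = NULL;
	__u64 now_ns;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	now_ns = ts.tv_sec * 1000000000ULL + ts.tv_nsec;

	/* One bpf() syscall to fetch the next key ... */
	while (bpf_map_get_next_key(ct_map_fd, prev, &next) == 0) {
		/* ... one to fetch the value ... */
		if (bpf_map_lookup_elem(ct_map_fd, &next, &val) == 0 &&
		    expired(&val, now_ns))
			/* ... and perhaps one to delete the entry.
			 * (Deleting the key we iterate from can restart
			 * the walk, so a real implementation has to
			 * account for that.) */
			bpf_map_delete_elem(ct_map_fd, &next);

		cur = next;
		prev = &cur;
	}
}

At the stated maximum of 1 million entries that is 2-3 million
syscalls per GC pass, which is where the CPU cost in the report comes
from.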

I'm also curious to hear more details, as I haven't seen any recent
discussion in the common Cilium community channels (GitHub / Slack)
around deficiencies in the conntrack garbage collection since we
addressed the LRU issues upstream and switched back to LRU maps.
There's a later update to that report at the same link:

"In recent releases, we've moved back to LRU for management of the CT
maps so the core problem is not as bad; furthermore we have
implemented a backoff for GC depending on the size and number of
entries in the conntrack table, so that in active environments the
userspace GC is frequent enough to prevent issues but in relatively
passive environments the userspace GC is only rarely run (to minimize
CPU impact)."

By "core problem is not as bad", I would have been referring to the
way that failing to garbage collect hashtables in a timely manner can
lead to rejecting new connections due to lack of available map space.
Switching back to LRU mitigated this concern. With a reduced frequency
of running the garbage collection logic, the CPU impact is lower as
well. I don't think we've explored batched map ops for this use case
yet either, which would already serve to improve the CPU usage
situation without extending the kernel.
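
Something along these lines could replace the per-entry loop above (a
rough sketch using the batch API; BATCH_SZ and the error handling are
simplified, and the key/value layouts are the same placeholders as
before):

#include <bpf/bpf.h>
#include <errno.h>
#include <linux/types.h>

#define BATCH_SZ 4096

static void gc_conntrack_batched(int ct_map_fd)
{
	struct ct_key keys[BATCH_SZ];
	struct ct_value vals[BATCH_SZ];
	__u32 batch, count, i;
	void *in = NULL;
	int err;

	do {
		count = BATCH_SZ;
		/* One syscall pulls up to BATCH_SZ entries at a time. */
		err = bpf_map_lookup_batch(ct_map_fd, in, &batch,
					   keys, vals, &count, NULL);
		if (err && errno != ENOENT)
			break;
		for (i = 0; i < count; i++) {
			/* inspect vals[i]; collect expired keys and
			 * remove them with one bpf_map_delete_batch()
			 * call */
		}
		in = &batch;
	} while (!err);
}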

The main outstanding issue I'm aware of is that we often have a 1:1
mapping between entries in the CT map and the NAT map, and ideally we'd
like them to have tied fates, but currently we have no mechanism to do
this with LRU. When LRU eviction occurs, the two maps can get out of
sync until the next GC. I could imagine timers helping with this if we
were to switch back to hash maps, since we could handle the problem in
custom eviction logic, but that would reintroduce the entry management
problem above; we'd still need more work to figure out how to address
that with a timer-based approach. If I were to guess right now, the
right solution for this particular problem is probably associating
programs with map entry lifecycle events (like LRU eviction) rather
than adding timers to trigger the logic we want, but that's a whole
different discussion.
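
To make "tied fates" concrete: today the pairing can only be enforced
from the userspace GC path, roughly like this (nat_key_for() and the
NAT key layout are assumptions for illustration; the CT key is the
placeholder from the earlier sketch). Anything evicted by LRU inside
the kernel bypasses this path entirely, which is the gap described
above:

struct nat_key { __u32 addr; __u16 port; __u8 proto; };	/* placeholder */

/* Hypothetical mapping from a CT entry to its paired NAT entry. */
static struct nat_key nat_key_for(const struct ct_key *ckey)
{
	return (struct nat_key){ .addr  = ckey->daddr,
				 .port  = ckey->dport,
				 .proto = ckey->proto };
}

static void delete_ct_and_nat(int ct_map_fd, int nat_map_fd,
			      const struct ct_key *ckey)
{
	struct nat_key nkey = nat_key_for(ckey);

	bpf_map_delete_elem(ct_map_fd, ckey);
	bpf_map_delete_elem(nat_map_fd, &nkey);
}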

Cheers,
Joe
