Date:   Sat, 1 Feb 2020 00:36:59 +0100
From:   Florian Westphal <fw@...len.de>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     Florian Westphal <fw@...len.de>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        NetFilter <netfilter-devel@...r.kernel.org>,
        syzbot <syzbot+adf6c6c2be1c3a718121@...kaller.appspotmail.com>,
        Pablo Neira Ayuso <pablo@...filter.org>,
        Jozsef Kadlecsik <kadlec@...filter.org>
Subject: Re: [Patch nf 3/3] xt_hashlimit: limit the max size of hashtable

Cong Wang <xiyou.wangcong@...il.com> wrote:
> On Fri, Jan 31, 2020 at 2:08 PM Florian Westphal <fw@...len.de> wrote:
> >
> > Cong Wang <xiyou.wangcong@...il.com> wrote:
> > > The user-specified hashtable size is unbound, this could
> > > easily lead to an OOM or a hung task as we hold the global
> > > mutex while allocating and initializing the new hashtable.
> > >
> > > The max value is derived from the max value when chosen by
> > > the kernel.
> > >
> > > Reported-and-tested-by: syzbot+adf6c6c2be1c3a718121@...kaller.appspotmail.com
> > > Cc: Pablo Neira Ayuso <pablo@...filter.org>
> > > Cc: Jozsef Kadlecsik <kadlec@...filter.org>
> > > Cc: Florian Westphal <fw@...len.de>
> > > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> > > ---
> > >  net/netfilter/xt_hashlimit.c | 6 +++++-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > > index 57a2639bcc22..6327134c5886 100644
> > > --- a/net/netfilter/xt_hashlimit.c
> > > +++ b/net/netfilter/xt_hashlimit.c
> > > @@ -272,6 +272,8 @@ dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > >  }
> > >  static void htable_gc(struct work_struct *work);
> > >
> > > +#define HASHLIMIT_MAX_SIZE 8192
> > > +
> > >  static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
> > >                        const char *name, u_int8_t family,
> > >                        struct xt_hashlimit_htable **out_hinfo,
> > > @@ -290,7 +292,7 @@ static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
> > >               size = (nr_pages << PAGE_SHIFT) / 16384 /
> > >                      sizeof(struct hlist_head);
> > >               if (nr_pages > 1024 * 1024 * 1024 / PAGE_SIZE)
> > > -                     size = 8192;
> > > +                     size = HASHLIMIT_MAX_SIZE;
> > >               if (size < 16)
> > >                       size = 16;
> > >       }
> > > @@ -848,6 +850,8 @@ static int hashlimit_mt_check_common(const struct xt_mtchk_param *par,
> > >
> > >       if (cfg->gc_interval == 0 || cfg->expire == 0)
> > >               return -EINVAL;
> > > +     if (cfg->size > HASHLIMIT_MAX_SIZE)
> > > +             return -ENOMEM;
> >
> > Hmm, won't that break restore of rulesets that have something like
> >
> > --hashlimit-size 10000?
> >
> > AFAIU this limits the module to vmalloc requests of only 64kbyte.
> > I'm not opposed to a limit (or a cap), but 64k seems a bit low to me.
> 
> 8192 is from the current code which handles kernel-chosen size
> (that is cfg->size==0), I personally have no idea what the max
> should be. :)

Me neither :-/

> Please suggest a number.

I would propose a max alloc size (hard limit) of ~8 MByte of vmalloc
space, or maybe 16 MByte at most.

An upper limit of 1048576 buckets -> ~8 MByte vmalloc request -> allows
storing up to 2**23 entries.
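
For reference, the arithmetic behind those numbers (a throwaway
userspace check, not part of the patch; it assumes
sizeof(struct hlist_head) == 8, i.e. a 64-bit build, and that the
bucket array costs size * sizeof(struct hlist_head) of vmalloc space):

#include <stdio.h>

int main(void)
{
	const unsigned long hlist_head_size = 8;  /* sizeof(struct hlist_head), 64-bit */
	const unsigned long cur_cap  = 8192;      /* kernel-chosen cap in htable_create() */
	const unsigned long prop_cap = 1048576;   /* proposed hard limit for cfg->size */

	printf("current cap:  %lu KByte\n", cur_cap * hlist_head_size / 1024);           /* ~64 KByte */
	printf("proposed cap: %lu MByte\n", prop_cap * hlist_head_size / (1024 * 1024)); /* ~8 MByte  */

	/* 2**23 entries spread over 2**20 buckets -> average chain depth of 8 */
	printf("entries at average depth 8: %lu\n", prop_cap * 8);
	return 0;
}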

In order to prevent breaking userspace, perhaps make it so that the
kernel caps cfg.max at twice that value?  That would allow storing up to
16777216 addresses with an average chain depth of 16 (which is quite
large).  We could increase the max limit later if someone presents a use
case.
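
Something like the following (untested sketch only, to illustrate the
idea; the constants are just the numbers proposed above and are of
course up for discussion):

/* untested sketch: clamp the user-supplied values instead of returning
 * -ENOMEM, so restore of existing rulesets keeps working.
 */
#define HASHLIMIT_MAX_SIZE	1048576				/* 2**20 buckets, ~8 MByte vmalloc */
#define HASHLIMIT_MAX_MAX	(16 * HASHLIMIT_MAX_SIZE)	/* 2**24 entries, chain depth 16 */

	/* in hashlimit_mt_check_common(), after the existing sanity checks: */
	if (cfg->size > HASHLIMIT_MAX_SIZE)
		cfg->size = HASHLIMIT_MAX_SIZE;
	if (cfg->max > HASHLIMIT_MAX_MAX)
		cfg->max = HASHLIMIT_MAX_MAX;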

What do you think?
