Message-ID: <8736iuyx28.fsf@toke.dk>
Date:   Thu, 25 Jul 2019 17:05:03 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>, netdev@...r.kernel.org,
        David Miller <davem@...emloft.net>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        Björn Töpel <bjorn.topel@...il.com>,
        Yonghong Song <yhs@...com>, brouer@...hat.com
Subject: Re: [PATCH bpf-next v4 3/6] xdp: Add devmap_hash map type for looking up devices by hashed index

Jesper Dangaard Brouer <brouer@...hat.com> writes:

> On Thu, 25 Jul 2019 12:32:19 +0200
> Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>
>> Jesper Dangaard Brouer <brouer@...hat.com> writes:
>> 
>> > On Mon, 22 Jul 2019 13:52:48 +0200
>> > Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>> >  
>> >> +static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
>> >> +						    int idx)
>> >> +{
>> >> +	return &dtab->dev_index_head[idx & (NETDEV_HASHENTRIES - 1)];
>> >> +}  
>> >
>> > It is good for performance that our "hash" function is simply an AND
>> > operation on the idx.  We want to keep it this way.
>> >
>> > I don't like that you are using NETDEV_HASHENTRIES, because the BPF map
>> > infrastructure already has a way to specify the map size (struct
>> > bpf_map_def .max_entries).  BUT for performance reasons, to keep the
>> > AND operation, we would need to round the hash-array size up to the
>> > nearest power of 2 (or reject sizes that aren't a power of 2, if we
>> > want to "expose" this limit to users).  
>> 
>> But do we really want the number of hash buckets to be equal to the max
>> number of entries? The values are not likely to be evenly distributed,
>> so we'll end up with big buckets if the number is small, meaning we'll
>> blow performance on walking long lists in each bucket.
>
> The requested change makes it user-configurable, instead of a fixed 256
> entries.  I've seen a production use case with >5000 net_devices, so
> they need a knob to increase this (to avoid the list walking you
> mention).

Ah, I see. That makes sense; I thought you wanted to make it smaller
(cf. the previous discussion about it being too big). Still, it seems
counter-intuitive to overload max_entries in this way.
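
From the user's side, that overload would look something like the below
(sketch; assuming max_entries doubles as the bucket count):

struct bpf_map_def SEC("maps") tx_devmap = {
	.type        = BPF_MAP_TYPE_DEVMAP_HASH,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),
	.max_entries = 8192,	/* would also set the number of buckets */
};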

I do see that this is what the existing hash map is also doing, though,
so I guess there is some precedent. I do wonder if we'll end up getting
bad performance from the hash being too simplistic, but I guess we can
always fix that later.
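
For the power-of-2 rounding you mention, I imagine the setup path could
do something like this (rough, untested sketch; dev_index_mask is just
an illustrative field name, not something in the current patch):

static int dev_map_init_hash(struct bpf_dtab *dtab, u32 max_entries)
{
	/* Round up so the "hash" can stay a single AND operation */
	u32 buckets = roundup_pow_of_two(max_entries);

	dtab->dev_index_mask = buckets - 1;
	dtab->dev_index_head = kcalloc(buckets, sizeof(struct hlist_head),
				       GFP_KERNEL);
	if (!dtab->dev_index_head)
		return -ENOMEM;

	return 0;
}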

>> Also, if the size is dynamic, it needs to be loaded from memory
>> instead of being a compile-time constant, which will presumably hurt
>> performance (though I'm not sure by how much)?
>
> To counter this, the mask value that needs to be loaded from memory
> should be placed next to some other struct member that is already in
> use (at least on the same cacheline; Intel has some 16-byte-access
> micro-optimizations, which I've never been able to measure, as they're
> on the 0.5 nanosecond scale).

In the fast path (i.e., in __xdp_map_lookup_elem) we will already have
loaded map->max_entries, since it's on the same cacheline as map_type,
which we use to disambiguate which function to call. So it should be
fine to just use that directly.
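
I.e., the lookup helper would turn into something like this (sketch;
assuming map creation enforces that max_entries is a power of two, so
the AND stays a valid "hash"):

static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
						    int idx)
{
	return &dtab->dev_index_head[idx & (dtab->map.max_entries - 1)];
}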

I'll send a new version with this change :)

-Toke
