Date:	Mon, 24 Mar 2014 16:57:48 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Terry Lam <vtlam@...gle.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Eric Dumazet <edumazet@...gle.com>,
	Nandita Dukkipati <nanditad@...gle.com>
Subject: Re: [PATCH] net-qdisc-hhf: Heavy-Hitter Filter (HHF) qdisc

On Sun, Mar 23, 2014 at 9:54 PM, Terry Lam <vtlam@...gle.com> wrote:
> Hi Tom,
>
> Perturbation is mainly for added security (e.g., to guard against an
> intentionally crafted hash collision that maps an elephant flow into
> the high-priority bucket).
> I just looked at the code and it looks like skb_get_hash (indeed,
> __skb_get_hash) does not have perturbation.
> Do you mean we will soon add perturbation for connected sockets?
>
That is an interesting question. We are already using skb->hash in
several cases without any perturbation. I think there are two
possibilities:

1) Add some sort of global perturbation to skb_get_hash
2) Add function skb_get_hash_perturb(skb, perturbation)

I think there may be merit to the second: it might resolve one concern
I have, which is that using the same hash value for multiple purposes
may reduce entropy, e.g.

if (hash & 1) .... if (hash & 1) -- the second condition always hits
whenever the first did, since both filter on the same bit.
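
As a rough sketch of (2) -- purely hypothetical, this helper does not
exist today -- it could be as simple as re-hashing the flow hash under
the caller's perturbation:

    #include <linux/jhash.h>
    #include <linux/skbuff.h>

    /* Hypothetical option 2: fold a per-consumer perturbation into the
     * flow hash, so that two consumers which each test e.g. (hash & 1)
     * see independent bits rather than the same one.
     */
    static inline u32 skb_get_hash_perturb(struct sk_buff *skb,
                                           u32 perturbation)
    {
            return jhash_1word(skb_get_hash(skb), perturbation);
    }

A qdisc like HHF would then call skb_get_hash_perturb(skb,
q->perturbation) rather than rolling its own skb_hash().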

> Terry
>
>
> On Fri, Mar 21, 2014 at 2:45 PM, Tom Herbert <therbert@...gle.com> wrote:
>>
>> Terry,
>>
>> HHF defines its own skb_hash function. Do you see any issues if we
>> remove this and call skb_get_hash instead? We'll have functionality in
>> TX to set skb->hash from sk_hash. The only difference is that in the
>> non-connected socket case we won't include q->perturbation in the
>> jhash -- how important is this?
>>
>>
>> On Tue, Dec 10, 2013 at 11:26 PM, Terry Lam <vtlam@...gle.com> wrote:
>> > This patch implements the first size-based qdisc that attempts to
>> > differentiate between small flows and heavy-hitters.  The goal is to
>> > catch the heavy-hitters and move them to a separate queue with less
>> > priority so that bulk traffic does not affect the latency of critical
>> > traffic.  Currently "less priority" means less weight (2:1 in
>> > particular) in a Weighted Deficit Round Robin (WDRR) scheduler.
>> >
>> > In essence, this patch addresses the "delay-bloat" problem due to
>> > bloated buffers. In some systems, large queues may be necessary for
>> > obtaining CPU efficiency, or due to the presence of unresponsive
>> > traffic like UDP, or just a large number of connections with each
>> > having a small amount of outstanding traffic. In these circumstances,
>> > HHF aims to reduce the HoL blocking for latency sensitive traffic,
>> > while not impacting the queues built up by bulk traffic.  HHF can also
>> > be used in conjunction with other AQM mechanisms such as CoDel.
>> >
>> > To capture heavy-hitters, we implement the "multi-stage filter" design
>> > in the following paper:
>> > C. Estan and G. Varghese, "New Directions in Traffic Measurement and
>> > Accounting", in ACM SIGCOMM, 2002.
>> >
>> > Some configurable qdisc settings through 'tc':
>> > - hhf_reset_timeout: period to reset counter values in the multi-stage
>> >                      filter (default 40ms)
>> > - hhf_admit_bytes:   threshold to classify heavy-hitters
>> >                      (default 128KB)
>> > - hhf_evict_timeout: threshold to evict idle heavy-hitters
>> >                      (default 1s)
>> > - hhf_non_hh_weight: Weighted Deficit Round Robin (WDRR) weight for
>> >                      non-heavy-hitters (default 2)
>> > - hh_flows_limit:    max number of heavy-hitter flow entries
>> >                      (default 2048)
>> >
>> > Note that the ratio between hhf_admit_bytes and hhf_reset_timeout
>> > reflects the bandwidth of heavy-hitters that we attempt to capture
>> > (25Mbps with the above default settings).
>> >
>> > The false negative rate (heavy-hitter flows getting away unclassified)
>> > is zero by the design of the multi-stage filter algorithm.
>> > With 100 heavy-hitter flows, using four hashes and 4000 counters yields
>> > a false positive rate (non-heavy-hitters mistakenly classified as
>> > heavy-hitters) of less than 1e-4.
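
For reference, once there is iproute2 support, tuning the settings
above would look something like this (keywords follow the later
tc-hhf(8) syntax; illustrative only, not taken from this patch):

    tc qdisc add dev eth0 root hhf limit 1000 hh_limit 2048 \
        reset_timeout 40ms admit_bytes 128kb evict_timeout 1s \
        non_hh_weight 2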
>> >
>> > Signed-off-by: Terry Lam <vtlam@...gle.com>
>> > ---
>> >  include/uapi/linux/pkt_sched.h |  25 ++
>> >  net/sched/Kconfig              |   9 +
>> >  net/sched/Makefile             |   1 +
>> >  net/sched/sch_hhf.c            | 746 +++++++++++++++++++++++++++++++++++++++++
>> >  4 files changed, 781 insertions(+)
>> >  create mode 100644 net/sched/sch_hhf.c
>> >
>> > diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
>> > index a806687..4566993 100644
>> > --- a/include/uapi/linux/pkt_sched.h
>> > +++ b/include/uapi/linux/pkt_sched.h
>> > @@ -790,4 +790,29 @@ struct tc_fq_qd_stats {
>> >         __u32   throttled_flows;
>> >         __u32   pad;
>> >  };
>> > +
>> > +/* Heavy-Hitter Filter */
>> > +
>> > +enum {
>> > +       TCA_HHF_UNSPEC,
>> > +       TCA_HHF_BACKLOG_LIMIT,
>> > +       TCA_HHF_QUANTUM,
>> > +       TCA_HHF_HH_FLOWS_LIMIT,
>> > +       TCA_HHF_RESET_TIMEOUT,
>> > +       TCA_HHF_ADMIT_BYTES,
>> > +       TCA_HHF_EVICT_TIMEOUT,
>> > +       TCA_HHF_NON_HH_WEIGHT,
>> > +       __TCA_HHF_MAX
>> > +};
>> > +
>> > +#define TCA_HHF_MAX    (__TCA_HHF_MAX - 1)
>> > +
>> > +struct tc_hhf_xstats {
>> > +       __u32   drop_overlimit; /* number of times max qdisc packet limit
>> > +                                * was hit
>> > +                                */
>> > +       __u32   hh_overlimit;   /* number of times max heavy-hitters was hit */
>> > +       __u32   hh_tot_count;   /* number of captured heavy-hitters so far */
>> > +       __u32   hh_cur_count;   /* number of current heavy-hitters */
>> > +};
>> >  #endif
>> > diff --git a/net/sched/Kconfig b/net/sched/Kconfig
>> > index ad1f1d8..919847b 100644
>> > --- a/net/sched/Kconfig
>> > +++ b/net/sched/Kconfig
>> > @@ -286,6 +286,15 @@ config NET_SCH_FQ
>> >
>> >           If unsure, say N.
>> >
>> > +config NET_SCH_HHF
>> > +       tristate "Heavy-Hitter Filter (HHF)"
>> > +       help
>> > +         Say Y here if you want to use the Heavy-Hitter Filter (HHF)
>> > +         packet scheduling algorithm.
>> > +
>> > +         To compile this driver as a module, choose M here: the module
>> > +         will be called sch_hhf.
>> > +
>> >  config NET_SCH_INGRESS
>> >         tristate "Ingress Qdisc"
>> >         depends on NET_CLS_ACT
>> > diff --git a/net/sched/Makefile b/net/sched/Makefile
>> > index 35fa47a..3442e5f 100644
>> > --- a/net/sched/Makefile
>> > +++ b/net/sched/Makefile
>> > @@ -40,6 +40,7 @@ obj-$(CONFIG_NET_SCH_QFQ)     += sch_qfq.o
>> >  obj-$(CONFIG_NET_SCH_CODEL)    += sch_codel.o
>> >  obj-$(CONFIG_NET_SCH_FQ_CODEL) += sch_fq_codel.o
>> >  obj-$(CONFIG_NET_SCH_FQ)       += sch_fq.o
>> > +obj-$(CONFIG_NET_SCH_HHF)      += sch_hhf.o
>> >
>> >  obj-$(CONFIG_NET_CLS_U32)      += cls_u32.o
>> >  obj-$(CONFIG_NET_CLS_ROUTE4)   += cls_route.o
>> > diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
>> > new file mode 100644
>> > index 0000000..91c723e
>> > --- /dev/null
>> > +++ b/net/sched/sch_hhf.c
>> > @@ -0,0 +1,746 @@
>> > +/* net/sched/sch_hhf.c         Heavy-Hitter Filter (HHF)
>> > + *
>> > + * Copyright (C) 2013 Terry Lam <vtlam@...gle.com>
>> > + * Copyright (C) 2013 Nandita Dukkipati <nanditad@...gle.com>
>> > + */
>> > +
>> > +#include <linux/jhash.h>
>> > +#include <linux/jiffies.h>
>> > +#include <linux/module.h>
>> > +#include <linux/skbuff.h>
>> > +#include <linux/vmalloc.h>
>> > +#include <net/flow_keys.h>
>> > +#include <net/pkt_sched.h>
>> > +#include <net/sock.h>
>> > +
>> > +/*     Heavy-Hitter Filter (HHF)
>> > + *
>> > + * Principles :
>> > + * Flows are classified into two buckets: non-heavy-hitter and heavy-hitter
>> > + * buckets. Initially, a new flow starts as non-heavy-hitter. Once classified
>> > + * as heavy-hitter, it is immediately switched to the heavy-hitter bucket.
>> > + * The buckets are dequeued by a Weighted Deficit Round Robin (WDRR) scheduler,
>> > + * in which the heavy-hitter bucket is served with less weight.
>> > + * In other words, non-heavy-hitters (e.g., short bursts of critical traffic)
>> > + * are isolated from heavy-hitters (e.g., persistent bulk traffic) and also have
>> > + * higher share of bandwidth.
>> > + *
>> > + * To capture heavy-hitters, we use the "multi-stage filter" algorithm in the
>> > + * following paper:
>> > + * [EV02] C. Estan and G. Varghese, "New Directions in Traffic Measurement and
>> > + * Accounting", in ACM SIGCOMM, 2002.
>> > + *
>> > + * Conceptually, a multi-stage filter comprises k independent hash functions
>> > + * and k counter arrays. Packets are indexed into k counter arrays by k hash
>> > + * functions, respectively. The counters are then increased by the packet sizes.
>> > + * Therefore,
>> > + *    - For a heavy-hitter flow: *all* of its k array counters must be large.
>> > + *    - For a non-heavy-hitter flow: some of its k array counters can be large
>> > + *      due to hash collision with other small flows; however, with high
>> > + *      probability, not *all* k counters are large.
>> > + *
>> > + * By the design of the multi-stage filter algorithm, the false negative rate
>> > + * (heavy-hitters getting away uncaptured) is zero. However, the algorithm is
>> > + * susceptible to false positives (non-heavy-hitters mistakenly classified as
>> > + * heavy-hitters).
>> > + * Therefore, we also implement the following optimizations to reduce false
>> > + * positives by avoiding unnecessary increment of the counter values:
>> > + *    - Optimization O1: once a heavy-hitter is identified, its bytes are not
>> > + *        accounted in the array counters. This technique is called "shielding"
>> > + *        in Section 3.3.1 of [EV02].
>> > + *    - Optimization O2: conservative update of counters
>> > + *                       (Section 3.3.2 of [EV02]),
>> > + *        New counter value = max {old counter value,
>> > + *                                 smallest counter value + packet bytes}
>> > + *
>> > + * Finally, we refresh the counters periodically since otherwise the counter
>> > + * values will keep accumulating.
>> > + *
>> > + * Once a flow is classified as heavy-hitter, we also save its per-flow state
>> > + * in an exact-matching flow table so that its subsequent packets can be
>> > + * dispatched to the heavy-hitter bucket accordingly.
>> > + *
>> > + *
>> > + * At a high level, this qdisc works as follows:
>> > + * Given a packet p:
>> > + *   - If the flow-id of p (e.g., TCP 5-tuple) is already in the exact-matching
>> > + *     heavy-hitter flow table, denoted table T, then send p to the heavy-hitter
>> > + *     bucket.
>> > + *   - Otherwise, forward p to the multi-stage filter, denoted filter F
>> > + *        + If F decides that p belongs to a non-heavy-hitter flow, then send p
>> > + *          to the non-heavy-hitter bucket.
>> > + *        + Otherwise, if F decides that p belongs to a new heavy-hitter flow,
>> > + *          then set up a new flow entry for the flow-id of p in the table T and
>> > + *          send p to the heavy-hitter bucket.
>> > + *
>> > + * In this implementation:
>> > + *   - T is a fixed-size hash-table with 1024 entries. Hash collision is
>> > + *     resolved by linked-list chaining.
>> > + *   - F has four counter arrays, each array containing 1024 32-bit counters.
>> > + *     That means 4 * 1024 * 32 bits = 16KB of memory.
>> > + *   - Since each array in F contains 1024 counters, 10 bits are sufficient to
>> > + *     index into each array.
>> > + *     Hence, instead of having four hash functions, we chop the 32-bit
>> > + *     skb-hash into three 10-bit chunks, and the fourth 10-bit index is
>> > + *     computed as the XOR of those three chunks together with the two
>> > + *     leftover high bits of the hash.
>> > + *   - We need to clear the counter arrays periodically; however, directly
>> > + *     memsetting 16KB of memory can lead to cache eviction and unwanted delay.
>> > + *     So by representing each counter by a valid bit, we only need to reset
>> > + *     4K of 1 bit (i.e. 512 bytes) instead of 16KB of memory.
>> > + *   - The Deficit Round Robin engine is taken from fq_codel implementation
>> > + *     (net/sched/sch_fq_codel.c). Note that wdrr_bucket corresponds to
>> > + *     fq_codel_flow in fq_codel implementation.
>> > + *
>> > + */
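
To make the admit test concrete outside of kernel context, here is a
minimal standalone sketch of the filter with conservative update (O2);
shielding (O1) corresponds to the early return that skips the counter
update (toy sizes and names of my own):

    #include <stdint.h>
    #include <stdio.h>

    #define STAGES 4
    #define SLOTS  8       /* toy; the qdisc uses 1024 per array */
    #define ADMIT  1000    /* toy admit threshold, in bytes */

    static uint32_t counters[STAGES][SLOTS];

    /* Returns 1 if the flow indexed by pos[] is now a heavy-hitter. */
    static int filter_admit(const uint32_t pos[STAGES], uint32_t pkt_len)
    {
            uint32_t min_val = UINT32_MAX;
            int i;

            for (i = 0; i < STAGES; i++) {
                    uint32_t val = counters[i][pos[i]] + pkt_len;

                    if (val < min_val)
                            min_val = val;
            }
            if (min_val > ADMIT)
                    return 1;  /* all stages above threshold: HH */

            /* Conservative update: raise each counter only as far as
             * the smallest one, never by the full packet size.
             */
            for (i = 0; i < STAGES; i++)
                    if (counters[i][pos[i]] < min_val)
                            counters[i][pos[i]] = min_val;
            return 0;
    }

    int main(void)
    {
            uint32_t pos[STAGES] = { 1, 2, 3, 0 };
            int n = 0;

            /* One flow sending 300-byte packets: the 4th crosses ADMIT. */
            while (!filter_admit(pos, 300))
                    n++;
            printf("heavy-hitter after %d unclassified packets\n", n);
            return 0;
    }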
>> > +
>> > +/* Non-configurable parameters */
>> > +#define HH_FLOWS_CNT    1024  /* number of entries in exact-matching table T */
>> > +#define HHF_ARRAYS_CNT  4     /* number of arrays in multi-stage filter F */
>> > +#define HHF_ARRAYS_LEN  1024  /* number of counters in each array of F */
>> > +#define HHF_BIT_MASK_LEN 10    /* masking 10 bits */
>> > +#define HHF_BIT_MASK    0x3FF /* bitmask of 10 bits */
>> > +
>> > +#define WDRR_BUCKET_CNT  2     /* two buckets for Weighted DRR */
>> > +enum wdrr_bucket_idx {
>> > +       WDRR_BUCKET_FOR_HH      = 0, /* bucket id for heavy-hitters */
>> > +       WDRR_BUCKET_FOR_NON_HH  = 1  /* bucket id for non-heavy-hitters */
>> > +};
>> > +
>> > +#define hhf_time_before(a, b)  \
>> > +       (typecheck(u32, a) && typecheck(u32, b) && ((s32)((a) - (b)) < 0))
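
This is the same signed-subtraction idiom as the kernel's
time_before(), and it stays correct across u32 wraparound.  A
standalone toy check, for illustration:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t a = 0xfffffffeu;  /* timestamp just before the wrap */
            uint32_t b = a + 3;        /* 3 ticks later, wrapped to 1 */

            /* Same test as hhf_time_before(a, b): a - b is 0xfffffffd,
             * i.e. -3 as s32, so a is still "before" b even though
             * a > b as plain unsigned values.
             */
            assert((int32_t)(a - b) < 0);
            return 0;
    }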
>> > +
>> > +/* Heavy-hitter per-flow state */
>> > +struct hh_flow_state {
>> > +       u32              hash_id;       /* hash of flow-id (e.g. TCP 5-tuple) */
>> > +       u32              hit_timestamp; /* last time heavy-hitter was seen */
>> > +       struct list_head flowchain;     /* chaining under hash collision */
>> > +};
>> > +
>> > +/* Weighted Deficit Round Robin (WDRR) scheduler */
>> > +struct wdrr_bucket {
>> > +       struct sk_buff    *head;
>> > +       struct sk_buff    *tail;
>> > +       struct list_head  bucketchain;
>> > +       int               deficit;
>> > +};
>> > +
>> > +struct hhf_sched_data {
>> > +       struct wdrr_bucket buckets[WDRR_BUCKET_CNT];
>> > +       u32                perturbation;   /* hash perturbation */
>> > +       u32                quantum;        /* psched_mtu(qdisc_dev(sch)); */
>> > +       u32                drop_overlimit; /* number of times max qdisc packet
>> > +                                           * limit was hit
>> > +                                           */
>> > +       struct list_head   *hh_flows;       /* table T (currently active HHs) */
>> > +       u32                hh_flows_limit;            /* max active HH allocs */
>> > +       u32                hh_flows_overlimit; /* num of disallowed HH allocs */
>> > +       u32                hh_flows_total_cnt;          /* total admitted HHs */
>> > +       u32                hh_flows_current_cnt;        /* total current HHs  */
>> > +       u32                *hhf_arrays[HHF_ARRAYS_CNT]; /* HH filter F */
>> > +       u32                hhf_arrays_reset_timestamp;  /* last time hhf_arrays
>> > +                                                        * was reset
>> > +                                                        */
>> > +       unsigned long      *hhf_valid_bits[HHF_ARRAYS_CNT]; /* shadow valid bits
>> > +                                                            * of hhf_arrays
>> > +                                                            */
>> > +       /* Similar to the "new_flows" vs. "old_flows" concept in fq_codel DRR */
>> > +       struct list_head   new_buckets; /* list of new buckets */
>> > +       struct list_head   old_buckets; /* list of old buckets */
>> > +
>> > +       /* Configurable HHF parameters */
>> > +       u32                hhf_reset_timeout; /* interval to reset counter
>> > +                                              * arrays in filter F
>> > +                                              * (default 40ms)
>> > +                                              */
>> > +       u32                hhf_admit_bytes;   /* counter thresh to classify as
>> > +                                              * HH (default 128KB).
>> > +                                              * With these default values,
>> > +                                              * 128KB / 40ms = 25 Mbps
>> > +                                              * i.e., we expect to capture HHs
>> > +                                              * sending > 25 Mbps.
>> > +                                              */
>> > +       u32                hhf_evict_timeout; /* aging threshold to evict idle
>> > +                                              * HHs out of table T. This should
>> > +                                              * be large enough to avoid
>> > +                                              * reordering during HH eviction.
>> > +                                              * (default 1s)
>> > +                                              */
>> > +       u32                hhf_non_hh_weight; /* WDRR weight for non-HHs
>> > +                                              * (default 2,
>> > +                                              *  i.e., non-HH : HH = 2 : 1)
>> > +                                              */
>> > +};
>> > +
>> > +static inline u32 hhf_time_stamp(void)
>> > +{
>> > +       return jiffies;
>> > +}
>> > +
>> > +static unsigned int skb_hash(const struct hhf_sched_data *q,
>> > +                            const struct sk_buff *skb)
>> > +{
>> > +       struct flow_keys keys;
>> > +       unsigned int hash;
>> > +
>> > +       if (skb->sk && skb->sk->sk_hash)
>> > +               return skb->sk->sk_hash;
>> > +
>> > +       skb_flow_dissect(skb, &keys);
>> > +       hash = jhash_3words((__force u32)keys.dst,
>> > +                           (__force u32)keys.src ^ keys.ip_proto,
>> > +                           (__force u32)keys.ports, q->perturbation);
>> > +       return hash;
>> > +}
>> > +
>> > +/* Looks up a heavy-hitter flow in a chaining list of table T. */
>> > +static inline struct hh_flow_state *seek_list(const u32 hash,
>> > +                                             struct list_head *head,
>> > +                                             struct hhf_sched_data *q)
>> > +{
>> > +       struct hh_flow_state *flow, *next;
>> > +       u32 now = hhf_time_stamp();
>> > +
>> > +       if (list_empty(head))
>> > +               return NULL;
>> > +
>> > +       list_for_each_entry_safe(flow, next, head, flowchain) {
>> > +               u32 prev = flow->hit_timestamp + q->hhf_evict_timeout;
>> > +
>> > +               if (hhf_time_before(prev, now)) {
>> > +                       /* Delete expired heavy-hitters, but preserve one entry
>> > +                        * to avoid kzalloc() when next time this slot is hit.
>> > +                        */
>> > +                       if (list_is_last(&flow->flowchain, head))
>> > +                               return NULL;
>> > +                       list_del(&flow->flowchain);
>> > +                       kfree(flow);
>> > +                       q->hh_flows_current_cnt--;
>> > +               } else if (flow->hash_id == hash) {
>> > +                       return flow;
>> > +               }
>> > +       }
>> > +       return NULL;
>> > +}
>> > +
>> > +/* Returns a flow state entry for a new heavy-hitter.  Either reuses an expired
>> > + * entry or dynamically allocates a new entry.
>> > + */
>> > +static inline struct hh_flow_state *alloc_new_hh(struct list_head *head,
>> > +                                                struct hhf_sched_data *q)
>> > +{
>> > +       struct hh_flow_state *flow;
>> > +       u32 now = hhf_time_stamp();
>> > +
>> > +       if (!list_empty(head)) {
>> > +               /* Find an expired heavy-hitter flow entry. */
>> > +               list_for_each_entry(flow, head, flowchain) {
>> > +                       u32 prev = flow->hit_timestamp + q->hhf_evict_timeout;
>> > +
>> > +                       if (hhf_time_before(prev, now))
>> > +                               return flow;
>> > +               }
>> > +       }
>> > +
>> > +       if (q->hh_flows_current_cnt >= q->hh_flows_limit) {
>> > +               q->hh_flows_overlimit++;
>> > +               return NULL;
>> > +       }
>> > +       /* Create new entry. */
>> > +       flow = kzalloc(sizeof(struct hh_flow_state), GFP_ATOMIC);
>> > +       if (!flow)
>> > +               return NULL;
>> > +
>> > +       q->hh_flows_current_cnt++;
>> > +       INIT_LIST_HEAD(&flow->flowchain);
>> > +       list_add_tail(&flow->flowchain, head);
>> > +
>> > +       return flow;
>> > +}
>> > +
>> > +/* Assigns packets to WDRR buckets.  Implements a multi-stage filter to
>> > + * classify heavy-hitters.
>> > + */
>> > +static enum wdrr_bucket_idx hhf_classify(struct sk_buff *skb, struct Qdisc *sch)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       u32 tmp_hash, hash;
>> > +       u32 xorsum, filter_pos[HHF_ARRAYS_CNT], flow_pos;
>> > +       struct hh_flow_state *flow;
>> > +       u32 pkt_len, min_hhf_val;
>> > +       int i;
>> > +       u32 prev;
>> > +       u32 now = hhf_time_stamp();
>> > +
>> > +       /* Reset the HHF counter arrays if this is the right time. */
>> > +       prev = q->hhf_arrays_reset_timestamp + q->hhf_reset_timeout;
>> > +       if (hhf_time_before(prev, now)) {
>> > +               for (i = 0; i < HHF_ARRAYS_CNT; i++)
>> > +                       bitmap_zero(q->hhf_valid_bits[i], HHF_ARRAYS_LEN);
>> > +               q->hhf_arrays_reset_timestamp = now;
>> > +       }
>> > +
>> > +       /* Get hashed flow-id of the skb. */
>> > +       hash = skb_hash(q, skb);
>> > +
>> > +       /* Check if this packet belongs to an already established HH flow. */
>> > +       flow_pos = hash & HHF_BIT_MASK;
>> > +       flow = seek_list(hash, &q->hh_flows[flow_pos], q);
>> > +       if (flow) { /* found its HH flow */
>> > +               flow->hit_timestamp = now;
>> > +               return WDRR_BUCKET_FOR_HH;
>> > +       }
>> > +
>> > +       /* Now pass the packet through the multi-stage filter. */
>> > +       tmp_hash = hash;
>> > +       xorsum = 0;
>> > +       for (i = 0; i < HHF_ARRAYS_CNT - 1; i++) {
>> > +               /* Split the skb_hash into three 10-bit chunks. */
>> > +               filter_pos[i] = tmp_hash & HHF_BIT_MASK;
>> > +               xorsum ^= filter_pos[i];
>> > +               tmp_hash >>= HHF_BIT_MASK_LEN;
>> > +       }
>> > +       /* The last index is the XOR of the three chunks and the leftover bits. */
>> > +       filter_pos[HHF_ARRAYS_CNT - 1] = xorsum ^ tmp_hash;
>> > +
>> > +       pkt_len = qdisc_pkt_len(skb);
>> > +       min_hhf_val = ~0U;
>> > +       for (i = 0; i < HHF_ARRAYS_CNT; i++) {
>> > +               u32 val;
>> > +
>> > +               if (!test_bit(filter_pos[i], q->hhf_valid_bits[i])) {
>> > +                       q->hhf_arrays[i][filter_pos[i]] = 0;
>> > +                       __set_bit(filter_pos[i], q->hhf_valid_bits[i]);
>> > +               }
>> > +
>> > +               val = q->hhf_arrays[i][filter_pos[i]] + pkt_len;
>> > +               if (min_hhf_val > val)
>> > +                       min_hhf_val = val;
>> > +       }
>> > +
>> > +       /* Found a new HH iff all counter values > HH admit threshold. */
>> > +       if (min_hhf_val > q->hhf_admit_bytes) {
>> > +               /* Just captured a new heavy-hitter. */
>> > +               flow = alloc_new_hh(&q->hh_flows[flow_pos], q);
>> > +               if (!flow) /* memory alloc problem */
>> > +                       return WDRR_BUCKET_FOR_NON_HH;
>> > +               flow->hash_id = hash;
>> > +               flow->hit_timestamp = now;
>> > +               q->hh_flows_total_cnt++;
>> > +
>> > +               /* By returning without updating counters in q->hhf_arrays,
>> > +                * we implicitly implement "shielding" (see Optimization O1).
>> > +                */
>> > +               return WDRR_BUCKET_FOR_HH;
>> > +       }
>> > +
>> > +       /* Conservative update of HHF arrays (see Optimization O2). */
>> > +       for (i = 0; i < HHF_ARRAYS_CNT; i++) {
>> > +               if (q->hhf_arrays[i][filter_pos[i]] < min_hhf_val)
>> > +                       q->hhf_arrays[i][filter_pos[i]] = min_hhf_val;
>> > +       }
>> > +       return WDRR_BUCKET_FOR_NON_HH;
>> > +}
>> > +
>> > +/* Removes one skb from head of bucket. */
>> > +static inline struct sk_buff *dequeue_head(struct wdrr_bucket *bucket)
>> > +{
>> > +       struct sk_buff *skb = bucket->head;
>> > +
>> > +       bucket->head = skb->next;
>> > +       skb->next = NULL;
>> > +       return skb;
>> > +}
>> > +
>> > +/* Tail-adds skb to bucket. */
>> > +static inline void bucket_add(struct wdrr_bucket *bucket, struct sk_buff *skb)
>> > +{
>> > +       if (bucket->head == NULL)
>> > +               bucket->head = skb;
>> > +       else
>> > +               bucket->tail->next = skb;
>> > +       bucket->tail = skb;
>> > +       skb->next = NULL;
>> > +}
>> > +
>> > +static unsigned int hhf_drop(struct Qdisc *sch)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       struct wdrr_bucket *bucket;
>> > +
>> > +       /* Always try to drop from heavy-hitters first. */
>> > +       bucket = &q->buckets[WDRR_BUCKET_FOR_HH];
>> > +       if (!bucket->head)
>> > +               bucket = &q->buckets[WDRR_BUCKET_FOR_NON_HH];
>> > +
>> > +       if (bucket->head) {
>> > +               struct sk_buff *skb = dequeue_head(bucket);
>> > +
>> > +               sch->q.qlen--;
>> > +               sch->qstats.drops++;
>> > +               sch->qstats.backlog -= qdisc_pkt_len(skb);
>> > +               kfree_skb(skb);
>> > +       }
>> > +
>> > +       /* Return id of the bucket from which the packet was dropped. */
>> > +       return bucket - q->buckets;
>> > +}
>> > +
>> > +static int hhf_enqueue(struct sk_buff *skb, struct Qdisc *sch)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       enum wdrr_bucket_idx idx;
>> > +       struct wdrr_bucket *bucket;
>> > +
>> > +       idx = hhf_classify(skb, sch);
>> > +
>> > +       bucket = &q->buckets[idx];
>> > +       bucket_add(bucket, skb);
>> > +       sch->qstats.backlog += qdisc_pkt_len(skb);
>> > +
>> > +       if (list_empty(&bucket->bucketchain)) {
>> > +               unsigned int weight;
>> > +
>> > +               /* The logic of new_buckets vs. old_buckets is the same as
>> > +                * new_flows vs. old_flows in the implementation of fq_codel,
>> > +                * i.e., short bursts of non-HHs should have strict priority.
>> > +                */
>> > +               if (idx == WDRR_BUCKET_FOR_HH) {
>> > +                       /* Always move heavy-hitters to old bucket. */
>> > +                       weight = 1;
>> > +                       list_add_tail(&bucket->bucketchain, &q->old_buckets);
>> > +               } else {
>> > +                       weight = q->hhf_non_hh_weight;
>> > +                       list_add_tail(&bucket->bucketchain, &q->new_buckets);
>> > +               }
>> > +               bucket->deficit = weight * q->quantum;
>> > +       }
>> > +       if (++sch->q.qlen <= sch->limit)
>> > +               return NET_XMIT_SUCCESS;
>> > +
>> > +       q->drop_overlimit++;
>> > +       /* Return Congestion Notification only if we dropped a packet from this
>> > +        * bucket.
>> > +        */
>> > +       if (hhf_drop(sch) == idx)
>> > +               return NET_XMIT_CN;
>> > +
>> > +       /* As we dropped a packet, better let upper stack know this. */
>> > +       qdisc_tree_decrease_qlen(sch, 1);
>> > +       return NET_XMIT_SUCCESS;
>> > +}
>> > +
>> > +static struct sk_buff *hhf_dequeue(struct Qdisc *sch)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       struct sk_buff *skb = NULL;
>> > +       struct wdrr_bucket *bucket;
>> > +       struct list_head *head;
>> > +
>> > +begin:
>> > +       head = &q->new_buckets;
>> > +       if (list_empty(head)) {
>> > +               head = &q->old_buckets;
>> > +               if (list_empty(head))
>> > +                       return NULL;
>> > +       }
>> > +       bucket = list_first_entry(head, struct wdrr_bucket, bucketchain);
>> > +
>> > +       if (bucket->deficit <= 0) {
>> > +               int weight = (bucket - q->buckets == WDRR_BUCKET_FOR_HH) ?
>> > +                             1 : q->hhf_non_hh_weight;
>> > +
>> > +               bucket->deficit += weight * q->quantum;
>> > +               list_move_tail(&bucket->bucketchain, &q->old_buckets);
>> > +               goto begin;
>> > +       }
>> > +
>> > +       if (bucket->head) {
>> > +               skb = dequeue_head(bucket);
>> > +               sch->q.qlen--;
>> > +               sch->qstats.backlog -= qdisc_pkt_len(skb);
>> > +       }
>> > +
>> > +       if (!skb) {
>> > +               /* Force a pass through old_buckets to prevent starvation. */
>> > +               if ((head == &q->new_buckets) && !list_empty(&q->old_buckets))
>> > +                       list_move_tail(&bucket->bucketchain, &q->old_buckets);
>> > +               else
>> > +                       list_del_init(&bucket->bucketchain);
>> > +               goto begin;
>> > +       }
>> > +       qdisc_bstats_update(sch, skb);
>> > +       bucket->deficit -= qdisc_pkt_len(skb);
>> > +
>> > +       return skb;
>> > +}
>> > +
>> > +static void hhf_reset(struct Qdisc *sch)
>> > +{
>> > +       struct sk_buff *skb;
>> > +
>> > +       while ((skb = hhf_dequeue(sch)) != NULL)
>> > +               kfree_skb(skb);
>> > +}
>> > +
>> > +static void *hhf_zalloc(size_t sz)
>> > +{
>> > +       void *ptr = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN);
>> > +
>> > +       if (!ptr)
>> > +               ptr = vzalloc(sz);
>> > +
>> > +       return ptr;
>> > +}
>> > +
>> > +static void hhf_free(void *addr)
>> > +{
>> > +       if (addr) {
>> > +               if (is_vmalloc_addr(addr))
>> > +                       vfree(addr);
>> > +               else
>> > +                       kfree(addr);
>> > +       }
>> > +}
>> > +
>> > +static void hhf_destroy(struct Qdisc *sch)
>> > +{
>> > +       int i;
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +
>> > +       for (i = 0; i < HHF_ARRAYS_CNT; i++) {
>> > +               hhf_free(q->hhf_arrays[i]);
>> > +               hhf_free(q->hhf_valid_bits[i]);
>> > +       }
>> > +
>> > +       for (i = 0; i < HH_FLOWS_CNT; i++) {
>> > +               struct hh_flow_state *flow, *next;
>> > +               struct list_head *head = &q->hh_flows[i];
>> > +
>> > +               if (list_empty(head))
>> > +                       continue;
>> > +               list_for_each_entry_safe(flow, next, head, flowchain) {
>> > +                       list_del(&flow->flowchain);
>> > +                       kfree(flow);
>> > +               }
>> > +       }
>> > +       hhf_free(q->hh_flows);
>> > +}
>> > +
>> > +static const struct nla_policy hhf_policy[TCA_HHF_MAX + 1] = {
>> > +       [TCA_HHF_BACKLOG_LIMIT]  = { .type = NLA_U32 },
>> > +       [TCA_HHF_QUANTUM]        = { .type = NLA_U32 },
>> > +       [TCA_HHF_HH_FLOWS_LIMIT] = { .type = NLA_U32 },
>> > +       [TCA_HHF_RESET_TIMEOUT]  = { .type = NLA_U32 },
>> > +       [TCA_HHF_ADMIT_BYTES]    = { .type = NLA_U32 },
>> > +       [TCA_HHF_EVICT_TIMEOUT]  = { .type = NLA_U32 },
>> > +       [TCA_HHF_NON_HH_WEIGHT]  = { .type = NLA_U32 },
>> > +};
>> > +
>> > +static int hhf_change(struct Qdisc *sch, struct nlattr *opt)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       struct nlattr *tb[TCA_HHF_MAX + 1];
>> > +       unsigned int qlen;
>> > +       int err;
>> > +       u64 non_hh_quantum;
>> > +       u32 new_quantum = q->quantum;
>> > +       u32 new_hhf_non_hh_weight = q->hhf_non_hh_weight;
>> > +
>> > +       if (!opt)
>> > +               return -EINVAL;
>> > +
>> > +       err = nla_parse_nested(tb, TCA_HHF_MAX, opt, hhf_policy);
>> > +       if (err < 0)
>> > +               return err;
>> > +
>> > +       sch_tree_lock(sch);
>> > +
>> > +       if (tb[TCA_HHF_BACKLOG_LIMIT])
>> > +               sch->limit = nla_get_u32(tb[TCA_HHF_BACKLOG_LIMIT]);
>> > +
>> > +       if (tb[TCA_HHF_QUANTUM])
>> > +               new_quantum = nla_get_u32(tb[TCA_HHF_QUANTUM]);
>> > +
>> > +       if (tb[TCA_HHF_NON_HH_WEIGHT])
>> > +               new_hhf_non_hh_weight = nla_get_u32(tb[TCA_HHF_NON_HH_WEIGHT]);
>> > +
>> > +       non_hh_quantum = (u64)new_quantum * new_hhf_non_hh_weight;
>> > +       if (non_hh_quantum > INT_MAX) {
>> > +               sch_tree_unlock(sch);
>> > +               return -EINVAL;
>> > +       }
>> > +       q->quantum = new_quantum;
>> > +       q->hhf_non_hh_weight = new_hhf_non_hh_weight;
>> > +
>> > +       if (tb[TCA_HHF_HH_FLOWS_LIMIT])
>> > +               q->hh_flows_limit = nla_get_u32(tb[TCA_HHF_HH_FLOWS_LIMIT]);
>> > +
>> > +       if (tb[TCA_HHF_RESET_TIMEOUT]) {
>> > +               u32 ms = nla_get_u32(tb[TCA_HHF_RESET_TIMEOUT]);
>> > +
>> > +               q->hhf_reset_timeout = msecs_to_jiffies(ms);
>> > +       }
>> > +
>> > +       if (tb[TCA_HHF_ADMIT_BYTES])
>> > +               q->hhf_admit_bytes = nla_get_u32(tb[TCA_HHF_ADMIT_BYTES]);
>> > +
>> > +       if (tb[TCA_HHF_EVICT_TIMEOUT]) {
>> > +               u32 ms = nla_get_u32(tb[TCA_HHF_EVICT_TIMEOUT]);
>> > +
>> > +               q->hhf_evict_timeout = msecs_to_jiffies(ms);
>> > +       }
>> > +
>> > +       qlen = sch->q.qlen;
>> > +       while (sch->q.qlen > sch->limit) {
>> > +               struct sk_buff *skb = hhf_dequeue(sch);
>> > +
>> > +               kfree_skb(skb);
>> > +       }
>> > +       qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen);
>> > +
>> > +       sch_tree_unlock(sch);
>> > +       return 0;
>> > +}
>> > +
>> > +static int hhf_init(struct Qdisc *sch, struct nlattr *opt)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       int i;
>> > +
>> > +       sch->limit = 1000;
>> > +       q->quantum = psched_mtu(qdisc_dev(sch));
>> > +       q->perturbation = net_random();
>> > +       INIT_LIST_HEAD(&q->new_buckets);
>> > +       INIT_LIST_HEAD(&q->old_buckets);
>> > +
>> > +       /* Configurable HHF parameters */
>> > +       q->hhf_reset_timeout = HZ / 25; /* 40  ms */
>> > +       q->hhf_admit_bytes = 131072;    /* 128 KB */
>> > +       q->hhf_evict_timeout = HZ;      /* 1  sec */
>> > +       q->hhf_non_hh_weight = 2;
>> > +
>> > +       if (opt) {
>> > +               int err = hhf_change(sch, opt);
>> > +
>> > +               if (err)
>> > +                       return err;
>> > +       }
>> > +
>> > +       if (!q->hh_flows) {
>> > +               /* Initialize heavy-hitter flow table. */
>> > +               q->hh_flows = hhf_zalloc(HH_FLOWS_CNT *
>> > +                                        sizeof(struct list_head));
>> > +               if (!q->hh_flows)
>> > +                       return -ENOMEM;
>> > +               for (i = 0; i < HH_FLOWS_CNT; i++)
>> > +                       INIT_LIST_HEAD(&q->hh_flows[i]);
>> > +
>> > +               /* Cap max active HHs at twice len of hh_flows table. */
>> > +               q->hh_flows_limit = 2 * HH_FLOWS_CNT;
>> > +               q->hh_flows_overlimit = 0;
>> > +               q->hh_flows_total_cnt = 0;
>> > +               q->hh_flows_current_cnt = 0;
>> > +
>> > +               /* Initialize heavy-hitter filter arrays. */
>> > +               for (i = 0; i < HHF_ARRAYS_CNT; i++) {
>> > +                       q->hhf_arrays[i] = hhf_zalloc(HHF_ARRAYS_LEN *
>> > +                                                     sizeof(u32));
>> > +                       if (!q->hhf_arrays[i]) {
>> > +                               hhf_destroy(sch);
>> > +                               return -ENOMEM;
>> > +                       }
>> > +               }
>> > +               q->hhf_arrays_reset_timestamp = hhf_time_stamp();
>> > +
>> > +               /* Initialize valid bits of heavy-hitter filter arrays. */
>> > +               for (i = 0; i < HHF_ARRAYS_CNT; i++) {
>> > +                       q->hhf_valid_bits[i] = hhf_zalloc(HHF_ARRAYS_LEN /
>> > +                                                         BITS_PER_BYTE);
>> > +                       if (!q->hhf_valid_bits[i]) {
>> > +                               hhf_destroy(sch);
>> > +                               return -ENOMEM;
>> > +                       }
>> > +               }
>> > +
>> > +               /* Initialize Weighted DRR buckets. */
>> > +               for (i = 0; i < WDRR_BUCKET_CNT; i++) {
>> > +                       struct wdrr_bucket *bucket = q->buckets + i;
>> > +
>> > +                       INIT_LIST_HEAD(&bucket->bucketchain);
>> > +               }
>> > +       }
>> > +
>> > +       return 0;
>> > +}
>> > +
>> > +static int hhf_dump(struct Qdisc *sch, struct sk_buff *skb)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       struct nlattr *opts;
>> > +
>> > +       opts = nla_nest_start(skb, TCA_OPTIONS);
>> > +       if (opts == NULL)
>> > +               goto nla_put_failure;
>> > +
>> > +       if (nla_put_u32(skb, TCA_HHF_BACKLOG_LIMIT, sch->limit) ||
>> > +           nla_put_u32(skb, TCA_HHF_QUANTUM, q->quantum) ||
>> > +           nla_put_u32(skb, TCA_HHF_HH_FLOWS_LIMIT, q->hh_flows_limit) ||
>> > +           nla_put_u32(skb, TCA_HHF_RESET_TIMEOUT,
>> > +                       jiffies_to_msecs(q->hhf_reset_timeout)) ||
>> > +           nla_put_u32(skb, TCA_HHF_ADMIT_BYTES, q->hhf_admit_bytes) ||
>> > +           nla_put_u32(skb, TCA_HHF_EVICT_TIMEOUT,
>> > +                       jiffies_to_msecs(q->hhf_evict_timeout)) ||
>> > +           nla_put_u32(skb, TCA_HHF_NON_HH_WEIGHT, q->hhf_non_hh_weight))
>> > +               goto nla_put_failure;
>> > +
>> > +       nla_nest_end(skb, opts);
>> > +       return skb->len;
>> > +
>> > +nla_put_failure:
>> > +       return -1;
>> > +}
>> > +
>> > +static int hhf_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
>> > +{
>> > +       struct hhf_sched_data *q = qdisc_priv(sch);
>> > +       struct tc_hhf_xstats st = {
>> > +               .drop_overlimit = q->drop_overlimit,
>> > +               .hh_overlimit   = q->hh_flows_overlimit,
>> > +               .hh_tot_count   = q->hh_flows_total_cnt,
>> > +               .hh_cur_count   = q->hh_flows_current_cnt,
>> > +       };
>> > +
>> > +       return gnet_stats_copy_app(d, &st, sizeof(st));
>> > +}
>> > +
>> > +struct Qdisc_ops hhf_qdisc_ops __read_mostly = {
>> > +       .id             =       "hhf",
>> > +       .priv_size      =       sizeof(struct hhf_sched_data),
>> > +
>> > +       .enqueue        =       hhf_enqueue,
>> > +       .dequeue        =       hhf_dequeue,
>> > +       .peek           =       qdisc_peek_dequeued,
>> > +       .drop           =       hhf_drop,
>> > +       .init           =       hhf_init,
>> > +       .reset          =       hhf_reset,
>> > +       .destroy        =       hhf_destroy,
>> > +       .change         =       hhf_change,
>> > +       .dump           =       hhf_dump,
>> > +       .dump_stats     =       hhf_dump_stats,
>> > +       .owner          =       THIS_MODULE,
>> > +};
>> > +EXPORT_SYMBOL(hhf_qdisc_ops);
>> > +
>> > +static int __init hhf_module_init(void)
>> > +{
>> > +       return register_qdisc(&hhf_qdisc_ops);
>> > +}
>> > +
>> > +static void __exit hhf_module_exit(void)
>> > +{
>> > +       unregister_qdisc(&hhf_qdisc_ops);
>> > +}
>> > +
>> > +module_init(hhf_module_init)
>> > +module_exit(hhf_module_exit)
>> > +MODULE_AUTHOR("Terry Lam");
>> > +MODULE_AUTHOR("Nandita Dukkipati");
>> > +MODULE_LICENSE("GPL");
>> > --
>> > 1.8.5.1
>> >