Date:   Wed, 15 Jan 2020 23:22:47 +0100
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     John Fastabend <john.fastabend@...il.com>, netdev@...r.kernel.org
Cc:     bpf@...r.kernel.org, Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        David Miller <davem@...emloft.net>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Björn Töpel <bjorn.topel@...il.com>,
        John Fastabend <john.fastabend@...il.com>
Subject: RE: [PATCH bpf-next v2 1/2] xdp: Move devmap bulk queue into struct net_device

John Fastabend <john.fastabend@...il.com> writes:

> Toke Høiland-Jørgensen wrote:
>> From: Toke Høiland-Jørgensen <toke@...hat.com>
>> 
>> Commit 96360004b862 ("xdp: Make devmap flush_list common for all map
>> instances"), changed devmap flushing to be a global operation instead of a
>> per-map operation. However, the queue structure used for bulking was still
>> allocated as part of the containing map.
>> 
>> This patch moves the devmap bulk queue into struct net_device. The
>> motivation for this is reusing it for the non-map variant of XDP_REDIRECT,
>> which will be changed in a subsequent commit.  To avoid other fields of
>> struct net_device moving to different cache lines, we also move a couple of
>> other members around.
>> 
>> We defer the actual allocation of the bulk queue structure until the
>> NETDEV_REGISTER notification in devmap.c. This makes it possible to check for
>> ndo_xdp_xmit support before allocating the structure, which is not possible
>> at the time struct net_device is allocated. However, we keep the freeing in
>> free_netdev() to avoid adding another RCU callback on NETDEV_UNREGISTER.
>> 
>> Because of this change, we lose the reference back to the map that
>> originated the redirect, so change the tracepoint to always return 0 as the
>> map ID and index. Otherwise no functional change is intended with this
>> patch.
>> 
>> Acked-by: Björn Töpel <bjorn.topel@...el.com>
>> Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
>> ---
>
> LGTM. I didn't check the net_device layout with pahole though, so I'm
> trusting it's good based on the v1 discussion.

I believe so; looks like this now:

	/* --- cacheline 14 boundary (896 bytes) --- */
	struct netdev_queue *      _tx __attribute__((__aligned__(64))); /*   896     8 */
	unsigned int               num_tx_queues;        /*   904     4 */
	unsigned int               real_num_tx_queues;   /*   908     4 */
	struct Qdisc *             qdisc;                /*   912     8 */
	unsigned int               tx_queue_len;         /*   920     4 */
	spinlock_t                 tx_global_lock;       /*   924     4 */
	struct xdp_dev_bulk_queue * xdp_bulkq;           /*   928     8 */
	struct xps_dev_maps *      xps_cpus_map;         /*   936     8 */
	struct xps_dev_maps *      xps_rxqs_map;         /*   944     8 */
	struct mini_Qdisc *        miniq_egress;         /*   952     8 */
	/* --- cacheline 15 boundary (960 bytes) --- */
	struct hlist_head  qdisc_hash[16];               /*   960   128 */
	/* --- cacheline 17 boundary (1088 bytes) --- */
	struct timer_list  watchdog_timer;               /*  1088    40 */

	/* XXX last struct has 4 bytes of padding */

	int                        watchdog_timeo;       /*  1128     4 */

	/* XXX 4 bytes hole, try to pack */

	int *                      pcpu_refcnt;          /*  1136     8 */
	struct list_head   todo_list;                    /*  1144    16 */
	/* --- cacheline 18 boundary (1152 bytes) was 8 bytes ago --- */
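
For reference, the deferred allocation described in the commit message lives
in the netdev notifier in devmap.c. It ends up looking roughly like the
sketch below (simplified; the allocation call and error handling are
approximated here, not the actual patch code):

	static int dev_map_notification(struct notifier_block *notifier,
					unsigned long event, void *ptr)
	{
		struct net_device *netdev = netdev_notifier_info_to_dev(ptr);

		switch (event) {
		case NETDEV_REGISTER:
			/* ndo_xdp_xmit support is not known when struct
			 * net_device is allocated, hence the deferral to
			 * NETDEV_REGISTER.
			 */
			if (!netdev->netdev_ops->ndo_xdp_xmit || netdev->xdp_bulkq)
				break;

			/* Freed in free_netdev(), so no extra RCU callback
			 * is needed on NETDEV_UNREGISTER.
			 */
			netdev->xdp_bulkq = kcalloc(nr_cpu_ids,
						    sizeof(*netdev->xdp_bulkq),
						    GFP_KERNEL);
			if (!netdev->xdp_bulkq)
				return notifier_from_errno(-ENOMEM);
			break;
		default:
			break;
		}

		return NOTIFY_OK;
	}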

-Toke
