Message-ID: <cbf0fd9611c76e557b759ecbecf6bcf712b44f55.camel@codeconstruct.com.au>
Date: Tue, 26 Oct 2021 19:34:16 +0800
From: Jeremy Kerr <jk@...econstruct.com.au>
To: David Laight <David.Laight@...LAB.COM>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Matt Johnston <matt@...econstruct.com.au>,
Eugene Syromiatnikov <esyr@...hat.com>
Subject: Re: [PATCH net-next v6] mctp: Implement extended addressing
Hi David,
> > +struct sockaddr_mctp_ext {
> > + struct sockaddr_mctp smctp_base;
> > + int smctp_ifindex;
> > + __u8 smctp_halen;
> > + __u8 __smctp_pad0[3];
> > + __u8 smctp_haddr[MAX_ADDR_LEN];
> > +};
>
> You'd be better off 8-byte aligning smctp_haddr.
> I also suspect that always copying the 32 bytes will be faster
> and generate less code than the memset() + memcpy().
The padding here is more about avoiding layout variations between ABIs
than about performance.
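
For illustration only (not from the patch): with the explicit pad
bytes, smctp_haddr lands at the same offset from smctp_halen on every
ABI, which a consumer of the uapi header could sanity-check with
something like:

  #include <stddef.h>
  #include <linux/mctp.h>

  /* smctp_halen (1 byte) plus __smctp_pad0 (3 bytes) put smctp_haddr
   * at a fixed offset, independent of the ABI's padding rules. */
  _Static_assert(offsetof(struct sockaddr_mctp_ext, smctp_haddr) ==
                 offsetof(struct sockaddr_mctp_ext, smctp_halen) + 4,
                 "unexpected sockaddr_mctp_ext layout");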
The largest hardware address size we currently need (for the i2c
transport) is... 1 byte. If we were to implement the PCIe VDM binding
for MCTP, that would then be the largest, at 2 bytes. If anyone's crazy
enough to do MCTP over ethernet, we're still only at 6.
So, we're a long way off needing to optimise for 8-byte-aligned
accesses here; I don't think the extra padding would be worth it.
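
As a purely hypothetical sketch of the userspace side (the EID,
message type, ifindex and i2c address values below are made up for
illustration), a send over the i2c transport with a 1-byte hardware
address would look something like:

  #include <sys/socket.h>
  #include <linux/mctp.h>

  static ssize_t send_ext(int sd, const void *buf, size_t len)
  {
          struct sockaddr_mctp_ext addr = { 0 };
          int val = 1;

          /* opt in to extended addressing on this socket */
          if (setsockopt(sd, SOL_MCTP, MCTP_OPT_ADDR_EXT,
                         &val, sizeof(val)))
                  return -1;

          addr.smctp_base.smctp_family = AF_MCTP;
          addr.smctp_base.smctp_addr.s_addr = 8;  /* example EID */
          addr.smctp_base.smctp_type = 1;         /* example message type */
          addr.smctp_base.smctp_tag = MCTP_TAG_OWNER;
          addr.smctp_ifindex = 2;                 /* example interface */
          addr.smctp_halen = 1;                   /* i2c: one address byte */
          addr.smctp_haddr[0] = 0x1d;             /* example i2c address */

          return sendto(sd, buf, len, 0,
                        (struct sockaddr *)&addr, sizeof(addr));
  }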
Cheers,
Jeremy