Date: Thu, 14 Nov 2013 16:22:32 -0500
From: Chris Metcalf <cmetcalf@...era.com>
To: Pete Zaitcev <zaitcev@...hat.com>
CC: Evgeniy Polyakov <zbr@...emap.net>, Erik Jacobson <erikj@....com>,
	Andrew Morton <akpm@...l.org>, Matt Helsley <matthltc@...ibm.com>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] connector: improved unaligned access error fix

On 11/14/2013 2:45 PM, Pete Zaitcev wrote:
> On Thu, 14 Nov 2013 12:09:21 -0500
> Chris Metcalf <cmetcalf@...era.com> wrote:
>
>> -	__u8 buffer[CN_PROC_MSG_SIZE];
>> +	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
>
>> -	msg = (struct cn_msg *)buffer;
>> +	msg = buffer_to_cn_msg(buffer);
>>  	ev = (struct proc_event *)msg->data;
>>  	memset(&ev->event_data, 0, sizeof(ev->event_data));
>
> Why is memset(buffer, 0, CN_PROC_MSG_SIZE) not acceptable?

That would be fine from a correctness point of view; I'm happy either way.

My patch nominally has better performance, for what that's worth, since the
memset() call is for a smaller range (24 bytes instead of 60).  It also
avoids the need for put_unaligned(), which can still be slower even on
platforms that allow unaligned stores.

I can certainly do a v2 with the larger memset() instead if that's the
consensus.

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com