Message-ID: <20200801144030.GM24045@ziepe.ca>
Date:   Sat, 1 Aug 2020 11:40:30 -0300
From:   Jason Gunthorpe <jgg@...pe.ca>
To:     Dan Carpenter <dan.carpenter@...cle.com>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Leon Romanovsky <leon@...nel.org>,
        Peilin Ye <yepeilin.cs@...il.com>,
        Santosh Shilimkar <santosh.shilimkar@...cle.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Arnd Bergmann <arnd@...db.de>,
        linux-kernel-mentees@...ts.linuxfoundation.org,
        netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
        rds-devel@....oracle.com, linux-kernel@...r.kernel.org
Subject: Re: [Linux-kernel-mentees] [PATCH net] rds: Prevent kernel-infoleak
 in rds_notify_queue_get()

On Sat, Aug 01, 2020 at 11:00:26AM +0300, Dan Carpenter wrote:
> > Without an actual example where this doesn't work right it is hard to
> > say anything more..
> 
> Here is the example that set off the recent patches:
> 
> https://lkml.org/lkml/2020/7/27/199

Oh, that is something completely different. This thread was talking
about '= {}'.

From a C11 perspective the code at the above link is complete
initialization of an aggregate and does not trigger the rule requiring
that padding be zeroed.

C11 only zeros padding during *partial* initialization of an aggregate.

ie this does not zero padding:

#include <stddef.h>

void test(void)
{
        extern void copy(const void *ptr, size_t len);
	struct rds_rdma_notify {
		unsigned long user_token;
		unsigned char status __attribute__((aligned(32)));
	} foo = {1, 1};

	// Padding NOT zeroed
	copy(&foo, sizeof(foo));
}

While the addition of an xx member, making it a partial
initialization, does zero the padding:

#include <stddef.h>

void test(void)
{
        extern void copy(const void *ptr, size_t len);
        struct rds_rdma_notify {
                unsigned long user_token;
                unsigned char status __attribute__((aligned(32)));
                unsigned long xx;
        } foo = {1, 1};

        // Padding IS zeroed (partial initialization)
        copy(&foo, sizeof(foo));
}

(and godbolt confirms this on a wide range of compilers)
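
For anyone who wants to poke at this locally rather than on godbolt,
here is a rough standalone sketch (mine, not from the patches under
discussion): dirty the stack first, then do the complete
initialization and dump the raw bytes. Whether the padding actually
comes out as 0xaa depends on the compiler and flags; the point is only
that nothing obliges it to be zero.

#include <stdio.h>
#include <string.h>

struct notify_like {
        unsigned long user_token;
        unsigned char status __attribute__((aligned(32)));
};

/* Dirty a chunk of stack so that padding the initializer never
 * touches is visibly non-zero if the frames happen to overlap. */
static void __attribute__((noinline)) poison_stack(void)
{
        volatile unsigned char junk[256];

        memset((void *)junk, 0xaa, sizeof(junk));
}

static void __attribute__((noinline)) show(void)
{
        struct notify_like foo = {1, 1};        /* complete initialization */
        const unsigned char *b = (const unsigned char *)&foo;
        size_t i;

        for (i = 0; i < sizeof(foo); i++)
                printf("%02x ", b[i]);
        printf("\n");
}

int main(void)
{
        poison_stack();
        show();         /* any 0xaa bytes are padding the initializer left alone */
        return 0;
}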

> The rest of these patches were based on static analysis from Smatch.
> They're all "theoretical" bugs based on the C standard but it's
> impossible to know if and when they'll turn into real life bugs.

Any patches replacing '= {}' (and usually '= {0}') with memset are not
fixing anything.

The C11 standard requires zeroing padding in these cases. It is just
useless churn and in some cases results in worse codegen.
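
To make that concrete (my own sketch, reusing the toy struct shape
from the examples above rather than the real uapi definition): under
that reading all three of the following leave the object fully zeroed,
padding included, so swapping the first two for the memset buys
nothing:

#include <string.h>

struct notify_like {
        unsigned long user_token;
        unsigned char status __attribute__((aligned(32)));
};

void zero_three_ways(void)
{
        extern void copy(const void *ptr, size_t len);

        struct notify_like a = {};      /* GNU/C23 empty initializer */
        struct notify_like b = { 0 };   /* fewer initializers than members: partial init */
        struct notify_like c;

        memset(&c, 0, sizeof(c));       /* same end result, sometimes worse codegen */

        copy(&a, sizeof(a));
        copy(&b, sizeof(b));
        copy(&c, sizeof(c));
}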

smatch should only warn about this when the aggregate initialization
is complete rather than partial.

Jason
