Message-ID: <1286483338.2271.34.camel@achroite.uk.solarflarecom.com>
Date: Thu, 07 Oct 2010 21:28:58 +0100
From: Ben Hutchings <bhutchings@...arflare.com>
To: Kees Cook <kees.cook@...onical.com>
Cc: linux-kernel@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Jeff Garzik <jgarzik@...hat.com>,
Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>,
netdev@...r.kernel.org
Subject: Re: [PATCH] net: clear heap allocation for ETHTOOL_GRXCLSRLALL
On Thu, 2010-10-07 at 13:03 -0700, Kees Cook wrote:
> Calling ETHTOOL_GRXCLSRLALL with a large rule_cnt will allocate kernel
> heap without clearing it. For the one driver (niu) that implements it,
> it will leave the unused portion of heap unchanged and copy the full
> contents back to userspace.
>
> Cc: stable@...nel.org
> Signed-off-by: Kees Cook <kees.cook@...onical.com>
Acked-by: Ben Hutchings <bhutchings@...arflare.com>
Should have spotted this myself. :-(
Ben.
> ---
> net/core/ethtool.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/net/core/ethtool.c b/net/core/ethtool.c
> index 7a85367..4016ac6 100644
> --- a/net/core/ethtool.c
> +++ b/net/core/ethtool.c
> @@ -348,7 +348,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
> if (info.cmd == ETHTOOL_GRXCLSRLALL) {
> if (info.rule_cnt > 0) {
> if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32))
> - rule_buf = kmalloc(info.rule_cnt * sizeof(u32),
> + rule_buf = kzalloc(info.rule_cnt * sizeof(u32),
> GFP_USER);
> if (!rule_buf)
> return -ENOMEM;
> --
> 1.7.1
>
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.