Message-ID: <20140422120006.GA1960@order.stressinduktion.org>
Date: Tue, 22 Apr 2014 14:00:06 +0200
From: Hannes Frederic Sowa <hannes@...essinduktion.org>
To: Alexey Preobrazhensky <preobr@...gle.com>
Cc: netdev@...r.kernel.org, Kostya Serebryany <kcc@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, yoshfuji@...ux-ipv6.org,
maze@...gle.com, edumazet@...gle.com, brutus@...gle.com
Subject: Re: Potential out-of-bounds access in ip6_finish_output2
On Mon, Apr 21, 2014 at 05:22:05PM +0400, Alexey Preobrazhensky wrote:
> Hi,
>
> I’m working on AddressSanitizer[1] -- a tool that detects
> use-after-free and out-of-bounds bugs in the kernel.
>
> I’ve encountered a heap-buffer-overflow in ip6_finish_output2 in linux
> kernel 3.14 (revision 455c6fdbd219161bd09b1165f11699d6d73de11c). A
> similar problem was reported earlier[2] and resulted in a patch[3],
> but we’ve hit this report again, so it seems the issue wasn’t fixed,
> or there is another issue. The offending code in
> include/net/neighbour.h:401 is:
>
> 	do {
> 		seq = read_seqbegin(&hh->hh_lock);
> 		hh_len = hh->hh_len;
> 		if (likely(hh_len <= HH_DATA_MOD)) {
> 			/* this is inlined by gcc */
> /* 401: */		memcpy(skb->data - HH_DATA_MOD, hh->hh_data, HH_DATA_MOD);
> 		} else {
> 			int hh_alen = HH_DATA_ALIGN(hh_len);
>
> 			memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
> 		}
> 	} while (read_seqretry(&hh->hh_lock, seq));
>
> This heap-buffer-overflow was triggered under the trinity syscall fuzzer,
> so there is no reproducer. The report, followed by the crash, is included.
>
> It would be great if someone familiar with the code took time to look
> into this report.
Thanks for the report!
Looking only at the report, I doubt it is the same problem. I still have a
lot of mail to catch up on, but I hope to audit the code in the next few days.
Thanks,
Hannes
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html