Date:   Mon, 21 Feb 2022 09:02:38 -0800
From:   Stephen Hemminger <stephen@...workplumber.org>
To:     yoshfuji@...ux-ipv6.org, davem@...emloft.net, dsahern@...nel.org
Cc:     netdev@...r.kernel.org
Subject: Fw: [Bug 215629] New: heap overflow in net/ipv6/esp6.c



Begin forwarded message:

Date: Mon, 21 Feb 2022 16:52:26 +0000
From: bugzilla-daemon@...nel.org
To: stephen@...workplumber.org
Subject: [Bug 215629] New: heap overflow in net/ipv6/esp6.c


https://bugzilla.kernel.org/show_bug.cgi?id=215629

            Bug ID: 215629
           Summary: heap overflow in net/ipv6/esp6.c
           Product: Networking
           Version: 2.5
    Kernel Version: 5.17
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: Other
          Assignee: stephen@...workplumber.org
          Reporter: slipper.alive@...il.com
        Regression: No

I found a heap out-of-bounds write vulnerability in net/ipv6/esp6.c while reviewing
a syzkaller bug report
(https://syzkaller.appspot.com/bug?id=57375340ab81a369df5da5eb16cfcd4aef9dfb9d).
This bug could lead to privilege escalation.


The bug is caused by incorrect use of `skb_page_frag_refill()`
(https://github.com/torvalds/linux/blob/v5.17-rc3/net/core/sock.c#L2700).

/**
 * skb_page_frag_refill - check that a page_frag contains enough room
 * @sz: minimum size of the fragment we want to get
 * @pfrag: pointer to page_frag
 * @gfp: priority for memory allocation
 *
 * Note: While this allocator tries to use high order pages, there is
 * no guarantee that allocations succeed. Therefore, @sz MUST be
 * less or equal than PAGE_SIZE.
 */
bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
{
        if (pfrag->page) {
                if (page_ref_count(pfrag->page) == 1) {
                        pfrag->offset = 0;
                        return true;
                }
                if (pfrag->offset + sz <= pfrag->size)
                        return true;
                put_page(pfrag->page);
        }

        pfrag->offset = 0;
        if (SKB_FRAG_PAGE_ORDER &&
            !static_branch_unlikely(&net_high_order_alloc_disable_key)) {
                /* Avoid direct reclaim but allow kswapd to wake */
                pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
                                          __GFP_COMP | __GFP_NOWARN |
                                          __GFP_NORETRY,
                                          SKB_FRAG_PAGE_ORDER);
                if (likely(pfrag->page)) {
                        pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
                        return true;
                }
        }
        pfrag->page = alloc_page(gfp);
        if (likely(pfrag->page)) {
                pfrag->size = PAGE_SIZE;
                return true;
        }
        return false;
}
EXPORT_SYMBOL(skb_page_frag_refill);


The comment says the `sz` parameter must be less than or equal to PAGE_SIZE, but
notice that only the reuse check (`pfrag->offset + sz <= pfrag->size`) takes `sz`
into account; the two allocation paths and the page_ref_count == 1 reuse path
return true without comparing `sz` to the fragment size. The limit is not
enforced at the vulnerable call sites either:
https://github.com/torvalds/linux/blob/v5.17-rc3/net/ipv6/esp6.c#L512


int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
{
...
                        allocsize = ALIGN(tailen, L1_CACHE_BYTES);

                        spin_lock_bh(&x->lock);

                        if (unlikely(!skb_page_frag_refill(allocsize, pfrag, GFP_ATOMIC))) {
                                spin_unlock_bh(&x->lock);
                                goto cow;
                        }
...

and https://github.com/torvalds/linux/blob/v5.17-rc3/net/ipv6/esp6.c#L623

int esp6_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
{
...
        if (!esp->inplace) {
                int allocsize;
                struct page_frag *pfrag = &x->xfrag;

                allocsize = ALIGN(skb->data_len, L1_CACHE_BYTES);

                spin_lock_bh(&x->lock);
                if (unlikely(!skb_page_frag_refill(allocsize, pfrag, GFP_ATOMIC))) {
                        spin_unlock_bh(&x->lock);
                        goto error_free;
                }


The `allocsize` here can be manipulated via the `tfcpad` field of the `xfrm_state`:


static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
{
        int alen;
        int blksize;
        struct ip_esp_hdr *esph;
        struct crypto_aead *aead;
        struct esp_info esp;

        esp.inplace = true;

        esp.proto = *skb_mac_header(skb);
        *skb_mac_header(skb) = IPPROTO_ESP;

        /* skb is pure payload to encrypt */

        aead = x->data;
        alen = crypto_aead_authsize(aead);

        esp.tfclen = 0;
        if (x->tfcpad) {
                struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
                u32 padto;

                padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
                if (skb->len < padto)
                        esp.tfclen = padto - skb->len;
        }
        blksize = ALIGN(crypto_aead_blocksize(aead), 4);
        esp.clen = ALIGN(skb->len + 2 + esp.tfclen, blksize);
        esp.plen = esp.clen - skb->len - esp.tfclen;
        esp.tailen = esp.tfclen + esp.plen + alen;



If `tfcpad` is set to a value greater than 0x8000 (the 32 KiB fragment size that
skb_page_frag_refill() returns with 4 KiB pages and SKB_FRAG_PAGE_ORDER == 3),
the padding bytes are written past the end of the allocated page fragment,
corrupting the adjacent page.
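To make the arithmetic concrete, here is a minimal userspace sketch (an
illustration only; the constants mirror a common configuration with 4 KiB
pages and SKB_FRAG_PAGE_ORDER == get_order(32768) == 3, and the plen/alen
terms of tailen are omitted):

#include <stdio.h>

/* Userspace stand-ins for the kernel values involved. */
#define PAGE_SIZE           4096u
#define SKB_FRAG_PAGE_ORDER 3u
#define FRAG_SIZE           (PAGE_SIZE << SKB_FRAG_PAGE_ORDER) /* 32768 */
#define L1_CACHE_BYTES      64u
#define ALIGN(x, a)         (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned int tfcpad  = 0x9000;            /* attacker-chosen, > 0x8000 */
        unsigned int skb_len = 100;               /* small payload */
        unsigned int tfclen  = tfcpad - skb_len;  /* padto - skb->len */
        unsigned int tailen  = tfclen;            /* + plen + alen, omitted */
        unsigned int allocsize = ALIGN(tailen, L1_CACHE_BYTES);

        /* skb_page_frag_refill() still returns true with a 32 KiB
         * fragment, so writing tailen bytes of padding runs past the
         * end of the fragment into the adjacent page. */
        if (allocsize > FRAG_SIZE)
                printf("out of bounds by %u bytes\n", allocsize - FRAG_SIZE);
        return 0;
}

With these numbers the write runs 4032 bytes past the fragment.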


Triggering the bug requires CAP_NET_ADMIN. It appears to have been introduced
by commit 03e2a30f6a27e2f3e5283b777f6ddd146b38c738. The same bug exists in the
IPv4 code (net/ipv4/esp4.c), introduced by commit
cac2661c53f35cbe651bef9b07026a5a05ab8ce0.
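
One possible hardening, sketched below (my suggestion, not necessarily how
upstream will fix it), is to honor the skb_page_frag_refill() contract at the
call sites and fall back to the existing slow path whenever the aligned size
could exceed PAGE_SIZE:

        /* In esp6_output_head() (analogous checks would be needed in
         * esp6_output_tail() and in the esp4 counterparts): take the
         * copy path instead of the page frag whenever the documented
         * sz <= PAGE_SIZE contract would be violated. */
        if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
            ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
                goto cow;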

