Message-ID: <20191122145243.6ece9bed@suzdal.zaitcev.lan>
Date:   Fri, 22 Nov 2019 14:52:43 -0600
From:   Pete Zaitcev <zaitcev@...hat.com>
To:     Alan Stern <stern@...land.harvard.edu>
Cc:     syzbot <syzbot+56f9673bb4cdcbeb0e92@...kaller.appspotmail.com>,
        <arnd@...db.de>, <gregkh@...uxfoundation.org>,
        <jrdr.linux@...il.com>, <keescook@...omium.org>,
        <kstewart@...uxfoundation.org>,
        Kernel development list <linux-kernel@...r.kernel.org>,
        USB list <linux-usb@...r.kernel.org>,
        <syzkaller-bugs@...glegroups.com>, <tglx@...utronix.de>,
        <viro@...iv.linux.org.uk>, zaitcev@...hat.com
Subject: Re: possible deadlock in mon_bin_vma_fault

On Fri, 22 Nov 2019 10:27:10 -0500 (EST)
Alan Stern <stern@...land.harvard.edu> wrote:

> As Dmitry mentioned, you should put the Reported-by: line from the
> original syzbot bug report (see
> <https://marc.info/?l=linux-usb&m=153601206710985&w=2>) in the patch.

Thanks, got it. I also dropped all the cosmetic changes.

> >  		mutex_lock(&rp->fetch_lock);
> >  		spin_lock_irqsave(&rp->b_lock, flags);
> > -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> > -		kfree(rp->b_vec);
> > -		rp->b_vec  = vec;
> > -		rp->b_size = size;
> > -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> > -		rp->cnt_lost = 0;
> > +		if (rp->mmap_active) {
> > +			mon_free_buff(vec, size/CHUNK_SIZE);
> > +			kfree(vec);
> > +			ret = -EBUSY;  
> 
> It would be more elegant to do the rp->mmap_active test before calling
> kcalloc and mon_alloc_buff.  But of course that's a pretty minor thing.

Indeed, it feels wrong that so much work gets discarded. However, memory
allocations can block, right? At the same time, our main objective here is
to make sure that when a page fault happens, we fill in the page that the
VMA is intended to refer to, and not one that was re-allocated. Therefore,
I'm trying to avoid a situation like the following (sketched in code right
after the list):

1. thread A checks mmap_active, finds it at zero and proceeds into the
reallocation ioctl
2. thread A sleeps in get_free_page()
3. thread B runs mmap() and succeeds
4. thread A obtains its pages and proceeds to substitute the buffer
5. thread B (or any other thread) page-faults and ends up with the new, unexpected page
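
For illustration only, here is a minimal userspace model of that
interleaving, with pthreads standing in for the kernel primitives.
Every name in it (buf_state, realloc_buf, do_mmap) is made up for the
sketch; only the ordering mirrors what mon_bin.c does:

/*
 * Userspace sketch of the race above: "allocate first, then check
 * mmap_active under the lock, roll back if a mapping appeared".
 * Build with: gcc -pthread race-sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct buf_state {
	pthread_mutex_t lock;	/* stands in for rp->b_lock */
	int mmap_active;	/* count of live mappings */
	char *vec;		/* stands in for rp->b_vec */
};

/* Thread A: the reallocation ioctl. */
static void *realloc_buf(void *arg)
{
	struct buf_state *st = arg;
	char *vec;

	/*
	 * Allocate outside the lock, because allocation may block
	 * (steps 1-2 of the race).  Any mmap_active value observed
	 * before this point cannot be trusted.
	 */
	vec = malloc(4096);
	sleep(1);		/* simulate blocking in get_free_page() */

	pthread_mutex_lock(&st->lock);
	if (st->mmap_active) {
		/* A mapping appeared while we slept: discard our work. */
		free(vec);
		fprintf(stderr, "realloc refused: buffer is mapped\n");
	} else {
		/* Safe: no VMA can be referring to the old buffer. */
		free(st->vec);
		st->vec = vec;
	}
	pthread_mutex_unlock(&st->lock);
	return NULL;
}

/* Thread B: mmap() succeeding in the window (step 3). */
static void *do_mmap(void *arg)
{
	struct buf_state *st = arg;

	pthread_mutex_lock(&st->lock);
	st->mmap_active++;
	pthread_mutex_unlock(&st->lock);
	return NULL;
}

int main(void)
{
	struct buf_state st = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.mmap_active = 0,
		.vec = malloc(4096),
	};
	pthread_t a, b;

	pthread_create(&a, NULL, realloc_buf, &st);
	pthread_create(&b, NULL, do_mmap, &st);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	free(st.vec);
	return 0;
}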

The code is not pretty, but I don't see an alternative. Heck, I would
love you to find more races if you can.

-- Pete

commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
Author: Pete Zaitcev <zaitcev@...ori.zaitcev.us>
Date:   Thu Nov 21 17:24:00 2019 -0600

    usb: Fix a deadlock in usbmon between mmap and read
    
    Signed-off-by: Pete Zaitcev <zaitcev@...hat.com>
    Reported-by: syzbot+56f9673bb4cdcbeb0e92@...kaller.appspotmail.com

diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index ac2b4fcc265f..f48a23adbc35 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec  = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec  = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 		}
@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }
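
For context, the lock-order cycle that the mon_bin_vma_fault() hunk
above breaks looks roughly like this; it is a reconstruction from the
hunks and the subject line, not a quote from the syzbot report:

  task in read()/ioctl()                task handling a page fault
  ----------------------                ---------------------------
  mutex_lock(&rp->fetch_lock)           down_read(&mm->mmap_sem)
  copy_to_user() faults on a            mon_bin_vma_fault()
    usbmon mapping, needs mmap_sem        needs fetch_lock

With fetch_lock gone from the fault handler, the handler only reads
rp->b_vec, and the ioctl path now guarantees, under b_lock, that
b_vec is never replaced while mmap_active is nonzero.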
