Message-ID: <000000000000dd2ff40597f6b812@google.com>
Date: Fri, 22 Nov 2019 14:13:22 -0800
From: syzbot <syzbot+56f9673bb4cdcbeb0e92@...kaller.appspotmail.com>
To: Alan Stern <stern@...land.harvard.edu>
Cc: arnd@...db.de, gregkh@...uxfoundation.org, jrdr.linux@...il.com,
keescook@...omium.org, kstewart@...uxfoundation.org,
linux-kernel@...r.kernel.org, linux-usb@...r.kernel.org,
stern@...land.harvard.edu, syzkaller-bugs@...glegroups.com,
tglx@...utronix.de, viro@...iv.linux.org.uk, zaitcev@...hat.com
Subject: Re: Re: possible deadlock in mon_bin_vma_fault
> On Fri, 22 Nov 2019, Pete Zaitcev wrote:
>> > It would be more elegant to do the rp->mmap_active test before calling
>> > kcalloc and mon_alloc_buf. But of course that's a pretty minor thing.
>> Indeed, it feels wrong that so much work gets discarded. However, memory
>> allocations can block, right? At the same time, our main objective here
>> is to make sure that when a page fault happens, we fill in the page that
>> the VMA is intended to refer to, and not one that was re-allocated.
>> Therefore, I'm trying to avoid a situation where:
>> 1. thread A checks mmap_active, finds it at zero, and proceeds into
>>    the reallocation ioctl
>> 2. thread A sleeps in get_free_page()
>> 3. thread B runs mmap() and succeeds
>> 4. thread A obtains its pages and proceeds to substitute the buffer
>> 5. thread B (or any other) pagefaults and ends up with the new,
>>    unexpected page
>> The code is not pretty, but I don't see an alternative. Heck, I would
>> love for you to find more races if you can.
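>>
>> Schematically, the ordering I ended up with looks like this (just a
>> sketch to illustrate, using the field and function names from
>> mon_bin.c; error handling omitted, the real change is in the patch
>> below):
>>
>>	/* Both of these can sleep, so a concurrent mmap() can sneak in. */
>>	vec = kcalloc(size / CHUNK_SIZE, sizeof(struct mon_pgmap), GFP_KERNEL);
>>	ret = mon_alloc_buff(vec, size / CHUNK_SIZE);
>>
>>	mutex_lock(&rp->fetch_lock);
>>	spin_lock_irqsave(&rp->b_lock, flags);
>>	if (rp->mmap_active) {
>>		/* An mmap() won the race while we slept: drop our new
>>		 * pages and give up; the old, mapped buffer stays. */
>>		mon_free_buff(vec, size / CHUNK_SIZE);
>>		kfree(vec);
>>		ret = -EBUSY;
>>	} else {
>>		/* Nobody has the buffer mapped: safe to substitute it. */
>>		mon_free_buff(rp->b_vec, rp->b_size / CHUNK_SIZE);
>>		kfree(rp->b_vec);
>>		rp->b_vec = vec;
>>		rp->b_size = size;
>>	}
>>	spin_unlock_irqrestore(&rp->b_lock, flags);
>>	mutex_unlock(&rp->fetch_lock);
>>
>> The point is that the mmap_active re-check happens under b_lock, after
>> the allocations that could have slept.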
> The alternative is to have the routines for mmap() hold fetch_lock
> instead of b_lock. mmap() is allowed to sleep, so that would be okay.
> Then you would also hold fetch_lock while checking mmap_active and
> doing the memory allocations. That would prevent any races -- in your
> example above, thread A would acquire fetch_lock in step 1, so thread B
> would block in step 3 until step 4 was finished. Hence B would end up
> mapping the correct pages.
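>
> In rough outline, the idea is this (an untested sketch, not a real
> patch; error paths omitted):
>
>	/* ioctl side: check mmap_active and allocate under fetch_lock */
>	mutex_lock(&rp->fetch_lock);
>	if (rp->mmap_active) {
>		mutex_unlock(&rp->fetch_lock);
>		return -EBUSY;
>	}
>	vec = kcalloc(size / CHUNK_SIZE, sizeof(struct mon_pgmap), GFP_KERNEL);
>	ret = mon_alloc_buff(vec, size / CHUNK_SIZE);
>	/* swap the buffer in under b_lock, as your patch does */
>	mutex_unlock(&rp->fetch_lock);
>
>	/* mmap side: vm_ops->open/close run in process context and are
>	 * allowed to sleep, so a mutex is fine here */
>	static void mon_bin_vma_open(struct vm_area_struct *vma)
>	{
>		struct mon_reader_bin *rp = vma->vm_private_data;
>
>		mutex_lock(&rp->fetch_lock);
>		rp->mmap_active++;
>		mutex_unlock(&rp->fetch_lock);
>	}
>	/* mon_bin_vma_close() would do the same with mmap_active-- */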
> In practice, I don't see this being a routine problem. How often do
> multiple threads independently try to mmap the same usbmon buffer?
> Still, let's see how syzbot reacts to your current patch. The line below
> is how you ask syzbot to test a candidate patch.
> Alan Stern
> #syz test: linux-4.19.y f6e27dbb1afa
"linux-4.19.y" does not look like a valid git repo address.
> commit 5252eb4c8297fedbf1c5f1e67da44efe00e6ef6b
> Author: Pete Zaitcev <zaitcev@...ori.zaitcev.us>
> Date: Thu Nov 21 17:24:00 2019 -0600
> usb: Fix a deadlock in usbmon between mmap and read
> Signed-off-by: Pete Zaitcev <zaitcev@...hat.com>
> Reported-by: syzbot+56f9673bb4cdcbeb0e92@...kaller.appspotmail.com
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index ac2b4fcc265f..f48a23adbc35 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>
>  		mutex_lock(&rp->fetch_lock);
>  		spin_lock_irqsave(&rp->b_lock, flags);
> -		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> -		kfree(rp->b_vec);
> -		rp->b_vec = vec;
> -		rp->b_size = size;
> -		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> -		rp->cnt_lost = 0;
> +		if (rp->mmap_active) {
> +			mon_free_buff(vec, size/CHUNK_SIZE);
> +			kfree(vec);
> +			ret = -EBUSY;
> +		} else {
> +			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
> +			kfree(rp->b_vec);
> +			rp->b_vec = vec;
> +			rp->b_size = size;
> +			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
> +			rp->cnt_lost = 0;
> +		}
>  		spin_unlock_irqrestore(&rp->b_lock, flags);
>  		mutex_unlock(&rp->fetch_lock);
>  	}
> @@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>  static void mon_bin_vma_open(struct vm_area_struct *vma)
>  {
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active++;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  static void mon_bin_vma_close(struct vm_area_struct *vma)
>  {
> +	unsigned long flags;
> +
>  	struct mon_reader_bin *rp = vma->vm_private_data;
> +	spin_lock_irqsave(&rp->b_lock, flags);
>  	rp->mmap_active--;
> +	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
>  /*
> @@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
>  	unsigned long offset, chunk_idx;
>  	struct page *pageptr;
>
> -	mutex_lock(&rp->fetch_lock);
>  	offset = vmf->pgoff << PAGE_SHIFT;
> -	if (offset >= rp->b_size) {
> -		mutex_unlock(&rp->fetch_lock);
> +	if (offset >= rp->b_size)
>  		return VM_FAULT_SIGBUS;
> -	}
>  	chunk_idx = offset / CHUNK_SIZE;
>  	pageptr = rp->b_vec[chunk_idx].pg;
>  	get_page(pageptr);
> -	mutex_unlock(&rp->fetch_lock);
>  	vmf->page = pageptr;
>  	return 0;
>  }