Message-ID: <20210615080618.GF29751@quack2.suse.cz>
Date: Tue, 15 Jun 2021 10:06:18 +0200
From: Jan Kara <jack@...e.cz>
To: Ming Lei <ming.lei@...hat.com>
Cc: Ingo Franzki <ifranzki@...ux.ibm.com>, Karel Zak <kzak@...hat.com>,
Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
Juergen Christ <jchrist@...ux.ibm.com>, Jan Kara <jack@...e.cz>
Subject: Re: loop_set_block_size: loop0 () has still dirty pages (nrpages=2)
On Tue 15-06-21 06:37:25, Ming Lei wrote:
> On Mon, Jun 14, 2021 at 09:35:30AM +0200, Ingo Franzki wrote:
> > On 10.06.2021 16:45, Ming Lei wrote:
> > > On Tue, Jun 08, 2021 at 02:01:29PM +0200, Ingo Franzki wrote:
> > >> Hi all,
> > >>
> > >> we occasionally encounter a problem when setting up a loop device in one of our automated testcases.
> > >>
> > >> We set up a loop device as follows:
> > >>
> > >> # dd if=/dev/zero of=/var/tmp/loopbackfile1.img bs=1M count=2500 status=none
> > >> # losetup --sector-size 4096 -fP --show /var/tmp/loopbackfile1.img
> > >>
> > >> This works fine most of the time, but on the rare occasions when it fails, we get 'losetup: /var/tmp/loopbackfile1.img: failed to set up loop device: Resource temporarily unavailable'.
> > >>
> > >> I am sure that no other loop device is currently defined, so we don't run out of loop devices.
> > >>
> > >> We also see the following message in the syslog when the error occurs:
> > >>
> > >> loop_set_block_size: loop0 () has still dirty pages (nrpages=2)
> > >>
> > >> The nrpages number varies from time to time.
> > >>
> > >> "Resource temporarily unavailable" is EAGAIN, and function loop_set_block_size() in drivers/block/loop.c returns this after printing the syslog message via pr_warn:
> > >>
> > >> static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
> > >> {
> > >>         int err = 0;
> > >>
> > >>         if (lo->lo_state != Lo_bound)
> > >>                 return -ENXIO;
> > >>
> > >>         err = loop_validate_block_size(arg);
> > >>         if (err)
> > >>                 return err;
> > >>
> > >>         if (lo->lo_queue->limits.logical_block_size == arg)
> > >>                 return 0;
> > >>
> > >>         sync_blockdev(lo->lo_device);
> > >>         invalidate_bdev(lo->lo_device);
> > >>
> > >>         blk_mq_freeze_queue(lo->lo_queue);
> > >>
> > >>         /* invalidate_bdev should have truncated all the pages */
> > >>         if (lo->lo_device->bd_inode->i_mapping->nrpages) {
> > >>                 err = -EAGAIN;
> > >>                 pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",
> > >>                         __func__, lo->lo_number, lo->lo_file_name,
> > >>                         lo->lo_device->bd_inode->i_mapping->nrpages);
> > >>                 goto out_unfreeze;
> > >>         }
> > >>
> > >>         blk_queue_logical_block_size(lo->lo_queue, arg);
> > >>         blk_queue_physical_block_size(lo->lo_queue, arg);
> > >>         blk_queue_io_min(lo->lo_queue, arg);
> > >>         loop_update_dio(lo);
> > >> out_unfreeze:
> > >>         blk_mq_unfreeze_queue(lo->lo_queue);
> > >>
> > >>         return err;
> > >> }
> > >>
> > >> So it looks like invalidate_bdev() did not actually truncate all the pages under some circumstances.
> > >>
> > >> The problem only happens when '--sector-size 4096' is specified; with the default sector size it always works. I guess losetup does not call loop_set_block_size() in the default case.
> > >>
> > >> The loop0 device has certainly been used by other testcases before, most likely with the default block size. But at the time of this run, no loop device was active (losetup showed nothing).
> > >>
> > >> Anyone have an idea what goes wrong here?
> > >
> > > It returns '-EAGAIN' to ask userspace to try again.
> > >
> > > I understand loop_set_block_size() doesn't prevent the page cache of this
> > > loop disk from being dirtied, so it isn't strange that
> > > lo_device->bd_inode->i_mapping->nrpages is non-zero after sync_blockdev()
> > > & invalidate_bdev() on loop.
> > >
> >
> > OK, that makes sense from the kernel perspective.
>
> We might improve this code by holding ->i_rwsem / mapping->invalidate_lock in
> loop_set_block_size() to prevent pages from being dirtied anew, but this still
> can't guarantee that i_mapping->nrpages becomes 0 after syncing &
> invalidating the bdev. Or maybe replace invalidate_bdev() with truncate_bdev_range().
i_rwsem won't be enough because even racing with reads into the bdev page cache
(which is what I suspect is happening here) will cause the EAGAIN error, and
reads are not protected by i_rwsem. But after the invalidate_lock work
lands, we should have enough to implement an atomic (wrt any page cache
operation) flush & invalidate sequence for bdevs.
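
[Editorial note: since the kernel returns EAGAIN precisely so that userspace retries, a simple workaround for the reported testcase failure is a retry wrapper around losetup. The `retry` helper below and its retry count are illustrative, not from the thread.]

```shell
#!/bin/sh
# Retry a command up to N times with a short delay between attempts,
# for transient EAGAIN-style failures like the losetup one above.
retry() {
    tries=$1; shift
    i=0
    while ! "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}

# Example (requires root and the image file from the report):
# retry 5 losetup --sector-size 4096 -fP --show /var/tmp/loopbackfile1.img
```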
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR