Message-ID: <20240715112048.2pap6psmbb5f6urg@quack3>
Date: Mon, 15 Jul 2024 13:20:48 +0200
From: Jan Kara <jack@...e.cz>
To: Dan Carpenter <dan.carpenter@...aro.org>
Cc: Jan Kara <jack@...e.cz>, linux-ext4@...r.kernel.org
Subject: Re: [bug report] ext4: Convert data=journal writeback to use
ext4_writepages()
On Sat 08-06-24 17:25:20, Dan Carpenter wrote:
> Hello Jan Kara,
>
> Commit 3f079114bf52 ("ext4: Convert data=journal writeback to use
> ext4_writepages()") from Feb 28, 2023 (linux-next), leads to the
> following Smatch static checker warning:
>
> fs/ext4/inode.c:2478 mpage_prepare_extent_to_map()
> error: we previously assumed 'handle' could be null (see line 2417)
Found this in my inbox :). Yeah, Smatch is getting confused by the different
ways we can test for journalled data. At the beginning of
mpage_prepare_extent_to_map() we have:
	if (ext4_should_journal_data(mpd->inode)) {
		handle = ext4_journal_start(mpd->inode, EXT4_HT_WRITE_PAGE,
					    bpp);
		...
	}
so 'handle' is set only if we are journalling data. Later in the function
we have:
	if (handle) {
		err = ext4_journal_ensure_credits(handle, bpp, 0);
		...
	}
So if we are journalling data, we make sure we have enough credits in the
handle for the next page. And then we have:
	if (folio_test_checked(folio)) {
		err = mpage_journal_page_buffers(handle, mpd, folio);
		...
	}
which uses the handle. The trick here is that ext4 sets the 'Checked' folio
flag only for inodes with data journalling. So if the Checked flag is set,
we must have started the handle at the beginning of
mpage_prepare_extent_to_map(), and the dereference is safe.
Honza
> fs/ext4/inode.c
> 2362 static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
> 2363 {
> 2364 struct address_space *mapping = mpd->inode->i_mapping;
> 2365 struct folio_batch fbatch;
> 2366 unsigned int nr_folios;
> 2367 pgoff_t index = mpd->first_page;
> 2368 pgoff_t end = mpd->last_page;
> 2369 xa_mark_t tag;
> 2370 int i, err = 0;
> 2371 int blkbits = mpd->inode->i_blkbits;
> 2372 ext4_lblk_t lblk;
> 2373 struct buffer_head *head;
> 2374 handle_t *handle = NULL;
> 2375 int bpp = ext4_journal_blocks_per_page(mpd->inode);
> 2376
> 2377 if (mpd->wbc->sync_mode == WB_SYNC_ALL || mpd->wbc->tagged_writepages)
> 2378 tag = PAGECACHE_TAG_TOWRITE;
> 2379 else
> 2380 tag = PAGECACHE_TAG_DIRTY;
> 2381
> 2382 mpd->map.m_len = 0;
> 2383 mpd->next_page = index;
> 2384 if (ext4_should_journal_data(mpd->inode)) {
> 2385 handle = ext4_journal_start(mpd->inode, EXT4_HT_WRITE_PAGE,
> 2386 bpp);
> 2387 if (IS_ERR(handle))
> 2388 return PTR_ERR(handle);
> 2389 }
> 2390 folio_batch_init(&fbatch);
> 2391 while (index <= end) {
> 2392 nr_folios = filemap_get_folios_tag(mapping, &index, end,
> 2393 tag, &fbatch);
> 2394 if (nr_folios == 0)
> 2395 break;
> 2396
> 2397 for (i = 0; i < nr_folios; i++) {
> 2398 struct folio *folio = fbatch.folios[i];
> 2399
> 2400 /*
> 2401 * Accumulated enough dirty pages? This doesn't apply
> 2402 * to WB_SYNC_ALL mode. For integrity sync we have to
> 2403 * keep going because someone may be concurrently
> 2404 * dirtying pages, and we might have synced a lot of
> 2405 * newly appeared dirty pages, but have not synced all
> 2406 * of the old dirty pages.
> 2407 */
> 2408 if (mpd->wbc->sync_mode == WB_SYNC_NONE &&
> 2409 mpd->wbc->nr_to_write <=
> 2410 mpd->map.m_len >> (PAGE_SHIFT - blkbits))
> 2411 goto out;
> 2412
> 2413 /* If we can't merge this page, we are done. */
> 2414 if (mpd->map.m_len > 0 && mpd->next_page != folio->index)
> 2415 goto out;
> 2416
> 2417 if (handle) {
> ^^^^^^
> Checked for NULL
>
> 2418 err = ext4_journal_ensure_credits(handle, bpp,
> 2419 0);
> 2420 if (err < 0)
> 2421 goto out;
> 2422 }
> 2423
> 2424 folio_lock(folio);
> 2425 /*
> 2426 * If the page is no longer dirty, or its mapping no
> 2427 * longer corresponds to inode we are writing (which
> 2428 * means it has been truncated or invalidated), or the
> 2429 * page is already under writeback and we are not doing
> 2430 * a data integrity writeback, skip the page
> 2431 */
> 2432 if (!folio_test_dirty(folio) ||
> 2433 (folio_test_writeback(folio) &&
> 2434 (mpd->wbc->sync_mode == WB_SYNC_NONE)) ||
> 2435 unlikely(folio->mapping != mapping)) {
> 2436 folio_unlock(folio);
> 2437 continue;
> 2438 }
> 2439
> 2440 folio_wait_writeback(folio);
> 2441 BUG_ON(folio_test_writeback(folio));
> 2442
> 2443 /*
> 2444 * Should never happen but for buggy code in
> 2445 * other subsystems that call
> 2446 * set_page_dirty() without properly warning
> 2447 * the file system first. See [1] for more
> 2448 * information.
> 2449 *
> 2450 * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
> 2451 */
> 2452 if (!folio_buffers(folio)) {
> 2453 ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", folio->index);
> 2454 folio_clear_dirty(folio);
> 2455 folio_unlock(folio);
> 2456 continue;
> 2457 }
> 2458
> 2459 if (mpd->map.m_len == 0)
> 2460 mpd->first_page = folio->index;
> 2461 mpd->next_page = folio_next_index(folio);
> 2462 /*
> 2463 * Writeout when we cannot modify metadata is simple.
> 2464 * Just submit the page. For data=journal mode we
> 2465 * first handle writeout of the page for checkpoint and
> 2466 * only after that handle delayed page dirtying. This
> 2467 * makes sure current data is checkpointed to the final
> 2468 * location before possibly journalling it again which
> 2469 * is desirable when the page is frequently dirtied
> 2470 * through a pin.
> 2471 */
> 2472 if (!mpd->can_map) {
> 2473 err = mpage_submit_folio(mpd, folio);
> 2474 if (err < 0)
> 2475 goto out;
> 2476 /* Pending dirtying of journalled data? */
> 2477 if (folio_test_checked(folio)) {
> --> 2478 err = mpage_journal_page_buffers(handle,
> ^^^^^^
> Unchecked dereferenced inside the function.
>
> 2479 mpd, folio);
> 2480 if (err < 0)
> 2481 goto out;
> 2482 mpd->journalled_more_data = 1;
> 2483 }
> 2484 mpage_folio_done(mpd, folio);
> 2485 } else {
> 2486 /* Add all dirty buffers to mpd */
> 2487 lblk = ((ext4_lblk_t)folio->index) <<
> 2488 (PAGE_SHIFT - blkbits);
> 2489 head = folio_buffers(folio);
> 2490 err = mpage_process_page_bufs(mpd, head, head,
> 2491 lblk);
> 2492 if (err <= 0)
> 2493 goto out;
> 2494 err = 0;
> 2495 }
> 2496 }
> 2497 folio_batch_release(&fbatch);
> 2498 cond_resched();
> 2499 }
> 2500 mpd->scanned_until_end = 1;
> 2501 if (handle)
> 2502 ext4_journal_stop(handle);
> 2503 return 0;
> 2504 out:
> 2505 folio_batch_release(&fbatch);
> 2506 if (handle)
> 2507 ext4_journal_stop(handle);
> 2508 return err;
> 2509 }
>
> regards,
> dan carpenter
>
--
Jan Kara <jack@...e.com>
SUSE Labs, CR