Message-Id: <20170126115819.58875-29-kirill.shutemov@linux.intel.com>
Date: Thu, 26 Jan 2017 14:58:10 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: "Theodore Ts'o" <tytso@....edu>,
	Andreas Dilger <adilger.kernel@...ger.ca>,
	Jan Kara <jack@...e.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
	Hugh Dickins <hughd@...gle.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Vlastimil Babka <vbabka@...e.cz>,
	Matthew Wilcox <willy@...radead.org>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	linux-ext4@...r.kernel.org,
	linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	linux-block@...r.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCHv6 28/37] ext4: make ext4_block_write_begin() aware about huge pages

It simply matches changes to __block_write_begin_int().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
 fs/ext4/inode.c | 35 +++++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 7e65a5b78cf1..3eae2d058fd0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1093,9 +1093,8 @@ int do_journal_get_write_access(handle_t *handle,
 static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 				  get_block_t *get_block)
 {
-	unsigned from = pos & (PAGE_SIZE - 1);
-	unsigned to = from + len;
-	struct inode *inode = page->mapping->host;
+	unsigned from, to;
+	struct inode *inode = page_mapping(page)->host;
 	unsigned block_start, block_end;
 	sector_t block;
 	int err = 0;
@@ -1103,10 +1102,14 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 	unsigned bbits;
 	struct buffer_head *bh, *head, *wait[2], **wait_bh = wait;
 	bool decrypt = false;
+	bool uptodate = PageUptodate(page);
 
+	page = compound_head(page);
+	from = pos & ~hpage_mask(page);
+	to = from + len;
 	BUG_ON(!PageLocked(page));
-	BUG_ON(from > PAGE_SIZE);
-	BUG_ON(to > PAGE_SIZE);
+	BUG_ON(from > hpage_size(page));
+	BUG_ON(to > hpage_size(page));
 	BUG_ON(from > to);
 
 	if (!page_has_buffers(page))
@@ -1119,10 +1122,8 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 	    block++, block_start = block_end, bh = bh->b_this_page) {
 		block_end = block_start + blocksize;
 		if (block_end <= from || block_start >= to) {
-			if (PageUptodate(page)) {
-				if (!buffer_uptodate(bh))
-					set_buffer_uptodate(bh);
-			}
+			if (uptodate && !buffer_uptodate(bh))
+				set_buffer_uptodate(bh);
 			continue;
 		}
 		if (buffer_new(bh))
@@ -1134,19 +1135,25 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 				break;
 			if (buffer_new(bh)) {
 				clean_bdev_bh_alias(bh);
-				if (PageUptodate(page)) {
+				if (uptodate) {
 					clear_buffer_new(bh);
 					set_buffer_uptodate(bh);
 					mark_buffer_dirty(bh);
 					continue;
 				}
-				if (block_end > to || block_start < from)
-					zero_user_segments(page, to, block_end,
-							   block_start, from);
+				if (block_end > to || block_start < from) {
+					BUG_ON(to - from > PAGE_SIZE);
+					zero_user_segments(page +
+							block_start / PAGE_SIZE,
+						to % PAGE_SIZE,
+						(block_start % PAGE_SIZE) + blocksize,
+						block_start % PAGE_SIZE,
+						from % PAGE_SIZE);
+				}
 				continue;
 			}
 		}
-		if (PageUptodate(page)) {
+		if (uptodate) {
 			if (!buffer_uptodate(bh))
 				set_buffer_uptodate(bh);
 			continue;
-- 
2.11.0
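
For reference, the hpage_size()/hpage_mask() helpers used by this patch are introduced elsewhere in the series. The sketch below shows one plausible definition, consistent with how the patch uses them (degrading to PAGE_SIZE/PAGE_MASK for an order-0 page); it is an illustration under that assumption, not the series' actual code, and the same goes for the note on the reworked zero_user_segments() index math.

#include <linux/mm.h>

/*
 * Assumed behaviour of the huge-page helpers this patch relies on; the
 * series defines them elsewhere, so treat these definitions as a sketch.
 * For an order-0 page they reduce to PAGE_SIZE and PAGE_MASK; for a
 * compound head page they describe the whole huge page.
 */
static inline unsigned long hpage_size(struct page *page)
{
	return PAGE_SIZE << compound_order(page);
}

static inline unsigned long hpage_mask(struct page *page)
{
	return PAGE_MASK << compound_order(page);
}

/*
 * The reworked zero_user_segments() call assumes blocksize <= PAGE_SIZE,
 * so a single filesystem block never spans two subpages of the compound
 * page: the subpage holding the block is page + block_start / PAGE_SIZE,
 * and every offset is taken modulo PAGE_SIZE. The added
 * BUG_ON(to - from > PAGE_SIZE) appears to guard that this per-subpage
 * arithmetic stays consistent for the range being written.
 */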