Message-ID: <6ba91390-83e8-8702-2729-dc432abd3cc5@redhat.com>
Date:   Wed, 6 Apr 2022 19:18:04 +0800
From:   Xiubo Li <xiubli@...hat.com>
To:     Luís Henriques <lhenriques@...e.de>
Cc:     Jeff Layton <jlayton@...nel.org>,
        Ilya Dryomov <idryomov@...il.com>, ceph-devel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] ceph: invalidate pages when doing DIO in encrypted
 inodes


On 4/6/22 6:57 PM, Luís Henriques wrote:
> Xiubo Li <xiubli@...hat.com> writes:
>
>> On 4/1/22 9:32 PM, Luís Henriques wrote:
>>> When doing DIO on an encrypted inode, we need to invalidate the page cache in
>>> the range being written to, otherwise the cache will contain invalid data.
>>>
>>> Signed-off-by: Luís Henriques <lhenriques@...e.de>
>>> ---
>>>  fs/ceph/file.c | 11 ++++++++++-
>>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> Changes since v1:
>>> - Replaced truncate_inode_pages_range() with invalidate_inode_pages2_range()
>>> - Call fscache_invalidate with FSCACHE_INVAL_DIO_WRITE if we're doing DIO
>>>
>>> Note: I'm not really sure this last change is required; it doesn't really
>>> affect the generic/647 result, but it seems to be the most correct thing to do.
>>>
>>> diff --git a/fs/ceph/file.c b/fs/ceph/file.c
>>> index 5072570c2203..b2743c342305 100644
>>> --- a/fs/ceph/file.c
>>> +++ b/fs/ceph/file.c
>>> @@ -1605,7 +1605,7 @@ ceph_sync_write(struct kiocb *iocb, struct iov_iter *from, loff_t pos,
>>>  	if (ret < 0)
>>>  		return ret;
>>> 
>>> -	ceph_fscache_invalidate(inode, false);
>>> +	ceph_fscache_invalidate(inode, (iocb->ki_flags & IOCB_DIRECT));
>>>  	ret = invalidate_inode_pages2_range(inode->i_mapping,
>>>  					    pos >> PAGE_SHIFT,
>>>  					    (pos + count - 1) >> PAGE_SHIFT);
>> The above has already invalidated the pages, so why doesn't it work?
> I suspect the reason is that later on we loop through the pages, calling
> copy_page_from_iter() and then ceph_fscrypt_encrypt_pages().
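
For reference, the loop being described looks roughly like this (a 
simplified sketch, not the actual fs/ceph/file.c code; the helper name 
and the ceph_fscrypt_encrypt_pages() argument list are assumptions):

	/*
	 * Sketch of the encrypted sync/DIO write path: the user data is
	 * copied into freshly allocated bounce pages and encrypted in
	 * place before being sent to the OSDs.  None of these pages are
	 * ever inserted into inode->i_mapping.
	 */
	static int write_block_sketch(struct inode *inode, struct iov_iter *from,
				      struct page **pages, int num_pages, u64 off)
	{
		size_t left = iov_iter_count(from);
		int i;

		for (i = 0; i < num_pages && left; i++) {
			size_t bytes = min_t(size_t, left, PAGE_SIZE);

			/* kmaps/kunmaps the bounce page internally */
			if (copy_page_from_iter(pages[i], 0, bytes, from) != bytes)
				return -EFAULT;
			left -= bytes;
		}

		/* encrypt in place; argument list assumed here, and the
		 * block-aligned length is simplified for illustration */
		return ceph_fscrypt_encrypt_pages(inode, pages, off,
						  num_pages * PAGE_SIZE,
						  GFP_KERNEL);
	}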

I checked 'copy_page_from_iter()': it kmaps the pages but kunmaps them 
again afterwards, and it shouldn't touch the i_mapping unless I'm 
missing something important.
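
That is, roughly this pattern (a minimal sketch of the map/copy/unmap 
sequence, not the real lib/iov_iter.c implementation):

	#include <linux/highmem.h>
	#include <linux/uio.h>

	/*
	 * Copy 'bytes' of user data into one bounce page: the kernel
	 * mapping is strictly temporary and inode->i_mapping is never
	 * involved at all.
	 */
	static size_t copy_one_page_sketch(struct page *page, size_t offset,
					   size_t bytes, struct iov_iter *i)
	{
		void *kaddr = kmap_local_page(page);	/* temporary mapping */
		size_t copied = copy_from_iter(kaddr + offset, bytes, i);

		kunmap_local(kaddr);			/* torn down again */
		return copied;
	}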

As for 'ceph_fscrypt_encrypt_pages()', it encrypts/decrypts the contents 
in place; IMO if it needs to map a page it should also unmap it again, 
just like 'copy_page_from_iter()' does.

I thought it might be the RMW case, where reading the contents could 
update the i_mapping, but I checked the code and couldn't find any place 
doing that. So I am wondering where those page cache pages come from. 
And if they really do come from reading the contents, shouldn't we 
discard them instead of flushing them back?
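
In code terms, the two options would look something like this (a rough 
sketch with a made-up drop_dio_range() helper, not actual ceph code):

	#include <linux/fs.h>
	#include <linux/mm.h>

	/*
	 * Two ways to get rid of cached pages over a DIO write range:
	 * truncate_inode_pages_range() throws the pages away outright,
	 * dirty or not, while invalidate_inode_pages2_range() writes
	 * dirty pages back first (via the launder address_space op) and
	 * returns -EBUSY if something could not be invalidated.
	 */
	static int drop_dio_range(struct address_space *mapping,
				  loff_t pos, size_t count)
	{
		/* discard without writeback (what v1 of the patch did): */
		/* truncate_inode_pages_range(mapping, pos, pos + count - 1); */

		/* invalidate, flushing dirty pages back first (v2): */
		return invalidate_inode_pages2_range(mapping,
						     pos >> PAGE_SHIFT,
						     (pos + count - 1) >> PAGE_SHIFT);
	}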

BTW, what is the actual problem without this fix? Does xfstests fail?


-- Xiubo

> Cheers,
