Message-ID: <995a130b-f07a-4771-1fe3-477d2f3c1e8e@linux.intel.com>
Date:   Fri, 9 Apr 2021 10:17:17 -0700
From:   Tim Chen <tim.c.chen@...ux.intel.com>
To:     Miaohe Lin <linmiaohe@...wei.com>, akpm@...ux-foundation.org
Cc:     hannes@...xchg.org, mhocko@...e.com, iamjoonsoo.kim@....com,
        vbabka@...e.cz, alex.shi@...ux.alibaba.com, willy@...radead.org,
        minchan@...nel.org, richard.weiyang@...il.com,
        ying.huang@...el.com, hughd@...gle.com,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/5] swap: fix do_swap_page() race with swapoff



On 4/9/21 1:42 AM, Miaohe Lin wrote:
> On 2021/4/9 5:34, Tim Chen wrote:
>>
>>
>> On 4/8/21 6:08 AM, Miaohe Lin wrote:
>>> When I was investigating the swap code, I found the below possible race
>>> window:
>>>
>>> CPU 1					CPU 2
>>> -----					-----
>>> do_swap_page
>>>   synchronous swap_readpage
>>>     alloc_page_vma
>>> 					swapoff
>>> 					  release swap_file, bdev, or ...
>>
> 
> Many thanks for the quick review and reply!
> 
>> Perhaps I'm missing something.  The release of swap_file, bdev, etc.
>> (in destroy_swap_extents()) happens after we have cleared the SWP_VALID bit
>> in si->flags, if I read the swapoff code correctly.
> Agree. Let's look at this more closely:
> CPU1								CPU2
> -----								-----
> swap_readpage
>   if (data_race(sis->flags & SWP_FS_OPS)) {
> 								swapoff
> 								  p->swap_file = NULL;
>     struct file *swap_file = sis->swap_file;
>     struct address_space *mapping = swap_file->f_mapping;[oops!]
> 								  ...
> 								  p->flags = 0;
>     ...
> 
> Does this make sense to you?

p->swap_file = NULL happens after the
p->flags &= ~SWP_VALID, synchronize_rcu(), destroy_swap_extents() sequence in swapoff().
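
For reference, that teardown ordering in swapoff() looks roughly like the
following (abbreviated from mm/swapfile.c as of this kernel version; error
handling and unrelated steps trimmed):

    /* swapoff(), heavily abbreviated */
    spin_lock(&swap_lock);
    spin_lock(&p->lock);
    ...
    p->flags &= ~SWP_VALID;      /* new readers now see the device as going away */
    spin_unlock(&p->lock);
    spin_unlock(&swap_lock);

    synchronize_rcu();           /* wait for readers already inside the RCU
                                    section entered by get_swap_device() */

    flush_work(&p->discard_work);
    destroy_swap_extents(p);     /* only now tear down the extents... */
    ...
    swap_file = p->swap_file;
    p->swap_file = NULL;         /* ...and drop the backing file */
    ...
    p->flags = 0;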

So I don't think the sequence you illustrated on CPU2 is in the right order.
That said, without get_swap_device/put_swap_device in swap_readpage, you could
potentially blow past synchronize_rcu() on CPU2 and cause a problem.  So I think
the problematic race looks something like the following:


CPU1								CPU2
-----								-----
swap_readpage
  if (data_race(sis->flags & SWP_FS_OPS)) {
								swapoff
								  p->flags &= ~SWP_VALID;
								  ..
								  synchronize_rcu();
								  ..
								  p->swap_file = NULL;
    struct file *swap_file = sis->swap_file;
    struct address_space *mapping = swap_file->f_mapping;[oops!]
								  ...
    ...

By adding get_swap_device/put_swap_device, then the race is fixed.


CPU1								CPU2
-----								-----
swap_readpage
  get_swap_device()
  ..
  if (data_race(sis->flags & SWP_FS_OPS)) {
								swapoff
								  p->flags &= ~SWP_VALID;
								  ..
    struct file *swap_file = sis->swap_file;
    struct address_space *mapping = swap_file->f_mapping;[valid value]
  ..
  put_swap_device()
								  synchronize_rcu();
								  ..
								  p->swap_file = NULL;


> 
>>>
>>>       swap_readpage
>>> 	check sis->flags is ok
>>> 	  access swap_file, bdev...[oops!]
>>> 					    si->flags = 0
>>
>> This happens after we clear the si->flags
>> 					synchronize_rcu()
>> 					release swap_file, bdev, in destroy_swap_extents()
>>
>> So I think if we have get_swap_device/put_swap_device in do_swap_page,
>> it should fix the race you've pointed out here.  
>> Then synchronize_rcu() will wait till we have completed do_swap_page and
>> call put_swap_device.
> 
> Right, get_swap_device/put_swap_device could fix this race. __But__ rcu_read_lock()
> in get_swap_device() could disable preemption, and do_swap_page() may take a really long
> time because it involves I/O. It may not be acceptable to disable preemption for such a
> long time. :(

I can see that it is not a good idea to hold the RCU read lock for a long
time over a slow file I/O operation, which would be a side effect of
introducing get/put_swap_device into swap_readpage.  So using percpu_ref
would then be preferable for synchronization once we introduce
get/put_swap_device into swap_readpage.
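
Just to sketch the direction, a percpu_ref-based pair could look something like
the following (purely illustrative; the 'users' ref, the 'comp' completion, and
the exact shutdown sequence here are assumptions, not code from an existing patch):

    /* assume a 'struct percpu_ref users' and a 'struct completion comp'
     * added to swap_info_struct */

    static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
    {
        struct swap_info_struct *si = swp_swap_info(entry);

        if (!si || !percpu_ref_tryget_live(&si->users))
            return NULL;    /* swapoff has already killed the ref */
        return si;          /* no rcu_read_lock() held, so the caller may sleep */
    }

    static inline void put_swap_device(struct swap_info_struct *si)
    {
        percpu_ref_put(&si->users);
    }

    /* swapoff() side: stop new users, then wait for existing ones to drain;
     * the ref's release callback would do complete(&si->comp) */
    percpu_ref_kill(&p->users);
    wait_for_completion(&p->comp);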

Tim
