Date:	Tue, 15 Oct 2013 10:27:14 -0700
From:	Ning Qu <quning@...gle.com>
To:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc:	Andrea Arcangeli <aarcange@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Wu Fengguang <fengguang.wu@...el.com>, Jan Kara <jack@...e.cz>,
	Mel Gorman <mgorman@...e.de>, linux-mm@...ck.org,
	Andi Kleen <ak@...ux.intel.com>,
	Matthew Wilcox <willy@...ux.intel.com>,
	Hillf Danton <dhillf@...il.com>, Dave Hansen <dave@...1.net>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/12] mm, thp, tmpfs: support to add huge page into page
 cache for tmpfs

Yes, I can try. The code is pretty much similar, with only minor differences.

One thing I can do is move the spin-lock section (together with the
corresponding error handling) into a common function.

The only problem I can see right now is that the shmem path needs the
following additional line:

__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);

which means the common function needs to know whether the call is
coming from shmem or not. Is it OK to add an additional parameter just
for that, or is there a better way to infer that information? Thanks!
Best wishes,
-- 
Ning Qu (曲宁) | Software Engineer | quning@...gle.com | +1-408-418-6066


On Tue, Oct 15, 2013 at 3:02 AM, Kirill A. Shutemov
<kirill.shutemov@...ux.intel.com> wrote:
> Ning Qu wrote:
>> For replacing a page inside the page cache, we assume the huge page
>> has been split before getting here.
>>
>> For adding a new page to page cache, huge page support has been added.
>>
>> Also refactor the shm_add_to_page_cache function.
>>
>> Signed-off-by: Ning Qu <quning@...il.com>
>> ---
>>  mm/shmem.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++------
>>  1 file changed, 88 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index a857ba8..447bd14 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -277,27 +277,23 @@ static bool shmem_confirm_swap(struct address_space *mapping,
>>  }
>>
>>  /*
>> - * Like add_to_page_cache_locked, but error if expected item has gone.
>> + * Replace the swap entry with page cache entry
>>   */
>> -static int shmem_add_to_page_cache(struct page *page,
>> +static int shmem_replace_page_page_cache(struct page *page,
>>                                  struct address_space *mapping,
>>                                  pgoff_t index, gfp_t gfp, void *expected)
>>  {
>>       int error;
>>
>> -     VM_BUG_ON(!PageLocked(page));
>> -     VM_BUG_ON(!PageSwapBacked(page));
>> +     BUG_ON(PageTransHugeCache(page));
>>
>>       page_cache_get(page);
>>       page->mapping = mapping;
>>       page->index = index;
>>
>>       spin_lock_irq(&mapping->tree_lock);
>> -     if (!expected)
>> -             error = radix_tree_insert(&mapping->page_tree, index, page);
>> -     else
>> -             error = shmem_radix_tree_replace(mapping, index, expected,
>> -                                                              page);
>> +
>> +     error = shmem_radix_tree_replace(mapping, index, expected, page);
>>       if (!error) {
>>               mapping->nrpages++;
>>               __inc_zone_page_state(page, NR_FILE_PAGES);
>> @@ -312,6 +308,87 @@ static int shmem_add_to_page_cache(struct page *page,
>>  }
>>
>>  /*
>> + * Insert a new page into the page cache
>> + */
>> +static int shmem_insert_page_page_cache(struct page *page,
>> +                                struct address_space *mapping,
>> +                                pgoff_t index, gfp_t gfp)
>> +{
>
> You copy-paste most of add_to_page_cache_locked() code here. Is there a
> way to share the code? Move common part into __add_to_page_cache_locked()
> or something.
>
> --
>  Kirill A. Shutemov
