Message-ID: <CAPcyv4h6FFX05U5atqSFRNkdawq81638ydD5PAtmKzWGnkyWoA@mail.gmail.com>
Date:	Tue, 3 Nov 2015 14:50:32 -0800
From:	Dan Williams <dan.j.williams@...el.com>
To:	Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc:	Jens Axboe <axboe@...com>, Jens Axboe <axboe@...nel.dk>,
	Jan Kara <jack@...e.cz>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	david <david@...morbit.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Jeff Moyer <jmoyer@...hat.com>, Jan Kara <jack@...e.com>,
	Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v3 03/15] block, dax: fix lifetime of in-kernel dax
 mappings with dax_map_atomic()

On Tue, Nov 3, 2015 at 11:01 AM, Ross Zwisler
<ross.zwisler@...ux.intel.com> wrote:
> On Sun, Nov 01, 2015 at 11:29:58PM -0500, Dan Williams wrote:
>> The DAX implementation needs to protect new calls to ->direct_access()
>> and usage of its return value against unbind of the underlying block
>> device.  Use blk_queue_enter()/blk_queue_exit() to either prevent
>> blk_cleanup_queue() from proceeding, or fail the dax_map_atomic() if the
>> request_queue is being torn down.
>>
>> Cc: Jan Kara <jack@...e.com>
>> Cc: Jens Axboe <axboe@...nel.dk>
>> Cc: Christoph Hellwig <hch@....de>
>> Cc: Dave Chinner <david@...morbit.com>
>> Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>
>> Reviewed-by: Jeff Moyer <jmoyer@...hat.com>
>> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
>> ---
> <>
[trim the comments that Jeff responded to]

>> @@ -305,11 +353,10 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
>>               goto out;
>>       }
>>
>> -     error = bdev_direct_access(bh->b_bdev, sector, &addr, &pfn, bh->b_size);
>> -     if (error < 0)
>> -             goto out;
>> -     if (error < PAGE_SIZE) {
>> -             error = -EIO;
>> +     addr = __dax_map_atomic(bdev, to_sector(bh, inode), bh->b_size,
>> +                     &pfn, NULL);
>> +     if (IS_ERR(addr)) {
>> +             error = PTR_ERR(addr);
>
> Just a note that we lost the check for bdev_direct_access() returning less
> than PAGE_SIZE.  Are we sure this can't happen and that it's safe to remove
> the check?

Yes. As Jeff recommends, I'll do a follow-on patch that makes a minimum
return of PAGE_SIZE an explicit guarantee of bdev_direct_access(), just
like the page alignment.

>
>> @@ -609,15 +655,20 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>>               result = VM_FAULT_NOPAGE;
>>               spin_unlock(ptl);
>>       } else {
>> -             sector = bh.b_blocknr << (blkbits - 9);
>> -             length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn,
>> -                                             bh.b_size);
>> -             if (length < 0) {
>> +             long length;
>> +             unsigned long pfn;
>> +             void __pmem *kaddr = __dax_map_atomic(bdev,
>> +                             to_sector(&bh, inode), HPAGE_SIZE, &pfn,
>> +                             &length);
>
> Let's use PMD_SIZE instead of HPAGE_SIZE to be consistent with the rest of the
> DAX code.
>

I changed this to HPAGE_SIZE on advice from Dave Hansen.  I'll insert a
preceding cleanup patch in this series to do the conversion, since we
should be consistent with the use of PAGE_SIZE in the other dax paths.

>> +
>> +             if (IS_ERR(kaddr)) {
>>                       result = VM_FAULT_SIGBUS;
>>                       goto out;
>>               }
>> -             if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
>> +             if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) {
>> +                     dax_unmap_atomic(bdev, kaddr);
>>                       goto fallback;
>> +             }
>>
>>               if (buffer_unwritten(&bh) || buffer_new(&bh)) {
>>                       clear_pmem(kaddr, HPAGE_SIZE);
>
> Ditto, let's use PMD_SIZE for consistency (I realize this was changed ealier
> in the series).

Ditto on the rebuttal.