Open Source and information security mailing list archives
Date: Wed, 14 Nov 2018 16:45:57 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Dan Williams <dan.j.williams@...el.com>, Keith Busch <keith.busch@...el.com>
CC: John Hubbard <john.hubbard@...il.com>, Linux MM <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-rdma <linux-rdma@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Dave Hansen <dave.hansen@...el.com>
Subject: Re: [PATCH v2 1/6] mm/gup: finish consolidating error handling

On 11/12/18 8:14 AM, Dan Williams wrote:
> On Mon, Nov 12, 2018 at 7:45 AM Keith Busch <keith.busch@...el.com> wrote:
>>
>> On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubbard@...il.com wrote:
>>> From: John Hubbard <jhubbard@...dia.com>
>>>
>>> An upcoming patch wants to be able to operate on each page that
>>> get_user_pages has retrieved. In order to do that, it's best to
>>> have a common exit point from the routine. Most of this has been
>>> taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while
>>> pinning pages"), but there was one case remaining.
>>>
>>> Also, there was still an unnecessary shadow declaration (with a
>>> different type) of the "ret" variable, which this commit removes.
>>>
>>> Cc: Keith Busch <keith.busch@...el.com>
>>> Cc: Dan Williams <dan.j.williams@...el.com>
>>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>> Cc: Dave Hansen <dave.hansen@...el.com>
>>> Signed-off-by: John Hubbard <jhubbard@...dia.com>
>>> ---
>>>  mm/gup.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index f76e77a2d34b..55a41dee0340 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
>>>  		if (!vma || start >= vma->vm_end) {
>>>  			vma = find_extend_vma(mm, start);
>>>  			if (!vma && in_gate_area(mm, start)) {
>>> -				int ret;
>>>  				ret = get_gate_page(mm, start & PAGE_MASK,
>>>  						gup_flags, &vma,
>>>  						pages ? &pages[i] : NULL);
>>>  				if (ret)
>>> -					return i ? : ret;
>>> +					goto out;
>>>  				ctx.page_mask = 0;
>>>  				goto next_page;
>>>  			}
>>
>> This also fixes a potentially leaked dev_pagemap reference count if a
>> failure occurs when an iteration crosses a vma boundary. I don't think
>> it's normal to have different vma's on a users mapped zone device memory,
>> but good to fix anyway.
>
> Does not sound abnormal to me, we should promote this as a fix for the
> current cycle with an updated changelog.
>

Andrew, should I send this patch separately, or do you have what you
need already?

thanks,
-- 
John Hubbard
NVIDIA