Message-ID: <20151005075318.GE2903@worktop.programming.kicks-ass.net>
Date: Mon, 5 Oct 2015 09:53:18 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: linux-mm@...ck.org, Jerome Marchand <jmarchan@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins <hughd@...gle.com>,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	Michal Hocko <mhocko@...e.cz>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Cyrill Gorcunov <gorcunov@...nvz.org>, Randy Dunlap <rdunlap@...radead.org>,
	linux-s390@...r.kernel.org, Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>, Paul Mackerras <paulus@...ba.org>,
	Arnaldo Carvalho de Melo <acme@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
	Linux API <linux-api@...r.kernel.org>,
	Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Subject: Re: [PATCH v4 2/4] mm, proc: account for shmem swap in /proc/pid/smaps

On Fri, Oct 02, 2015 at 03:35:49PM +0200, Vlastimil Babka wrote:
> +static unsigned long smaps_shmem_swap(struct vm_area_struct *vma)
> +{
> +	struct inode *inode;
> +	unsigned long swapped;
> +	pgoff_t start, end;
> +
> +	if (!vma->vm_file)
> +		return 0;
> +
> +	inode = file_inode(vma->vm_file);
> +
> +	if (!shmem_mapping(inode->i_mapping))
> +		return 0;
> +
> +	/*
> +	 * The easier cases are when the shmem object has nothing in swap, or
> +	 * we have the whole object mapped. Then we can simply use the stats
> +	 * that are already tracked by shmem.
> +	 */
> +	swapped = shmem_swap_usage(inode);
> +
> +	if (swapped == 0)
> +		return 0;
> +
> +	if (vma->vm_end - vma->vm_start >= inode->i_size)
> +		return swapped;
> +
> +	/*
> +	 * Here we have to inspect individual pages in our mapped range to
> +	 * determine how much of them are swapped out. Thanks to RCU, we don't
> +	 * need i_mutex to protect against truncating or hole punching.
> +	 */

At the very least put in an assertion that we hold the RCU read lock;
otherwise RCU doesn't guarantee anything, and it's not obvious it is
held here.

> +	start = linear_page_index(vma, vma->vm_start);
> +	end = linear_page_index(vma, vma->vm_end);
> +
> +	return shmem_partial_swap_usage(inode->i_mapping, start, end);
> +}

> + * Determine (in bytes) how much of the whole shmem object is swapped out.
> + */
> +unsigned long shmem_swap_usage(struct inode *inode)
> +{
> +	struct shmem_inode_info *info = SHMEM_I(inode);
> +	unsigned long swapped;
> +
> +	/* Mostly an overkill, but it's not atomic64_t */

Yeah, that doesn't make any kind of sense.

> +	spin_lock(&info->lock);
> +	swapped = info->swapped;
> +	spin_unlock(&info->lock);
> +
> +	return swapped << PAGE_SHIFT;
> +}
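
For reference, a minimal sketch of the assertion Peter asks for, assuming
lockdep (CONFIG_PROVE_RCU) is available; the placement at the top of
shmem_partial_swap_usage() and the warning text are illustrative, not part
of the posted patch:

	/*
	 * Per the review comment: warn (under lockdep) if the radix-tree
	 * walk is attempted without the RCU read lock held, since RCU is
	 * the only thing protecting us against concurrent truncation and
	 * hole punching here.
	 */
	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
			 "shmem_partial_swap_usage() needs rcu_read_lock()");

Alternatively, the helper could take rcu_read_lock()/rcu_read_unlock()
around the walk itself, which makes the locking requirement local to the
function rather than an implicit contract with its callers.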
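
And one hedged reading of the second objection: info->swapped is a plain
unsigned long, and a load of a naturally aligned word is atomic on all
architectures the kernel supports, so the spinlock (and the comment
excusing it) could simply go away for this read. A sketch only, not what
the posted patch does:

	unsigned long shmem_swap_usage(struct inode *inode)
	{
		struct shmem_inode_info *info = SHMEM_I(inode);

		/*
		 * A naturally aligned word load is atomic; READ_ONCE()
		 * stops the compiler from tearing or caching the load,
		 * so info->lock is not needed for a mere snapshot.
		 */
		return READ_ONCE(info->swapped) << PAGE_SHIFT;
	}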