Message-ID: <35FD53F367049845BC99AC72306C23D1044A02027E17@CNBJMBX05.corpusers.net>
Date: Tue, 10 Feb 2015 15:05:18 +0800
From: "Wang, Yalin" <Yalin.Wang@...ymobile.com>
To: 'Andrew Morton' <akpm@...ux-foundation.org>
CC: "'Kirill A. Shutemov'" <kirill@...temov.name>,
"'arnd@...db.de'" <arnd@...db.de>,
"'linux-arch@...r.kernel.org'" <linux-arch@...r.kernel.org>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
"'linux@....linux.org.uk'" <linux@....linux.org.uk>,
"'linux-arm-kernel@...ts.infradead.org'"
<linux-arm-kernel@...ts.infradead.org>
Subject: RE: [RFC] change non-atomic bitops method
> -----Original Message-----
> From: Andrew Morton [mailto:akpm@...ux-foundation.org]
> Sent: Tuesday, February 10, 2015 4:34 AM
> To: Wang, Yalin
> Cc: 'Kirill A. Shutemov'; 'arnd@...db.de'; 'linux-arch@...r.kernel.org';
> 'linux-kernel@...r.kernel.org'; 'linux@....linux.org.uk'; 'linux-arm-
> kernel@...ts.infradead.org'
> Subject: Re: [RFC] change non-atomic bitops method
>
> On Mon, 9 Feb 2015 16:18:10 +0800 "Wang, Yalin" <Yalin.Wang@...ymobile.com>
> wrote:
>
> > > That we're running clear_bit against a cleared bit 10% of the time is a
> > > bit alarming. I wonder where that's coming from.
> > >
> > > The enormous miss count in test_and_clear_bit() might indicate an
> > > inefficiency somewhere.
> > I re-tested the patch on a 3.10 kernel.
> > The results look like this:
> >
> > VmallocChunk: 251498164 kB
> > __set_bit_miss_count:11730 __set_bit_success_count:1036316
> > __clear_bit_miss_count:209640 __clear_bit_success_count:4806556
> > __test_and_set_bit_miss_count:0 __test_and_set_bit_success_count:121
> > __test_and_clear_bit_miss_count:0 __test_and_clear_bit_success_count:445
> >
> > The __clear_bit miss rate is a little high (209640 misses out of 5016196 calls, ~4.2%).
> > I checked the log, and most of the misses come from these two call paths:
> >
> > <6>[ 442.701798] [<ffffffc00021d084>] warn_slowpath_fmt+0x4c/0x58
> > <6>[ 442.701805] [<ffffffc0002461a8>] __clear_bit+0x98/0xa4
> > <6>[ 442.701813] [<ffffffc0003126ac>] __alloc_fd+0xc8/0x124
> > <6>[ 442.701821] [<ffffffc000312768>] get_unused_fd_flags+0x28/0x34
> > <6>[ 442.701828] [<ffffffc0002f9370>] do_sys_open+0x10c/0x1c0
> > <6>[ 442.701835] [<ffffffc0002f9458>] SyS_openat+0xc/0x18
> > This one is the __clear_bit() in __clear_close_on_exec(fd, fdt);
> >
> >
> >
> > <6>[ 442.695354] [<ffffffc00021d084>] warn_slowpath_fmt+0x4c/0x58
> > <6>[ 442.695359] [<ffffffc0002461a8>] __clear_bit+0x98/0xa4
> > <6>[ 442.695367] [<ffffffc000312340>] dup_fd+0x1d4/0x280
> > <6>[ 442.695375] [<ffffffc00021b07c>] copy_process.part.56+0x42c/0xe38
> > <6>[ 442.695382] [<ffffffc00021bb9c>] do_fork+0xe0/0x360
> > <6>[ 442.695389] [<ffffffc00021beb4>] SyS_clone+0x10/0x1c
> > This one is the __clear_bit() in __clear_open_fd(open_files - i, new_fdt);
> >
> > Do we need a test_bit() before clear_bit() at these 2 places?
>
> I don't know. I was happily typing in this:
>
> diff -puN include/linux/bitops.h~a include/linux/bitops.h
> --- a/include/linux/bitops.h~a
> +++ a/include/linux/bitops.h
> @@ -226,5 +226,37 @@ extern unsigned long find_last_bit(const
> unsigned long size);
> #endif
>
> +/**
> + * __set_clear_bit - non-atomically set a bit if it is presently clear
> + * @nr: The bit number
> + * @addr: The base address of the operation
> + *
> + * __set_clear_bit() and similar functions avoid unnecessarily dirtying a
> + * cacheline when the operation will have no effect.
> + */
> +static inline void __set_clear_bit(unsigned nr, volatile unsigned long *addr)
> +{
> +	if (!test_bit(nr, addr))
> +		__set_bit(nr, addr);
> +}
> +
> +static inline void __clear_set_bit(unsigned nr, volatile unsigned long *addr)
> +{
> +	if (test_bit(nr, addr))
> +		__clear_bit(nr, addr);
> +}
> +
> +static inline void set_clear_bit(unsigned nr, volatile unsigned long *addr)
> +{
> +	if (!test_bit(nr, addr))
> +		set_bit(nr, addr);
> +}
> +
> +static inline void clear_set_bit(unsigned nr, volatile unsigned long *addr)
> +{
> +	if (test_bit(nr, addr))
> +		clear_bit(nr, addr);
> +}
> +
> #endif /* __KERNEL__ */
> #endif
>
> (maybe __set_bit_if_clear would be a better name)
>
> But I don't know if it will do anything useful. The CPU *should* be
> able to avoid dirtying the cacheline on its own: it has all the info it
> needs to know that no writeback will be needed. But I don't know which
> (if any) CPUs perform this optimisation.
I will send a new patch for your review.
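For the two fs/file.c call sites seen in the traces, roughly along these lines
(untested sketch; __clear_bit_if_set is only an illustrative name here, your
__clear_set_bit helper from the draft above would do the same job):

/* Only write (and dirty the cacheline) when the bit is actually set. */
static inline void __clear_bit_if_set(unsigned nr, volatile unsigned long *addr)
{
	if (test_bit(nr, addr))
		__clear_bit(nr, addr);
}

/* fs/file.c: the two call sites from the stack traces above. */
static inline void __clear_close_on_exec(int fd, struct fdtable *fdt)
{
	__clear_bit_if_set(fd, fdt->close_on_exec);
}

static inline void __clear_open_fd(int fd, struct fdtable *fdt)
{
	__clear_bit_if_set(fd, fdt->open_fds);
}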
Thanks