Message-ID: <4A28EF7D.5030704@linux.vnet.ibm.com>
Date:	Fri, 05 Jun 2009 12:12:13 +0200
From:	Peter Oberparleiter <oberpar@...ux.vnet.ibm.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Amerigo Wang <xiyou.wangcong@...il.com>,
	linux-kernel@...r.kernel.org, andi@...stfloor.org,
	ying.huang@...el.com, W.Li@....COM, michaele@....ibm.com,
	mingo@...e.hu, heicars2@...ux.vnet.ibm.com,
	mschwid2@...ux.vnet.ibm.com
Subject: Re: [PATCH 3/4] gcov: add gcov profiling infrastructure

Andrew Morton wrote:
> On Fri, 05 Jun 2009 11:23:04 +0200 Peter Oberparleiter <oberpar@...ux.vnet.ibm.com> wrote:
> 
>> Amerigo Wang wrote:
>>> On Wed, Jun 03, 2009 at 05:26:22PM +0200, Peter Oberparleiter wrote:
>>>> Peter Oberparleiter wrote:
>>>>> Andrew Morton wrote:
>>>>>> On Tue, 02 Jun 2009 13:44:02 +0200
>>>>>> Peter Oberparleiter <oberpar@...ux.vnet.ibm.com> wrote:
>>>>>>> +	/* Duplicate gcov_info. */
>>>>>>> +	active = num_counter_active(info);
>>>>>>> +	dup = kzalloc(sizeof(struct gcov_info) +
>>>>>>> +		      sizeof(struct gcov_ctr_info) * active, GFP_KERNEL);
>>>>>> How large can this allocation be?
>>>>> Hm, good question. Having a look at my test system, I see coverage 
>>>>> data files of up to 60kb in size. With counters making up the largest 
>>>>> part of those, I'd guess the allocation size can be around 55kb. I 
>>>>> assume that makes it a candidate for vmalloc?
>>>> A further run with debug output showed that the maximum size is
>>>> actually around 4k, so in my opinion, there is no need to switch
>>>> to vmalloc.
>>> Unless you want virtually contiguous memory, you don't need to
>>> bother with vmalloc().
>>>
>>> kmalloc() and get_free_pages() are both fine for this.
>> kmalloc() requires physically contiguous pages to serve an allocation 
>> request larger than a single page. The longer a kernel runs, the more 
>> fragmented the pool of free pages becomes, and the probability of 
>> finding enough contiguous free pages drops significantly.
>>
>> In this case (having had a 3rd look), I found allocations of up to 
>> ~50kb, so to be sure, I'll switch that particular allocation to vmalloc().
> 
> Well, vmalloc() isn't magic.  It can suffer internal fragmentation of
> the fixed-sized virtual address arena.
> 
> Is it possible to redo the data structures so that the large array
> isn't needed?  Use a list, or move the data elsewhere or such?

Unfortunately not - the format of the data is dictated by gcc. Any 
attempt to break it down into page-sized chunks would only imitate what 
vmalloc() already does.
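
For illustration, here is a rough sketch of the kind of size-based 
fallback being discussed. This is not the actual patch: the helper 
names are made up, and the caller is assumed to remember which 
allocator was used so it can call the matching free routine later.

#include <linux/mm.h>      /* PAGE_SIZE */
#include <linux/slab.h>    /* kzalloc(), kfree(), GFP_KERNEL */
#include <linux/string.h>  /* memset() */
#include <linux/vmalloc.h> /* vmalloc(), vfree() */

/*
 * Hypothetical helper: use kzalloc() for requests that fit into a
 * single page, and fall back to vmalloc(), which only needs virtually
 * contiguous memory, for anything larger (or if kzalloc() fails).
 */
static void *profile_data_alloc(size_t size, int *is_vmalloc)
{
	void *p;

	if (size <= PAGE_SIZE) {
		p = kzalloc(size, GFP_KERNEL);
		if (p) {
			*is_vmalloc = 0;
			return p;
		}
	}
	p = vmalloc(size);
	if (p) {
		memset(p, 0, size);
		*is_vmalloc = 1;
	}
	return p;
}

/* Free with the allocator that was actually used. */
static void profile_data_free(void *p, int is_vmalloc)
{
	if (is_vmalloc)
		vfree(p);
	else
		kfree(p);
}

Later kernels factored this pattern into kvmalloc()/kvfree(), but those 
helpers did not exist at the time of this thread.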

Note though that this function is not called very often - it's only used 
to preserve coverage data for modules which are unloaded. And I only saw 
the 50kb counter data size for one file: kernel/sched.c (using a 
debugging patch).

So hm, I'm not sure about this anymore. I can also leave it at kmalloc() 
- chances are slim that anyone will actually experience a problem, and 
if they do, they get an "order-n allocation failed" message, so there's 
a hint at the cause of the problem.
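
For reference, the "order" in that message is the power-of-two number 
of contiguous pages the page allocator had to find. A ~50kb request 
with 4kb pages needs 13 pages, which gets rounded up to 16 pages, i.e. 
order 4. A tiny illustration (the function name is made up):

#include <asm/page.h>		/* get_order() */
#include <linux/types.h>	/* size_t */

/*
 * 50kb with 4kb pages is 13 pages, rounded up to the next power of
 * two: 16 pages = 2^4. A failed ~50kb kmalloc() would therefore
 * trigger an order-4 allocation failure warning.
 */
static unsigned int counter_block_order(size_t size)
{
	return get_order(size);		/* get_order(50 * 1024) == 4 */
}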

