Date:	Thu, 8 Oct 2015 23:32:43 +0900
From:	Jungseok Lee <jungseoklee85@...il.com>
To:	Pratyush Anand <panand@...hat.com>
Cc:	catalin.marinas@....com, will.deacon@....com,
	linux-arm-kernel@...ts.infradead.org, james.morse@....com,
	takahiro.akashi@...aro.org, mark.rutland@....com,
	barami97@...il.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/2] arm64: Introduce IRQ stack

On Oct 8, 2015, at 7:25 PM, Pratyush Anand wrote:
> Hi Jungseok,

Hi Pratyush,

> 
> On 07/10/2015:03:28:11 PM, Jungseok Lee wrote:
>> Currently, kernel context and interrupts are handled using a single
>> kernel stack navigated by sp_el1. This forces the system to use a 16KB
>> stack rather than an 8KB one. This restriction makes low-memory platforms
>> suffer from memory pressure and the accompanying performance degradation.
> 
> How will it behave on a 64K page system? There, it would take at least 64K per cpu,
> right?

It would take 16KB per cpu even on a 64KB page system.
The following code snippet from kernel/fork.c shows why.

----8<----
# if THREAD_SIZE >= PAGE_SIZE
static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
                                                  int node)
{
        struct page *page = alloc_kmem_pages_node(node, THREADINFO_GFP,
                                                  THREAD_SIZE_ORDER);

        return page ? page_address(page) : NULL;
}

static inline void free_thread_info(struct thread_info *ti)
{
        free_kmem_pages((unsigned long)ti, THREAD_SIZE_ORDER);
}
# else
static struct kmem_cache *thread_info_cache;

static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
                                                  int node)
{
        return kmem_cache_alloc_node(thread_info_cache, THREADINFO_GFP, node);
}

static void free_thread_info(struct thread_info *ti)
{
        kmem_cache_free(thread_info_cache, ti);
}

void thread_info_cache_init(void)
{
        thread_info_cache = kmem_cache_create("thread_info", THREAD_SIZE,
                                              THREAD_SIZE, 0, NULL);
        BUG_ON(thread_info_cache == NULL);
}
# endif
----8<----
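
For reference, the arm64 side looks roughly like this (paraphrased from
arch/arm64/include/asm/thread_info.h, not verbatim):

----8<----
/* Paraphrased arm64 definitions, not verbatim */
#ifndef CONFIG_ARM64_64K_PAGES
#define THREAD_SIZE_ORDER	2	/* 4 x 4KB pages = 16KB */
#endif

#define THREAD_SIZE		16384	/* 16KB regardless of page size */
----8<----

So with 64KB pages, THREAD_SIZE (16KB) is smaller than PAGE_SIZE, the
kmem_cache branch above is taken, and each allocation costs 16KB rather
than a whole 64KB page.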

> 
>> +int alloc_irq_stack(unsigned int cpu)
>> +{
>> +	void *stack;
>> +
>> +	if (per_cpu(irq_stacks, cpu).stack)
>> +		return 0;
>> +
>> +	stack = (void *)__get_free_pages(THREADINFO_GFP, THREAD_SIZE_ORDER);
> 
> The above would not compile for 64K pages, as THREAD_SIZE_ORDER is only defined
> for the non-64K case. This needs to be fixed.

Thanks for pointing it out! I will update it.
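
Just thinking aloud, one rough sketch (my own illustration, not necessarily
what the next revision will do) would be to mirror the kernel/fork.c split
above so the allocation builds for both page sizes:

----8<----
/* Sketch only; the helper and cache names are illustrative. */
#if THREAD_SIZE >= PAGE_SIZE
static void *alloc_irq_stack_mem(void)
{
	/* 4KB pages: THREAD_SIZE_ORDER is defined, so grab 16KB of pages */
	return (void *)__get_free_pages(THREADINFO_GFP, THREAD_SIZE_ORDER);
}
#else
static struct kmem_cache *irq_stack_cache;

static void *alloc_irq_stack_mem(void)
{
	/* 64KB pages: carve a 16KB, THREAD_SIZE-aligned chunk from a slab cache */
	if (!irq_stack_cache)
		irq_stack_cache = kmem_cache_create("irq_stack", THREAD_SIZE,
						    THREAD_SIZE, 0, NULL);
	return irq_stack_cache ?
	       kmem_cache_alloc(irq_stack_cache, THREADINFO_GFP) : NULL;
}
#endif
----8<----

Alternatively, __get_free_pages(THREADINFO_GFP, get_order(THREAD_SIZE)) would
also compile everywhere, but on 64KB pages it would round each stack up to a
full page.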

Best Regards
Jungseok Lee
