Message-Id: <20080223000613.123c57b6.akpm@linux-foundation.org>
Date:	Sat, 23 Feb 2008 00:06:13 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	netdev@...r.kernel.org, trond.myklebust@....uio.no
Subject: Re: [PATCH 17/28] netvm: hook skb allocation to reserves

On Wed, 20 Feb 2008 15:46:27 +0100 Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:

> Change the skb allocation api to indicate RX usage and use this to fall back to
> the reserve when needed. SKBs allocated from the reserve are tagged in
> skb->emergency.
> 
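
A rough sketch of the allocation-side fallback described above (illustrative
only: rx_alloc_skb(), rx_emergency_get() and the __GFP_MEMALLOC retry are
assumptions about the shape of the change; only skb->emergency and
rx_emergency_put() are taken from the code quoted further down):

static struct sk_buff *rx_alloc_skb(unsigned int size, gfp_t gfp_mask)
{
	struct sk_buff *skb;

	skb = alloc_skb(size, gfp_mask);
	if (skb)
		return skb;

	/* Normal allocation failed: try to draw from the RX reserve. */
	if (!rx_emergency_get(size))
		return NULL;

	skb = alloc_skb(size, gfp_mask | __GFP_MEMALLOC);
	if (!skb) {
		rx_emergency_put(size);	/* hand the reservation back */
		return NULL;
	}

	skb->emergency = 1;	/* later skb ops account against the reserve */
	return skb;
}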
> Teach all other skb ops about emergency skbs and the reserve accounting.
> 
> Use the (new) packet split API to allocate and track fragment pages from the
> emergency reserve. Do this using an atomic counter in page->index. This is
> needed because the fragments have a different sharing semantic than that
> indicated by skb_shinfo()->dataref. 
> 
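
As a usage sketch of that fragment accounting (the driver-side code here is
illustrative; only skb_get_page() and page->frag_count come from the helpers
quoted below), a packet-split RX path that keeps its own reference on the page
for reuse would attach a fragment roughly like this:

static void rx_attach_frag(struct sk_buff *skb, struct page *page,
			   int off, int len)
{
	/* The skb takes its own reference on the page; for emergency skbs
	 * skb_get_page() also bumps page->frag_count, since dataref cannot
	 * tell how many fragments still pin reserve memory. */
	skb_get_page(skb, page);
	skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, off, len);

	skb->len += len;
	skb->data_len += len;
	skb->truesize += len;
}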
> Note that the decision to distinguish between regular and emergency SKBs allows
> the accounting overhead to be limited to the latter kind.
> 
> ...
>
> +static inline void skb_get_page(struct sk_buff *skb, struct page *page)
> +{
> +	get_page(page);
> +	if (skb_emergency(skb))
> +		atomic_inc(&page->frag_count);
> +}
> +
> +static inline void skb_put_page(struct sk_buff *skb, struct page *page)
> +{
> +	if (skb_emergency(skb) && atomic_dec_and_test(&page->frag_count))
> +		rx_emergency_put(PAGE_SIZE);
> +	put_page(page);
> +}

I'm thinking we should do `#define slowcall inline' then use that in the future.
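
For comparison, the uninlined form of the same helpers (the bodies are just
the ones quoted above moved out of line; purely illustrative):

/* include/linux/skbuff.h */
extern void skb_get_page(struct sk_buff *skb, struct page *page);
extern void skb_put_page(struct sk_buff *skb, struct page *page);

/* net/core/skbuff.c */
void skb_get_page(struct sk_buff *skb, struct page *page)
{
	get_page(page);
	if (skb_emergency(skb))
		atomic_inc(&page->frag_count);
}

void skb_put_page(struct sk_buff *skb, struct page *page)
{
	if (skb_emergency(skb) && atomic_dec_and_test(&page->frag_count))
		rx_emergency_put(PAGE_SIZE);
	put_page(page);
}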

>  static void skb_release_data(struct sk_buff *skb)
>  {
>  	if (!skb->cloned ||
>  	    !atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
>  			       &skb_shinfo(skb)->dataref)) {
> +		int size;
> +
> +#ifdef NET_SKBUFF_DATA_USES_OFFSET
> +		size = skb->end;
> +#else
> +		size = skb->end - skb->head;
> +#endif

The patch adds rather a lot of ifdefs.
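
FWIW, pairs like the one above could be folded into a single helper next to
the existing NET_SKBUFF_DATA_USES_OFFSET definitions (the helper name here is
illustrative, it is not in the patch):

static inline unsigned int skb_end_offset(const struct sk_buff *skb)
{
#ifdef NET_SKBUFF_DATA_USES_OFFSET
	return skb->end;
#else
	return skb->end - skb->head;
#endif
}

which would let skb_release_data() and friends just do
size = skb_end_offset(skb);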


