Message-ID: <5800c1cc0703241649q27157d9aid3d9d51b52efbf65@mail.gmail.com>
Date:	Sun, 25 Mar 2007 07:49:20 +0800
From:	"Bin Chen" <binary.chen@...il.com>
To:	linux-kernel@...r.kernel.org
Subject: kmem_cache_create loop to find the proper gfporder

I have some doubts about the loop that finds the gfporder of a cache. For
the code below, its main purpose is to find a gfporder value that keeps
the internal fragmentation below 1/8 of the total slab size. It does this
by increasing gfporder from a low value to a high one (possibly 0 up to
MAX_GFP_ORDER). But why can increasing the gfporder (and therefore the
slab size) decrease the internal fragmentation?

A simple example: suppose the slab management structures are kept
off-slab. If the gfporder is zero and the object size is 1000, the wasted
space is 4096 mod 1000 = 96; but with 4096 * 2 (gfporder increased by 1),
the wasted space is 8192 mod 1000 = 192, and 192 > 96.
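A minimal user-space sketch of that arithmetic (assuming the off-slab
case, so objects per slab is simply slab_size / obj_size and the leftover
is slab_size % obj_size; the 4096-byte page size and the object size of
1000 are just the numbers from the example above, not values read from
the kernel):

#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;	/* example value, standing in for PAGE_SIZE */
	const unsigned long obj_size = 1000;	/* object size from the example */
	unsigned int order;

	for (order = 0; order <= 1; order++) {
		unsigned long slab_size = page_size << order;
		unsigned long num = slab_size / obj_size;	/* objects per slab (off-slab case) */
		unsigned long left_over = slab_size % obj_size;	/* wasted bytes */

		printf("order %u: slab %lu, %lu objs, %lu bytes wasted\n",
		       order, slab_size, num, left_over);
	}
	return 0;	/* prints 96 wasted bytes for order 0 and 192 for order 1 */
}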

Is it right?

By the way, is gfporder 0 the first time around? Who initializes it in
cache_cache?

        /* Cal size (in pages) of slabs, and the num of objs per slab.
         * This could be made much more intelligent.  For now, try to avoid
         * using high page-orders for slabs.  When the gfp() funcs are more
         * friendly towards high-order requests, this should be changed.
         */
        do {
                unsigned int break_flag = 0;
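                /* Note: break_flag is set further down when gfporder has to be
                 * reduced for the off-slab case; after one more estimate the
                 * loop then exits via the break just below. */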
cal_wastage:
                kmem_cache_estimate(cachep->gfporder, size, flags,
                                                &left_over, &cachep->num);
                if (break_flag)
                        break;
                if (cachep->gfporder >= MAX_GFP_ORDER)
                        break;
                if (!cachep->num)
                        goto next;
                if (flags & CFLGS_OFF_SLAB && cachep->num > offslab_limit) {
                        /* Oops, this num of objs will cause problems. */
                        cachep->gfporder--;
                        break_flag++;
                        goto cal_wastage;
                }

                /*
                 * Large num of objs is good, but v. large slabs are currently
                 * bad for the gfp()s.
                 */
                if (cachep->gfporder >= slab_break_gfp_order)
                        break;

                if ((left_over*8) <= (PAGE_SIZE<<cachep->gfporder))
                        break;  /* Acceptable internal fragmentation. */
next:
                cachep->gfporder++;
        } while (1);
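For reference, here is a simplified user-space sketch of the same search
(a rough model only: it assumes the off-slab case, so num = slab_size / size
and left_over = slab_size % size; PAGE_SIZE_SKETCH, MAX_ORDER_SKETCH and
BREAK_ORDER_SKETCH are made-up stand-ins for PAGE_SIZE, MAX_GFP_ORDER and
slab_break_gfp_order, and the offslab_limit check is omitted):

#include <stdio.h>

#define PAGE_SIZE_SKETCH   4096UL  /* stand-in for PAGE_SIZE */
#define MAX_ORDER_SKETCH   10      /* stand-in for MAX_GFP_ORDER */
#define BREAK_ORDER_SKETCH 1       /* stand-in for slab_break_gfp_order */

/* Pick the smallest order whose waste is <= 1/8 of the slab,
 * stopping early at BREAK_ORDER_SKETCH, as the quoted loop does. */
static unsigned int pick_order(unsigned long size)
{
	unsigned int order = 0;

	for (;;) {
		unsigned long slab_size = PAGE_SIZE_SKETCH << order;
		unsigned long num = slab_size / size;
		unsigned long left_over = slab_size % size;

		if (order >= MAX_ORDER_SKETCH)
			break;
		if (!num) {		/* object does not fit in this slab yet */
			order++;
			continue;
		}
		if (order >= BREAK_ORDER_SKETCH)
			break;		/* avoid large orders */
		if (left_over * 8 <= slab_size)
			break;		/* acceptable internal fragmentation */
		order++;
	}
	return order;
}

int main(void)
{
	printf("size 1000 -> order %u\n", pick_order(1000));
	printf("size 3000 -> order %u\n", pick_order(3000));
	return 0;
}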
