Message-ID: <f6856be8-54b7-0fa0-1d17-39632bf29ada@oracle.com>
Date: Mon, 18 Oct 2021 11:21:22 +0200
From: Vegard Nossum <vegard.nossum@...cle.com>
To: Masahiro Yamada <masahiroy@...nel.org>,
Sam Ravnborg <sam@...nborg.org>
Cc: Linux Kbuild mailing list <linux-kbuild@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Arnd Bergmann <arnd@...db.de>,
Randy Dunlap <rdunlap@...radead.org>,
"Luis R . Rodriguez" <mcgrof@...nel.org>
Subject: Kconfig issue with LOG_CPU_MAX_BUF_SHIFT + BASE_SMALL

Hi,

I came across a bit of an oddity in the Kconfig files, specifically the
option for LOG_CPU_MAX_BUF_SHIFT:

config LOG_CPU_MAX_BUF_SHIFT
        int "CPU kernel log buffer size contribution (13 => 8 KB, 17 => 128KB)"
        depends on SMP
        range 0 21
        default 12 if !BASE_SMALL
        default 0 if BASE_SMALL
        depends on PRINTK

If you look at BASE_SMALL, this is actually an int variable:

config BASE_SMALL
        int
        default 0 if BASE_FULL
        default 1 if !BASE_FULL

Therefore, my theory is that the "default 12 if !BASE_SMALL" line is
_always_ used, because ! does not really work for int symbols.

To test my theory, I created this near-minimal Kconfig example:

$ cat Kconfig.BASE
config BASE_FULL
        default y
        bool "Enable full-sized data structures for core"

config BASE_SMALL
        int
        default 0 if BASE_FULL
        default 1 if !BASE_FULL

config LOG_CPU_MAX_BUF_SHIFT
        int "CPU kernel log buffer size contribution (13 => 8 KB, 17 => 128KB)"
        range 0 21
        default 12 if !BASE_SMALL
        default 0 if BASE_SMALL

You will notice that the default for LOG_CPU_MAX_BUF_SHIFT is 12
regardless of whether I choose 'y' or 'n' for BASE_FULL:

$ rm -rf .config; scripts/kconfig/conf Kconfig.BASE
*
* Main menu
*
Enable full-sized data structures for core (BASE_FULL) [Y/n/?] (NEW) y
CPU kernel log buffer size contribution (13 => 8 KB, 17 => 128KB)
(LOG_CPU_MAX_BUF_SHIFT) [12] (NEW)

and

$ rm -rf .config; scripts/kconfig/conf Kconfig.BASE
*
* Main menu
*
Enable full-sized data structures for core (BASE_FULL) [Y/n/?] (NEW) n
CPU kernel log buffer size contribution (13 => 8 KB, 17 => 128KB)
(LOG_CPU_MAX_BUF_SHIFT) [12] (NEW)

In the code, I think the problem would be that sym_calc_value() on an
S_INT symbol only sets sym->curr.val and not sym->curr.tri:

expr_calc_value(!BASE_SMALL)
- E_NOT:
  - expr_calc_value(BASE_SMALL)
    - E_SYMBOL:
      - sym_calc_value(BASE_SMALL)
      - return BASE_SMALL->curr.tri  // this is 0/n regardless of
                                     // BASE_SMALL's string value!

I could be wrong here -- but that's how it looks to me.
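
To make this concrete, here is a small self-contained toy model of what
I think is going on. This is NOT the real scripts/kconfig code -- the
struct and function names below are made up for illustration:

/*
 * Toy model: every symbol carries both a string half and a tristate
 * half, and '!' only ever looks at the tristate half.
 */
#include <stdio.h>

typedef enum { no, mod, yes } tristate;

struct symbol_value {
        const char *val;        /* string half, set for int/hex/string symbols */
        tristate tri;           /* tristate half, set for bool/tristate symbols */
};

/* BASE_SMALL after choosing BASE_FULL=n: the string half is "1", but
 * the tristate half is never touched and stays 'no'. */
static const struct symbol_value base_small = { .val = "1", .tri = no };

/* Model of the E_NOT -> E_SYMBOL path above. */
static tristate eval_not_symbol(const struct symbol_value *sym)
{
        return sym->tri == no ? yes : no;
}

int main(void)
{
        /* Prints 2 (yes) even though the string value is "1", which is
         * why "default 12 if !BASE_SMALL" always wins. */
        printf("!BASE_SMALL = %d, curr.val = \"%s\"\n",
               eval_not_symbol(&base_small), base_small.val);
        return 0;
}
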
The most straightforward fix seems to be to change LOG_CPU_MAX_BUF_SHIFT
in init/Kconfig to say:

        default 12 if BASE_SMALL=0
        default 0 if BASE_SMALL!=0

I've tested this and it works -- 'conf' chooses the right default. Or we
could use BASE_FULL directly, i.e.:

        default 12 if BASE_FULL
        default 0 if !BASE_FULL

In fact, I'm not sure what the point of BASE_SMALL is in the first
place. Should we get rid of it altogether? (git blame says it's been
there since the initial commit.)

Another option would be to change how !SYM is evaluated if SYM is a
string/hex/int, which would maybe save us from other similar mistakes
(whether now or in the future).
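
Just to illustrate what I mean, here is a standalone sketch (not a
patch against scripts/kconfig, and the helper name is made up) of
deriving a tristate from a non-boolean symbol's string value:

#include <stdio.h>
#include <stdlib.h>

typedef enum { no, mod, yes } tristate;

enum symbol_type { S_BOOLEAN, S_TRISTATE, S_INT, S_HEX, S_STRING };

struct symbol {
        enum symbol_type type;
        struct { const char *val; tristate tri; } curr;
};

/*
 * Hypothetical rule for using a symbol in a boolean context (e.g.
 * under '!'): for int/hex/string symbols, derive the tristate from
 * the string value instead of from curr.tri, which is never set for
 * those types.
 */
static tristate sym_bool_value(const struct symbol *sym)
{
        switch (sym->type) {
        case S_INT:
        case S_HEX:
                return strtoll(sym->curr.val, NULL, 0) != 0 ? yes : no;
        case S_STRING:
                return sym->curr.val[0] != '\0' ? yes : no;
        default:
                return sym->curr.tri;
        }
}

int main(void)
{
        struct symbol base_small = { S_INT, { "1", no } };

        /* With this rule BASE_SMALL=1 would count as 'yes', so
         * '!BASE_SMALL' would evaluate to 'no' as intended. */
        printf("bool(BASE_SMALL) = %d\n", sym_bool_value(&base_small));
        return 0;
}
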
Vegard