Hi there,
I'm trying to start auto-cpufreq at startup with init.d, but the --daemon option makes it harder than expected. Here is the script (copied from the cron startup script):
#!/bin/bash
# Start/stop auto-cpufreq daemon.
#
### BEGIN INIT INFO
# Provides: auto-cpufreq
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: Regular background program processing daemon
### END INIT INFO
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
DESC="auto-cpufreq daemon"
NAME=auto-cpufreqd
DAEMON=/usr/local/bin/auto-cpufreq --daemon
PIDFILE=/var/run/auto-cpufreq.pid
SCRIPTNAME=/etc/init.d/"$NAME"
test -f "$DAEMON" || exit 0 && echo "exit 0"
. /lib/lsb/init-functions
case "$1" in
start) log_daemon_msg "Starting auto-cpufreq daemon"
start_daemon -p $PIDFILE $DAEMON
log_end_msg $?
;;
stop) log_daemon_msg "Stopping auto-cpufreq daemon"
killproc -p $PIDFILE $DAEMON
RETVAL=$?
[ $RETVAL -eq 0 ] && [ -e "$PIDFILE" ] && rm -f $PIDFILE
log_end_msg $RETVAL
;;
status)
status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $?
;;
*) log_action_msg "Usage: /etc/init.d/auto-cpufreq {start|stop|status}"
exit 2
;;
esac
exit 0
auto-cpufreq is executable and is at /usr/local/bin/.
It looks like it would start without the --daemon option. Sorry for the glaring mistakes that must be in this script!
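From reading /lib/lsb/init-functions, I think the usual pattern keeps the executable path and its arguments separate, so that test and start_daemon see a real path. Maybe something like this would behave better (untested sketch):
# untested sketch: DAEMON holds only the path, arguments go in DAEMON_ARGS
DAEMON=/usr/local/bin/auto-cpufreq
DAEMON_ARGS="--daemon"
test -x "$DAEMON" || exit 0
...
start) log_daemon_msg "Starting auto-cpufreq daemon"
       start_daemon -p "$PIDFILE" "$DAEMON" $DAEMON_ARGS
       log_end_msg $?
       ;;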
Last edited by unixuser (2026-01-30 12:04:00)
Offline
yeh, auto-cpufreq has one of the designs of all time... that, plus it requiring python for something as simple as writing to a handful of locations in the kernel sysfs (among other reasons), got me to write an alternative. it is tried and tested on devuan under sysvinit and elogind; i have not tried even more minimal setups, like ones lacking elogind, so i cannot comment there. i know it isn't what you want, but i'm putting it out here since i was also unable to get the auto-cpufreq daemon properly running on devuan
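to give an idea of how simple the job is, it mostly boils down to sysfs writes like these (cpu0 shown for illustration, exact paths depend on the cpufreq driver):
# illustrative: the sysfs knobs this kind of daemon reads and writes
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 0 > /sys/devices/system/cpu/cpufreq/boost   # acpi-cpufreq only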
Offline
Thanks EDX, I didn't know about afreq; it looks lighter, you did good work. I think I'll use it instead... I'm still using elogind by default anyway.
If anyone has been able to start auto-cpufreq, I'd like to know.
Offline
auto-cpufreq at startup
Have you tried linux-cpupower from the repo? It's a CPU scaling tool and starts at boot. Has a conf file in /etc to set governors and/or min/max frequency.
Offline
@EDX, I had to install powermgmt-base as a dependency. Your program looks effective, but it uses 2 Gi of RAM Oo I don't know what causes it yet.
@fanderal, thanks, I'll check it out
Offline
yes, powermgmt-base is a dependency, tho the afreq.sh repo bundles a copy of on_ac_power for those who don't want to install powermgmt-base, or for distros that don't ship that package. as for the memory usage, that is odd; it really shouldn't use anywhere close to that amount, it is a shell script after all. how did you get that number for memory consumption? if you check with btop, what does it say? i'm not at home rn so i cannot check the memory usage on my personal machine.
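for context, on_ac_power is just an exit-status check, the powermgmt-base convention being 0 = on mains power, 1 = not on mains, 255 = unknown:
# usage sketch: branch on the on_ac_power exit status
if on_ac_power; then
    echo "on AC, can afford a faster governor"
else
    echo "on battery, powersave it is"
fi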
Offline
I can't understand why it uses that much. Well, I checked with fastfetch, free and i3status. htop shows 1960 MiB for afreq
Offline
that is extremely odd
you can get the PID of the running afreq instance by running cat /var/run/afreq/status
with vmrss https://github.com/ThePrimeagen/vmrss it says that afreq is using 1.88281 MB of memory
with btop tree view, on the sleep part of the cycle, it shows afreq using 1.8 MB while the busybox usleep child also consumes 1.8 MB (yes, it prefers busybox usleep when available, as that is more reliable than hoping sleep supports decimals; the system could be using a sleep implementation other than gnu sleep). the busybox usleep program runs every 500 milliseconds, so at about 2 MB per invocation that is about 4 MB each second (even tho those are 2 different invocations). so if a program samples memory usage every X seconds it may conclude that afreq is using 6 MB times X seconds; say 5 seconds, that ought to sum up to 30 MB. tho i dunno if any system monitoring program measures that way, unless this is some shenanigans with the caches...
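the sleep preference is roughly this (illustrative, not the exact afreq code):
# illustrative, not the exact afreq code: busybox usleep takes
# microseconds, so no decimal-seconds support is needed from sleep
if command -v busybox >/dev/null 2>&1; then
    busybox usleep 500000   # 500 ms
else
    sleep 1                 # plain POSIX sleep only guarantees whole seconds
fi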

during tick, the programs vmstat, tail, awk and others are invoked; they complete so fast that btop could only register vmstat, tail and awk. in the tick step afreq would have a memory footprint of 12 MB (i'm ceiling the sum)

looking at htop, the resource-usage numbers for afreq are similarly tiny, and as far as i know those are in kilobytes

so i am at a loss as to how afreq could balloon all the way to 1960 megabytes on your machine
Offline
this is weird, because with afreq on:
user@~ >>> free -h
total used free shared buff/cache available
Mem: 14Gi 3.1Gi 10Gi 12Mi 635Mi 11Gi
Swap: 11Gi 0B 11Gi
without afreq:
user@~ >>> doas pkill afreq
doas (user@devx) password:
user@~ >>> free -h
total used free shared buff/cache available
Mem: 14Gi 1.1Gi 12Gi 15Mi 638Mi 13Gi
Swap: 11Gi 0B 11Gi
but it is really using 1.8 MiB:
user@~ >>> ps aux | grep afreq
root 2075 0.1 0.0 2780 1884 ? S 11:24 0:03 /bin/sh /usr/local/sbin/afreq
user 2625 0.0 0.0 4068 2092 pts/2 S+ 11:54 0:00 grep afreq
Edit: yeah, with btop tree view I get almost the same numbers as yours. So the problem isn't afreq, BUT the memory increases only when it's running... Still working on it lol
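I also summed the RSS of every process to compare against free's "used" column, and the total is nowhere near the 2 Gi gap:
# total RSS (KiB) over all processes, printed in MiB
ps -eo rss= | awk '{ s += $1 } END { printf "%.0f MiB\n", s/1024 }'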
Last edited by unixuser (2026-01-31 12:53:16)
Offline
did ya figure it out?
Offline
not yet, but I suspect a memory leak from something in afreq
user@~ >>> vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
r b swpd free buff cache si so bi bo in cs us sy id wa st gu
2 0 0 12241312 32060 246648 0 0 3363 46 5112 1 0 1 99 0 0 0
user@~ >>> doas slabtop
doas (user@devx) password:
user@~ >>> doas pkill afreq
doas (user@devx) password:
user@~ >>> vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
r b swpd free buff cache si so bi bo in cs us sy id wa st gu
1 0 0 14329644 32228 246996 0 0 1168 17 2130 0 0 1 99 0 0 0
detailed output of pmap:
2081: /bin/sh /usr/local/sbin/afreq
Address Perm Offset Device Inode Size Rss Pss Pss_Dirty Referenced Anonymous KSM LazyFree ShmemPmdMapped FilePmdMapped Shared_Hugetlb Private_Hugetlb Swap SwapPss Locked THPeligible ProtectionKey Mapping
55b965845000 r--p 00000000 103:02 7733880 16 16 4 0 16 0 0 0 0 0 0 0 0 0 0 0 0 dash
55b965849000 r-xp 00004000 103:02 7733880 80 80 20 0 80 0 0 0 0 0 0 0 0 0 0 0 0 dash
55b96585d000 r--p 00018000 103:02 7733880 24 24 6 0 24 0 0 0 0 0 0 0 0 0 0 0 0 dash
55b965863000 r--p 0001d000 103:02 7733880 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0 dash
55b965865000 rw-p 0001f000 103:02 7733880 4 4 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 dash
55b965866000 rw-p 00000000 00:00 0 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0
55b99787a000 rw-p 00000000 00:00 0 236 116 116 116 116 116 0 0 0 0 0 0 0 0 0 0 0 [heap]
7fa96db3a000 rw-p 00000000 00:00 0 12 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0
7fa96db3d000 r--p 00000000 103:02 7748394 160 160 6 0 160 0 0 0 0 0 0 0 0 0 0 0 0 libc.so.6
7fa96db65000 r-xp 00028000 103:02 7748394 1424 1168 39 0 1168 0 0 0 0 0 0 0 0 0 0 0 0 libc.so.6
7fa96dcc9000 r--p 0018c000 103:02 7748394 344 156 4 0 156 0 0 0 0 0 0 0 0 0 0 0 0 libc.so.6
7fa96dd1f000 r--p 001e1000 103:02 7748394 16 16 16 16 16 16 0 0 0 0 0 0 0 0 0 0 0 libc.so.6
7fa96dd23000 rw-p 001e5000 103:02 7748394 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0 libc.so.6
7fa96dd25000 rw-p 00000000 00:00 0 52 20 20 20 20 20 0 0 0 0 0 0 0 0 0 0 0
7fa96dd40000 rw-p 00000000 00:00 0 8 4 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0
7fa96dd42000 r--p 00000000 00:00 0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 [vvar]
7fa96dd46000 r-xp 00000000 00:00 0 8 8 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 [vdso]
7fa96dd48000 r--p 00000000 103:02 7748391 4 4 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 ld-linux-x86-64.so.2
7fa96dd49000 r-xp 00001000 103:02 7748391 160 156 5 0 156 0 0 0 0 0 0 0 0 0 0 0 0 ld-linux-x86-64.so.2
7fa96dd71000 r--p 00029000 103:02 7748391 44 44 1 0 44 0 0 0 0 0 0 0 0 0 0 0 0 ld-linux-x86-64.so.2
7fa96dd7c000 r--p 00034000 103:02 7748391 8 8 8 8 8 8 0 0 0 0 0 0 0 0 0 0 0 ld-linux-x86-64.so.2
7fa96dd7e000 rw-p 00036000 103:02 7748391 4 4 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 ld-linux-x86-64.so.2
7fa96dd7f000 rw-p 00000000 00:00 0 4 4 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0
7fffdd501000 rw-p 00000000 00:00 0 132 12 12 12 12 12 0 0 0 0 0 0 0 0 0 0 0 [stack]
==== ==== === ========= ========== ========= === ======== ============== ============= ============== =============== ==== ======= ====== =========== =============
2780 2036 305 220 2036 220 0 0 0 0 0 0 0 0 0 0 0 KB
Nothing suspicious ?
rofl
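For the record, the vmstat "free" columns above do account for the gap:
# vmstat "free" (KiB), after pkill minus before:
# 14329644 - 12241312 = 2088332 KiB, about 2.0 GiB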
Last edited by unixuser (2026-02-02 14:14:06)
Offline
did my own digging: it is the buffers and caches getting filled by the instances of busybox usleep. i didn't notice earlier because i also use zram (through https://github.com/eylles/zram-service) with zstd, so i don't really think about ram usage; all data that goes onto zram is effectively compressed to about 1/4 the size (on average about 1/3rd the ram consumption).
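one way to confirm it is reclaimable cache and not a leak is to drop the caches and watch free recover (needs root; harmless apart from a cold cache):
# flush dirty pages, then drop pagecache + dentries + inodes
sync
echo 3 | doas tee /proc/sys/vm/drop_caches
free -h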
Offline
Nice one. Do you know how to avoid that, without using zram?
Offline
i have no idea how to avoid that tbh. what i did is reduce the rate at which the cache buffers fill by implementing dynamic polling, sort of: the current master commit of afreq.sh increases the sleep time from a fixed 500ms up to 5000ms (5 seconds) if the governor and boost stay stable (say, at idle) for at least 5 ticks (5 runs of the tick function). it also reduces the sleep time by 100ms whenever the state is not stable, and keeps doing so until each sleep is just 100ms, with the time going back up once there are 5 stable states.
in my testing the buffers usage went down by 1GB, but i've got other daemons that are also written in posix shell and operate in the same type of wait/sleep cycle...
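the loop is shaped roughly like this (illustrative only; tick and state_unchanged are made-up names standing in for the real functions, and the 100ms back-off increment is a guess, only the decrement is fixed at 100ms):
# illustrative sketch of the dynamic polling, not the actual afreq code
sleep_ms=500
stable=0
while :; do
    tick                        # apply governor/boost as needed
    if state_unchanged; then
        stable=$((stable + 1))
        # after 5 stable ticks start backing off, capped at 5000 ms
        if [ "$stable" -ge 5 ] && [ "$sleep_ms" -lt 5000 ]; then
            sleep_ms=$((sleep_ms + 100))
        fi
    else
        stable=0
        # activity: poll faster again, floor at 100 ms
        if [ "$sleep_ms" -gt 100 ]; then
            sleep_ms=$((sleep_ms - 100))
        fi
    fi
    busybox usleep $((sleep_ms * 1000))
done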
Offline
ok, I'll test with the fix, thank you EDX.
I'm marking this topic as RESOLVED
Offline