Hello!
I'd like to run Ceph on Devuan Daedalus, but I have some issues with the cephadm tool, which expects systemd units to be installed. What is the recommended way to run Ceph on Devuan? Is it possible? I didn't find any information about other users' experience.
Thank you in advance.
Kind regards,
Elena
Not an answer to your specific question, perhaps, but still interesting for the larger community:
https://docs.ceph.com/en/latest/start/os-recommendations/
The above claims Ceph can use sysvinit or systemd.
The below says systemd is required for cephadm:
https://docs.ceph.com/en/latest/cephadm/install/
Hello:
... interesting for the larger community
Indeed.
https://docs.ceph.com/en/latest/start/os-recommendations/
the above claims ceph can use sysvinit or systemd
Does it?
Or is it open to interpretation?
... any distribution that includes a supported kernel and supported system startup framework ...
Does Daedalus actually fit into that definition?
ie: is sysvinit a supported system startup framework for Debian?
the below says systemd is required for cephadm
https://docs.ceph.com/en/latest/cephadm/install/
Yes.
BUT at the top of the page it has a banner that reads:
This document is for a development version of Ceph.
That said, maybe it just shows, as with many other packages, the road to be taken following sysvinit's demise?
Also see https://pkginfo.devuan.org/cgi-bin/pack … .2.11+ds-2
Package: cephadm
Version: 16.2.11+ds-2
--- snip ---
Depends:
adduser, lvm2, python3:any
--- snip ---
Description-en:
utility to bootstrap ceph daemons with systemd and containers
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
The cephadm utility is used to bootstrap a Ceph cluster and to manage
ceph daemons deployed with systemd and containers.
According to the Devuan Package information page, there is no systemd dependency in that package.
Maybe there is some detail that our (overworked) Devuan maintainers missed.
Best,
A.
On my daedalus system, aptitude says it will install ceph and cephadm. How did you determine that it requires systemd?
If apt or apt-get won't let you install it, please post the terminal output so we can see what's wrong. Also, if you try it with aptitude, you might be given some alternate solutions.
apt show ceph*
Will give you information about every package whose name starts with ceph. None of them depend on systemd. One thing I'm not clear on is whether Docker containers work in Devuan.
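If you want to double-check from the package metadata itself (a quick sketch; exact output depends on your configured sources), grep the dependency fields:

# apt-cache depends ceph cephadm | grep -i systemd
# apt-cache show cephadm | grep -i '^depends'

Neither should print a systemd dependency if the metadata matches what the Devuan package page shows.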
cephadm requires systemctl
root@devuan:~# cephadm bootstrap --mon-ip 192.168.1.74
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
unable to run systemctl: [Errno 2] No such file or directory: 'systemctl'
No time sync service is running; checked for ['chrony.service', 'chronyd.service', 'systemd-timesyncd.service', 'ntpd.service', 'ntp.service', 'ntpsec.service', 'openntpd.service']
ERROR: Distro devuan version 5 not supported
Then I installed the systemctl package and added --verbose for more informative output.
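(For reference, the shim here is the systemctl package from the Daedalus repository, installed with

# apt install systemctl

which puts a Python script at /usr/bin/systemctl that emulates part of the real systemctl interface; that is the path showing up in the log below.)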
# cephadm --verbose bootstrap --mon-ip 192.168.1.74
--------------------------------------------------------------------------------
cephadm ['--verbose', 'bootstrap', '--mon-ip', '192.168.1.74']
/usr/bin/podman: 4.3.1
Verifying podman|docker is present...
/usr/bin/podman: 4.3.1
Verifying lvm2 is present...
Verifying time synchronization is in place...
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled chrony.service --system
systemctl: enabled
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active chrony.service --system
systemctl: inactive
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled chronyd.service --system
systemctl: enabled
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active chronyd.service --system
systemctl: inactive
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled systemd-timesyncd.service --system
systemctl: ERROR:systemctl:Unit systemd-timesyncd.service could not be found.
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active systemd-timesyncd.service --system
systemctl: ERROR:systemctl:Unit systemd-timesyncd.service could not be found.
systemctl: unknown
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled ntpd.service --system
systemctl: ERROR:systemctl:Unit ntpd.service could not be found.
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active ntpd.service --system
systemctl: ERROR:systemctl:Unit ntpd.service could not be found.
systemctl: unknown
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled ntp.service --system
systemctl: ERROR:systemctl:Unit ntp.service could not be found.
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active ntp.service --system
systemctl: ERROR:systemctl:Unit ntp.service could not be found.
systemctl: unknown
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled ntpsec.service --system
systemctl: ERROR:systemctl:Unit ntpsec.service could not be found.
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active ntpsec.service --system
systemctl: ERROR:systemctl:Unit ntpsec.service could not be found.
systemctl: unknown
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled openntpd.service --system
systemctl: ERROR:systemctl:Unit openntpd.service could not be found.
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active openntpd.service --system
systemctl: ERROR:systemctl:Unit openntpd.service could not be found.
systemctl: unknown
No time sync service is running; checked for ['chrony.service', 'chronyd.service', 'systemd-timesyncd.service', 'ntpd.service', 'ntp.service', 'ntpsec.service', 'openntpd.service']
Traceback (most recent call last):
File "/usr/sbin/cephadm", line 9248, in <module>
main()
File "/usr/sbin/cephadm", line 9236, in main
r = ctx.func(ctx)
^^^^^^^^^^^^^
File "/usr/sbin/cephadm", line 1990, in _default_image
return func(ctx)
^^^^^^^^^
File "/usr/sbin/cephadm", line 4691, in command_bootstrap
command_prepare_host(ctx)
File "/usr/sbin/cephadm", line 6482, in command_prepare_host
pkg = create_packager(ctx)
^^^^^^^^^^^^^^^^^^^^
File "/usr/sbin/cephadm", line 7031, in create_packager
raise Error('Distro %s version %s not supported' % (distro, distro_version))
Error: Distro devuan version 5 not supported
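As far as I can tell, the "Distro devuan version 5 not supported" error comes from cephadm's packager selection (the create_packager call in the traceback), which only recognises a fixed list of distro IDs taken from /etc/os-release. You can see what it picks up with:

# grep -E '^(ID|ID_LIKE|VERSION_ID)=' /etc/os-release

On Daedalus this reports ID=devuan and VERSION_ID "5", which matches the error message and is not one of the IDs cephadm knows about.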
Trying to run chrony:
# systemctl is-enabled chrony
INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled chrony --system
enabled
# systemctl start chrony
INFO:systemctl:EXEC BEGIN /usr/bin/systemctl start chrony --system
INFO:systemctl:system is offline
ERROR:systemctl: chrony.service: Executable path is not absolute, ignoring: !/usr/sbin/chronyd $DAEMON_OPTS
ERROR:systemctl: Exec is not an absolute path: ExecStart=!/usr/sbin/chronyd $DAEMON_OPTS
ERROR:systemctl: Exec command does not exist: (ExecStart) !/usr/sbin/chronyd
ERROR:systemctl: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERROR:systemctl: Found 2 problems in /lib/systemd/system/chrony.service
ERROR:systemctl: The SystemD commands must always be absolute paths by definition.
ERROR:systemctl: Earlier versions of systemctl.py did use a subshell thus using $PATH
ERROR:systemctl: however newer versions use execve just like the real SystemD daemon
ERROR:systemctl: so that your docker-only service scripts may start to fail suddenly.
ERROR:systemctl: Now 1 executable paths were not found in the current environment.
ERROR:systemctl: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
INFO:systemctl:forking start '!/usr/sbin/chronyd' '-F' '1'
INFO:systemctl:forking started PID 2960
INFO:systemctl:forking stopped PID 2960 (1) <->
WARNING:systemctl:forking start not active
# file /usr/sbin/chronyd
/usr/sbin/chronyd: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=5dd21164d789d39cf7d12b823edb32d83ccc56ce, for GNU/Linux 3.2.0, stripped
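Note: the leading "!" in ExecStart=!/usr/sbin/chronyd is a real systemd feature (a privilege prefix, not part of the path), and the systemctl shim apparently does not strip it, which is why it complains about a non-absolute executable. A possible, untested workaround sketch would be to give the shim a local copy of the unit without the prefix (assuming it prefers /etc/systemd/system over /lib/systemd/system like the real systemctl does):

# mkdir -p /etc/systemd/system
# sed 's|^ExecStart=!|ExecStart=|' /lib/systemd/system/chrony.service > /etc/systemd/system/chrony.service

I did not go down that route, though (see the next post).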
Continued from the previous post.
The systemctl tool couldn't start chrony, so I used the service tool instead:
# service chrony start
Starting time daemon: chronyd.
# systemctl is-active chrony
INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active chrony --system
active
Now chrony is running and systemctl detects it as active.
# cephadm --verbose bootstrap --mon-ip 192.168.1.74
--------------------------------------------------------------------------------
cephadm ['--verbose', 'bootstrap', '--mon-ip', '192.168.1.74']
/usr/bin/podman: 4.3.1
Verifying podman|docker is present...
/usr/bin/podman: 4.3.1
Verifying lvm2 is present...
Verifying time synchronization is in place...
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled chrony.service --system
systemctl: enabled
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active chrony.service --system
systemctl: active
Unit chrony.service is enabled and running
Repeating the final host check...
/usr/bin/podman: 4.3.1
podman (/usr/bin/podman) version 4.3.1 is present
systemctl is present
lvcreate is present
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-enabled chrony.service --system
systemctl: enabled
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl is-active chrony.service --system
systemctl: active
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: ea327e78-b46a-11ee-8449-0800274f25a2
Acquiring lock 139775760225552 on /run/cephadm/ea327e78-b46a-11ee-8449-0800274f25a2.lock
Lock 139775760225552 acquired on /run/cephadm/ea327e78-b46a-11ee-8449-0800274f25a2.lock
Verifying IP 192.168.1.74 port 3300 ...
Verifying IP 192.168.1.74 port 6789 ...
Base mon IP(s) is [192.168.1.74:3300, 192.168.1.74:6789], mon addrv is [v2:192.168.1.74:3300,v1:192.168.1.74:6789]
/sbin/ip: default via 192.168.1.254 dev eth0
/sbin/ip: 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.74
/sbin/ip: 2a00:1370:817a:3e06::/64 dev eth0 proto kernel metric 256 expires 583sec pref medium
/sbin/ip: fe80::/64 dev eth0 proto kernel metric 256 pref medium
/sbin/ip: default via fe80::1 dev eth0 proto ra metric 1024 expires 1783sec hoplimit 64 pref medium
/sbin/ip: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
/sbin/ip: inet6 ::1/128 scope host
/sbin/ip: valid_lft forever preferred_lft forever
/sbin/ip: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
/sbin/ip: inet6 2a00:1370:817a:3e06:a00:27ff:fe4f:25a2/64 scope global dynamic mngtmpaddr
/sbin/ip: valid_lft 584sec preferred_lft 584sec
/sbin/ip: inet6 fe80::a00:27ff:fe4f:25a2/64 scope link
/sbin/ip: valid_lft forever preferred_lft forever
Mon IP `192.168.1.74` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.74` is in CIDR network `192.168.1.0/24`
Inferred mon public CIDR from local network configuration ['192.168.1.0/24', '192.168.1.0/24']
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
/usr/bin/podman: Trying to pull quay.io/ceph/ceph:v16...
/usr/bin/podman: Getting image source signatures
/usr/bin/podman: Copying blob sha256:46af8f5390d4e94fc57efb422ccb97bb53dfe5b948546bfc191b46557eb2dbd9
/usr/bin/podman: Copying blob sha256:056fd520c9a2f3dab303754fcab1ad220173068f00c7482aed1b424649986c54
/usr/bin/podman: Copying config sha256:b22ed497323cdc87a2e07134a956da3876fb0806f726bed9bf9d209ca283d25b
/usr/bin/podman: Writing manifest to image destination
/usr/bin/podman: Storing signatures
/usr/bin/podman: b22ed497323cdc87a2e07134a956da3876fb0806f726bed9bf9d209ca283d25b
ceph: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Ceph version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
Extracting ceph user uid/gid from container image...
stat: 167 167
Creating initial keys...
/usr/bin/ceph-authtool: AQC7dqZlNMeyLxAAoNP6HsvJtphX0j4ao7+lLQ==
/usr/bin/ceph-authtool: AQC8dqZl/c4gEhAAkVA7xFXpZZhb0YtJETIhwQ==
/usr/bin/ceph-authtool: AQC8dqZl/VdzMRAAbvMovkr3dGRUtGXo1h8gYw==
Creating initial monmap...
/usr/bin/monmaptool: /usr/bin/monmaptool: monmap file /tmp/monmap
/usr/bin/monmaptool: /usr/bin/monmaptool: set fsid to ea327e78-b46a-11ee-8449-0800274f25a2
/usr/bin/monmaptool: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
monmaptool for devuan [v2:192.168.1.74:3300,v1:192.168.1.74:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
/usr/bin/monmaptool: set fsid to ea327e78-b46a-11ee-8449-0800274f25a2
/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Creating mon...
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.194+0000 7f52c726a880 0 set uid:gid to 167:167 (ceph:ceph)
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.202+0000 7f52c726a880 1 imported monmap:
/usr/bin/ceph-mon: epoch 0
/usr/bin/ceph-mon: fsid ea327e78-b46a-11ee-8449-0800274f25a2
/usr/bin/ceph-mon: last_changed 2024-01-16T12:29:49.433670+0000
/usr/bin/ceph-mon: created 2024-01-16T12:29:49.433670+0000
/usr/bin/ceph-mon: min_mon_release 0 (unknown)
/usr/bin/ceph-mon: election_strategy: 1
/usr/bin/ceph-mon: 0: [v2:192.168.1.74:3300/0,v1:192.168.1.74:6789/0] mon.devuan
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.206+0000 7f52c726a880 0 /usr/bin/ceph-mon: set fsid to ea327e78-b46a-11ee-8449-0800274f25a2
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: RocksDB version: 6.8.1
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Compile date Aug 29 2023
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: DB SUMMARY
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-devuan/store.db dir, Total Num: 0, files:
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-devuan/store.db:
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.error_if_exists: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.create_if_missing: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.paranoid_checks: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.env: 0x55c6115c1080
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.fs: Posix File System
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.info_log: 0x55c613979320
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.max_file_opening_threads: 16
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.statistics: (nil)
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.use_fsync: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.max_log_file_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.max_manifest_file_size: 1073741824
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.log_file_time_to_roll: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.keep_log_file_num: 1000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.226+0000 7f52c726a880 4 rocksdb: Options.recycle_log_file_num: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.allow_fallocate: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.allow_mmap_reads: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.allow_mmap_writes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.use_direct_reads: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.create_missing_column_families: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.db_log_dir:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-devuan/store.db
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.table_cache_numshardbits: 6
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.max_subcompactions: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.max_background_flushes: -1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.WAL_ttl_seconds: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.WAL_size_limit_MB: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.manifest_preallocation_size: 4194304
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.is_fd_close_on_exec: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.advise_random_on_open: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.db_write_buffer_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.write_buffer_manager: 0x55c613981890
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.access_hint_on_compaction_start: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.use_adaptive_mutex: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.rate_limiter: (nil)
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.wal_recovery_mode: 2
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.enable_thread_tracking: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.enable_pipelined_write: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.unordered_write: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.allow_concurrent_memtable_write: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.write_thread_max_yield_usec: 100
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.write_thread_slow_yield_usec: 3
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.row_cache: None
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.wal_filter: None
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.avoid_flush_during_recovery: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.230+0000 7f52c726a880 4 rocksdb: Options.allow_ingest_behind: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.preserve_deletes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.two_write_queues: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.manual_wal_flush: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.atomic_flush: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.persist_stats_to_disk: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.write_dbid_to_manifest: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.log_readahead_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.sst_file_checksum_func: Unknown
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.max_background_jobs: 2
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.max_background_compactions: -1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.avoid_flush_during_shutdown: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.delayed_write_rate : 16777216
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.max_total_wal_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.stats_dump_period_sec: 600
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.stats_persist_period_sec: 600
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.stats_history_buffer_size: 1048576
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.max_open_files: -1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.bytes_per_sync: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.wal_bytes_per_sync: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.strict_bytes_per_sync: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Options.compaction_readahead_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: Compression algorithms supported:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kZSTDNotFinalCompression supported: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kZSTD supported: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kXpressCompression supported: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kLZ4HCCompression supported: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kLZ4Compression supported: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.234+0000 7f52c726a880 4 rocksdb: kBZip2Compression supported: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.238+0000 7f52c726a880 4 rocksdb: kZlibCompression supported: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.238+0000 7f52c726a880 4 rocksdb: kSnappyCompression supported: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.238+0000 7f52c726a880 4 rocksdb: Fast CRC32 supported: Supported on x86
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.246+0000 7f52c726a880 4 rocksdb: [db_impl/db_impl_open.cc:273] Creating manifest 1
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: [version_set.cc:4413] Recovering from manifest file: /var/lib/ceph/mon/ceph-devuan/store.db/MANIFEST-000001
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: [column_family.cc:552] --------------- Options for column family [default]:
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.merge_operator:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.compaction_filter: None
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.compaction_filter_factory: None
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.memtable_factory: SkipListFactory
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.table_factory: BlockBasedTable
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c61388fd20)
/usr/bin/ceph-mon: cache_index_and_filter_blocks: 1
/usr/bin/ceph-mon: cache_index_and_filter_blocks_with_high_priority: 0
/usr/bin/ceph-mon: pin_l0_filter_and_index_blocks_in_cache: 0
/usr/bin/ceph-mon: pin_top_level_index_and_filter: 1
/usr/bin/ceph-mon: index_type: 0
/usr/bin/ceph-mon: data_block_index_type: 0
/usr/bin/ceph-mon: index_shortening: 1
/usr/bin/ceph-mon: data_block_hash_table_util_ratio: 0.750000
/usr/bin/ceph-mon: hash_index_allow_collision: 1
/usr/bin/ceph-mon: checksum: 1
/usr/bin/ceph-mon: no_block_cache: 0
/usr/bin/ceph-mon: block_cache: 0x55c6138c6d10
/usr/bin/ceph-mon: block_cache_name: BinnedLRUCache
/usr/bin/ceph-mon: block_cache_options:
/usr/bin/ceph-mon: capacity : 536870912
/usr/bin/ceph-mon: num_shard_bits : 4
/usr/bin/ceph-mon: strict_capacity_limit : 0
/usr/bin/ceph-mon: high_pri_pool_ratio: 0.000
/usr/bin/ceph-mon: block_cache_compressed: (nil)
/usr/bin/ceph-mon: persistent_cache: (nil)
/usr/bin/ceph-mon: block_size: 4096
/usr/bin/ceph-mon: block_size_deviation: 10
/usr/bin/ceph-mon: block_restart_interval: 16
/usr/bin/ceph-mon: index_block_restart_interval: 1
/usr/bin/ceph-mon: metadata_block_size: 4096
/usr/bin/ceph-mon: partition_filters: 0
/usr/bin/ceph-mon: use_delta_encoding: 1
/usr/bin/ceph-mon: filter_policy: rocksdb.BuiltinBloomFilter
/usr/bin/ceph-mon: whole_key_filtering: 1
/usr/bin/ceph-mon: verify_compression: 0
/usr/bin/ceph-mon: read_amp_bytes_per_bit: 0
/usr/bin/ceph-mon: format_version: 2
/usr/bin/ceph-mon: enable_index_compression: 1
/usr/bin/ceph-mon: block_align: 0
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.write_buffer_size: 33554432
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.266+0000 7f52c726a880 4 rocksdb: Options.max_write_buffer_number: 2
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression: NoCompression
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression: Disabled
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.prefix_extractor: nullptr
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.num_levels: 7
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.level: 32767
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bottommost_compression_opts.enabled: false
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.window_bits: -14
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.level: 32767
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.strategy: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compression_opts.enabled: false
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.level0_stop_writes_trigger: 36
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.target_file_size_base: 67108864
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.target_file_size_multiplier: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_base: 268435456
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_compaction_bytes: 1677721600
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.arena_block_size: 4194304
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.disable_auto_compactions: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.table_properties_collectors:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.inplace_update_support: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.inplace_update_num_locks: 10000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.memtable_whole_key_filtering: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.memtable_huge_page_size: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.bloom_locality: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.max_successive_merges: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.optimize_filters_for_hits: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.paranoid_file_checks: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.force_consistency_checks: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.report_bg_io_stats: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.ttl: 2592000
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.270+0000 7f52c726a880 4 rocksdb: Options.periodic_compaction_seconds: 0
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.278+0000 7f52c726a880 4 rocksdb: [version_set.cc:4568] Recovered from manifest file:/var/lib/ceph/mon/ceph-devuan/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.278+0000 7f52c726a880 4 rocksdb: [version_set.cc:4577] Column family [default] (ID 0), log number is 0
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.290+0000 7f52c726a880 4 rocksdb: DB pointer 0x55c61398f800
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.290+0000 7f52b00f3700 4 rocksdb: [db_impl/db_impl.cc:850] ------- DUMPING STATS -------
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.290+0000 7f52b00f3700 4 rocksdb: [db_impl/db_impl.cc:851]
/usr/bin/ceph-mon: ** DB Stats **
/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
/usr/bin/ceph-mon: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
/usr/bin/ceph-mon: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
/usr/bin/ceph-mon: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
/usr/bin/ceph-mon: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
/usr/bin/ceph-mon: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
/usr/bin/ceph-mon: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** Compaction Stats [default] **
/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** Compaction Stats [default] **
/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** Compaction Stats [default] **
/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** Compaction Stats [default] **
/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
/usr/bin/ceph-mon:
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.318+0000 7f52c726a880 4 rocksdb: [db_impl/db_impl.cc:397] Shutdown: canceling all background work
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.318+0000 7f52c726a880 4 rocksdb: [db_impl/db_impl.cc:573] Shutdown complete
/usr/bin/ceph-mon: debug 2024-01-16T12:29:50.318+0000 7f52c726a880 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-devuan for mon.devuan
create mon.devuan on
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl enable ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target --system
systemctl: INFO:systemctl:matched ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target
systemctl: INFO:systemctl:system is offline
systemctl: INFO:systemctl:ln -s '/etc/systemd/system/ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target' '/etc/systemd/system/multi-user.target ceph.target.wants/ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target'
systemctl: INFO:systemctl:EXEC BEGIN /usr/bin/systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target --system
systemctl: INFO:systemctl:system is offline
systemctl: WARNING:systemctl:simple start not active
Non-zero exit code 1 from systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target
systemctl: stderr INFO:systemctl:EXEC BEGIN /usr/bin/systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target --system
systemctl: stderr INFO:systemctl:system is offline
systemctl: stderr WARNING:systemctl:simple start not active
Traceback (most recent call last):
File "/usr/sbin/cephadm", line 9248, in <module>
main()
File "/usr/sbin/cephadm", line 9236, in main
r = ctx.func(ctx)
^^^^^^^^^^^^^
File "/usr/sbin/cephadm", line 1990, in _default_image
return func(ctx)
^^^^^^^^^
File "/usr/sbin/cephadm", line 4753, in command_bootstrap
create_mon(ctx, uid, gid, fsid, mon_id)
File "/usr/sbin/cephadm", line 4259, in create_mon
deploy_daemon(ctx, fsid, 'mon', mon_id, mon_c, uid, gid,
File "/usr/sbin/cephadm", line 2951, in deploy_daemon
deploy_daemon_units(ctx, fsid, uid, gid, daemon_type, daemon_id,
File "/usr/sbin/cephadm", line 3179, in deploy_daemon_units
install_base_units(ctx, fsid)
File "/usr/sbin/cephadm", line 3428, in install_base_units
call_throws(ctx, ['systemctl', 'start', 'ceph-%s.target' % fsid])
File "/usr/sbin/cephadm", line 1658, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target
The failed command is:
# systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target
INFO:systemctl:EXEC BEGIN /usr/bin/systemctl start ceph-ea327e78-b46a-11ee-8449-0800274f25a2.target --system
INFO:systemctl:system is offline
WARNING:systemctl:simple start not active
What is systemctl in Devuan? Can it imitate systemd service management?
Hello:
What is systemctl in Devuan? Can it imitate systemd services management?
See the systemctl package description:
Description-en:
daemonless "systemctl" command to manage services without systemd
"systemctl" is a replacement command to control system daemons without systemd. "systemctl" is useful in application containers where systemd is not available to start/stop services. This script can also be run as init of an application container (i.e. the main "CMD" on PID 1), where it will automatically bring up all enabled services in the "multi-user.target" and where it will reap all zombies from background processes in the container. When stopping such a container it will also bring down all configured services correctly before exit.
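In other words, it re-implements the query/start interface that cephadm shells out to, e.g. the very calls visible in the logs above:

~$ systemctl is-enabled chrony.service
~$ systemctl is-active chrony.service
~$ systemctl start chrony.service

but it is a daemonless script, not a service manager, so how far that emulation carries a full Ceph deployment is another matter.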
Best,
A.
On my daedalus system, aptitude says it will install ceph and cephadm. How did you determine that it requires systemd?
cephadm attempted to run systemctl: first to check for a running time sync service, then to start the ceph daemons themselves.
I tried to use the systemctl tool from the systemctl package, but it gives an error.
Can anyone help me, please? Does ceph work on Devuan?
Hello:
I run my box on Devuan Beowulf with a backported kernel.
~$ uname -a
Linux devuan 5.10.0-0.deb10.16-amd64 #1 SMP Debian 5.10.127-2~bpo10+1 (2022-07-28) x86_64 GNU/Linux
~$
I have no docker containers installed.
~$ apt list | grep installed | grep -i docker
--- snip ---
~$
When I ask apt about ceph, systemctl and systemd, I get this:
~$ sudo apt install --dry-run ceph | grep -i "systemctl\|systemd"
--- snip ...
~$
As you can see from the grep, the terminal printout does not contain any systemctl or systemd strings.
Likewise, when I ask apt about docker, I get this:
~$ sudo apt install --dry-run docker | grep -i "systemctl\|systemd"
--- snip ---
~$
So much for my Devuan system.
But on your Devuan system you are getting this error from cephadm:
systemctl: ERROR:systemctl:Unit systemd-timesyncd.service could not be found.
And systemd-timesyncd.service seems to be a Debian systemd-specific package.
So why is ceph looking for a systemd-specific Debian package when installed in Devuan?
No idea, but I do not think it should be happening.
Q: did you install ceph from a Devuan repository?
Looking around for a clue, I found this thread at the Bunsen Labs forum:
... up to Debian Buster the functionality was included in systemd, so there's no need to install it until you get to Bullseye, where it got separated out.
There is more discussion on systemd-timesyncd.service further down.
It may or may not be relevant, but from what I can make of what I read, it would seem (?) that ceph needs a time sync daemon to work but is configured to look for it as a systemd service, which (quite obviously) Devuan does not have, hence the error message.
That said, could ceph use any other one? eg: ntp
Cannot really say as this is where I have reached my pay-grade ceiling.
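For the time sync part specifically, cephadm only seems to want one of the daemons it probes for (chrony, ntpd, ntpsec, openntpd, ...) to report as enabled and active, and your earlier post shows that starting chrony through its init script already satisfies that check. So, as a sketch (assuming chrony):

# apt install chrony
# service chrony start

with boot-time enabling handled by the package's init script as usual (update-rc.d chrony defaults, normally done for you at install time).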
Best,
A.
Q: did you install ceph from a Devuan repository?
Yes, it is from the Devuan repository. I installed ceph and cephadm using apt.
# cat /etc/apt/sources.list
deb http://deb.devuan.org/merged daedalus main
deb http://deb.devuan.org/merged daedalus-updates main
deb http://deb.devuan.org/merged daedalus-security main
Hello:
... from Devuan repository.
... installed ceph and cephadm using apt.
Right ...
Not the problem then. 8^)
Have a look here:
Ceph can run on any distribution that includes a supported kernel and supported system startup framework, for example sysvinit or systemd.
Now, if you look at the table below, you can see that (for Debian releases, labelled with a C) it states:
C: Ceph provides packages only. No tests have been done on these releases.
Devuan is not on that list and, like I mentioned earlier, our (highly) overworked maintainers may have skipped a beat somewhere.
eg: maybe the sysvinit files for the ceph packages in the Devuan repositories are missing something?
So ...
You may want to consider first filing a bug in Devuan against ceph and see what the maintainer has to say about this.
ie: if it is a Debian package or a sanitised Devuan package.
Send email to submit@bugs.devuan.org with a descriptive subject line and the first line of the body should be Package: <package name>.
Be sure to include a link to this thread.
Follow-up messages go to <number>@bugs.devuan.org
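A minimal sketch of such a report (subject and description are placeholders, of course):

To: submit@bugs.devuan.org
Subject: cephadm: bootstrap fails on Devuan daedalus without systemd

Package: cephadm
Version: 16.2.11+ds-2

cephadm bootstrap shells out to systemctl and then aborts with
"Distro devuan version 5 not supported"; full log at <link to this thread>.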
At this point in time, filing a bug in Debian by a Devuan user is guaranteed to be an exercise in frustration/futility, so the next best thing would be to ask at the ceph user's forum and see what they have to say.
Please let us know how you fared.
Best,
A.
Hello:
... if it is a Debian package or a sanitised Devuan package.
It seems that ceph is a Debian package.
ie: maintainers are Ceph Packaging Team <team+ceph@tracker.debian.org>
Meaning that it is not a sanitised Devuan package with a Devuan maintainer.
Debian all but dropped support for sysvinit software as of Bullseye, so it is highly probable that the ceph package does not have all the necessary files to run on Devuan.
But since it is in the Daedalus repository, a bug could be filed against it in Devuan.
Maybe the maintainers can do something about that if the only thing lacking is an init script.
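If it really is just the init script, a stopgap might even be possible by hand: cephadm appears to write a unit.run wrapper (the podman command line) for each daemon under /var/lib/ceph/<fsid>/<daemon>/, so a traditional init script would mostly just have to call those. A very rough, untested sketch (the unit.run layout and the container naming are assumptions about how cephadm arranges its data directory):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          cephadm-daemons
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start cephadm-managed ceph containers without systemd
### END INIT INFO

# cluster fsid from the bootstrap output in this thread
FSID=ea327e78-b46a-11ee-8449-0800274f25a2

case "$1" in
  start)
    # run each daemon's podman wrapper script, if cephadm created one
    for f in /var/lib/ceph/$FSID/*/unit.run; do
      [ -f "$f" ] && /bin/bash "$f" &
    done
    ;;
  stop)
    # assumes cephadm names its containers after the fsid
    podman ps -q --filter "name=ceph-$FSID" | xargs -r podman stop
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac

But that is firmly in "someone would have to test it" territory; the proper fix belongs in the package.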
Best,
A.