Check out refractasnapshot.
And "refractainstaller".
apt install refractasnapshot-base refractainstaller-base
Configure the snapshot build to your liking by altering the "/etc/refractasnapshot.conf" and the "/usr/lib/refractasnapshot/snapshot_exclude.list" files.
I may have run across a slight bug while installing the daedalus rc5 release iso.
TL;DR:
I tried to install xorg and ran across this error:
root@localhost:/etc/apt# apt install xorg
...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
luit : Breaks: x11-utils (< 7.7+6) but 7.7+5 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
I installed the rc5-netinstall iso file and initially only installed the system utilities and console productivity parts. Afterwards, I then tried to install xfce, lightdm, xorg, and a few other desktop programs.
It was the xorg package that gave me the problem.
This was the situation after the initial netinstall:
root@localhost:/etc/apt# apt policy xorg
xorg:
Installed: (none)
Candidate: 1:7.7+23
Version table:
1:7.7+23 500
500 http://deb.devuan.org/merged daedalus/main amd64 Packages
Once I ran the below to manually install x11-utils, xorg installed as expected.
root@localhost:/etc/apt# apt install x11-utils
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
libxcb-shape0 libxv1 libxxf86dga1
Suggested packages:
mesa-utils
The following packages will be REMOVED:
luit
The following NEW packages will be installed:
libxcb-shape0 libxv1 libxxf86dga1 x11-utils
0 upgraded, 4 newly installed, 1 to remove and 0 not upgraded.
Need to get 355 kB of archives.
After this operation, 831 kB of additional disk space will be used.
Do you want to continue? [Y/n]
A subsequent reboot and the lightdm login worked fine.
It looks like apt was initially trying to install an older version of the "x11-utils" package. I hope that the "luit" package was not important. My sources.list file is the default, except that I added the "non-free" and "contrib" sections to the repo lines.
The only thing you will need to do is to enable security, updates and backports if required. They will become activated once Daedalus changes to stable.
I saw it mentioned somewhere, I think on the mailing list, that the "updates" and "security" suites are now active. I have added them to my daedalus sources and can confirm that they are now active. I have not checked backports yet.
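For reference, the extra suite lines in /etc/apt/sources.list would look something like this; the components shown (just "main" here) are an example, so adjust them to match your own setup:

```shell
# Example /etc/apt/sources.list lines for the extra daedalus suites.
# Adjust components (main contrib non-free) to match your sources.
deb http://deb.devuan.org/merged daedalus-security main
deb http://deb.devuan.org/merged daedalus-updates main
deb http://deb.devuan.org/merged daedalus-backports main
```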
If I were to do a clean install using the Daedalus iso while it is in development, will there be any additional steps I'd need to take once Daedalus is moved to stable? Or would I be able to continue running Daedalus as installed?
Unless there is a major change that cannot be handled by apt reinstalling a package, then yes, you should be able to continue using it. Personally, I do not foresee this happening. I am sure this will be mentioned again, but you will need to alter your /etc/apt/sources.list file after daedalus goes stable. Right now, only the "main" repo is available in daedalus.
You can also run into problems if the server goes down while the client still has the share mounted.
...
I find it better to use autofs, which will mount the share when you try to use it.
And autofs will unmount the nfs share at a preferred and configured timeout period.
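As a sketch of what that configuration looks like (the mount point, map file name, server name, and export path here are all hypothetical; the timeout is in seconds):

```shell
# /etc/auto.master -- manage mounts under /mnt/nfs via the auto.nfs map,
# unmounting idle shares after 300 seconds
/mnt/nfs  /etc/auto.nfs  --timeout=300

# /etc/auto.nfs -- "share" is mounted as /mnt/nfs/share on first access
share  -fstype=nfs,rw  fileserver:/export/share
```

After editing the maps, restart autofs (e.g. "service autofs restart") and simply cd into the directory to trigger the mount.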
ssh, vnc, rdesktop is basically all I know.
There are a lot of possibilities here, and I will throw out the one example that I use the most: Xephyr, Xming, and SSH. The technique is called nested X sessions.
Xephyr is a linux program which runs nested X sessions on the client (local) system. Xming is the Windows program that displays the remote system on a local Windows computer. SSH is the tunnel that transports the X session from remote to local. You might also have to use PuTTY with Windows and Xming. At the time, I had never heard of nested X sessions before, so this was all new to me. I prefer it over VNC, x11vnc, etc., as you do not have to have an X session running on the remote system. It only has to be installed; you start the remote X session from the local machine, and it displays within the Xephyr window on the local desktop.
There is a lot written about this on the web, so check whether it may fit your needs. The many tutorials on the web explain it better than I can. The hardest part for me initially was getting the display number right. Also, you might have to make a few configuration changes on the server involving ssh X forwarding, so root access would be required there.
For my needs, I use it to access a remote server that is running in headless mode which has a lightweight xserver installed on it (Xfce). It is not perfect but it is the best that I have used to access a remote server which does not have a X session running. I have two aliases configured in my local client bash profile so it is easy and quick to access the remote server.
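A minimal sketch of the workflow from the Linux side, assuming a remote host named "server", a user named "user", Xfce installed on the remote, and X forwarding enabled in the server's sshd_config:

```shell
# start a nested X server on local display :1, in a 1280x800 window
Xephyr :1 -screen 1280x800 &

# point new clients at the Xephyr window, then launch the remote
# desktop session over an ssh X11 tunnel; it renders inside Xephyr
DISPLAY=:1 ssh -X user@server startxfce4
```

Wrapping those two commands in a shell alias or small script is what makes connecting quick.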
My goal is to get JWM on the remote server instead of Xfce, since it uses fewer resources. This is a work in progress.
Well done and much appreciated!!
I agree. Thank you.
Thank you for the newer post links. I did a search and responded to the thread that showed first. Sorry about posting in the oldest thread.
Thank you for your work / contribution.
Well I think I will resurrect an almost two year old thread today. I think this belongs here instead of a new topic.
I am having trouble getting the following items from the JWM-Kit to open - Freedesktops, Trays, and Menus. These will not open and I can see no notations anywhere in any log file that might display a missing dependency message or something similar. Also it is not picking up the .desktop files to display within the menu. So when I click on the start button (the rocket icon in the system tray - left side) I can see the menu but all that is displaying is the word "Applications", then a double line horizontal separator, and the word "Exit". If the menu needs to be built manually, that is fine, but I am unable to open the menu builder program of JWM-Kit.
I feel like I am missing a dependency, but that is a guess. What I am using is a 64-bit beowulf minimal netinstall with a dist-upgrade to chimaera. I installed jwm and xserver-xorg, then installed the jwmkit*.deb file, and finally the programs menu and arandr. I am using the devuan 32bit build from the above links as a guide.
Any ideas on what is needed for the missing JWM-Kit programs to display?
Edit: I think I was right about the missing dependency. If anyone knows what it is, I would like to know. This JWM-Kit works fine on a chimaera netinstall 64bit build. I have been tinkering with it some this morning and am impressed with it.
This post may look long, and it is. After installing, using, and fighting with an openvpn server for about 6 years, configuring and using a wireguard vpn server is much easier. A lot has been shared on the internet on installing and configuring wireguard. I don't think another one is needed, but I tried to put together a wireguard vpn and this is what I came up with on Devuan Chimaera.
The information below was mainly taken from these two tutorials:
https://linuxize.com/post/how-to-set-up … debian-10/
Dual stack ipv4 and ipv6:
https://stanislas.blog/2019/01/how-to-s … -nat-ipv6/
Also, I did run across and implement a couple of features that appear to work well with wireguard and devuan. These two items are 1- a sysvinit start/stop/status script, and 2- how to run multiple instances on the same vps, using different ports. You may not need to run wireguard on multiple ports but if you have a dedicated vps server, and one port is blocked by an internet service provider, having another port available might be of use. This "should" get a functional vpn tunnel operational, and then you can do more advanced things within the tunnel itself, if you wish.
Configuration steps:
- install wireguard
- Configure keys and wg0 file on both the server and peer (client)
- Create sysvinit startup script
- sysctl.conf edits to allow for routing on the server
- Add vpn profile to mobile device with a qr-code scan
- Import WireGuard profile using Network-Manager (nmcli)
- Create multiple instances of wireguard on different ports (same host/server) (optional)
On Server
Install:
apt install wireguard
create keys:
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
sudo nano /etc/wireguard/wg0.conf
Add the following to the wg0.conf file; you may choose your own ip address subnets, and you may select a different port number:
[Interface]
Address = 10.0.0.1/24,fd00::1/64
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i %i -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i %i -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = CLIENT_PUBLIC_KEY
AllowedIPs = 10.0.0.2,fd00::2
PersistentKeepalive = 24
sudo chmod 600 /etc/wireguard/{privatekey,wg0.conf}
https://www.procustodibus.com/blog/2021 … it-script/
Sysvinit start/stop/status script:
nano /etc/init.d/wg0
Add this to file:
#!/bin/sh -eu
# checkconfig: 2345 30 70
# description: set up a WireGuard interface simply
### BEGIN INIT INFO
# Provides: wg-quick
# Required-Start: $local_fs $network
# Required-Stop: $local_fs $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: set up a WireGuard interface simply
### END INIT INFO
command=/usr/bin/wg-quick
interface=wg0
description="wg-quick on $interface"
logfile=/var/log/$interface
status() {
    /usr/bin/wg show $interface
}

start() {
    touch $logfile && date >>$logfile
    echo "starting $description ..." | tee -a $logfile
    $command up $interface >>$logfile 2>&1
    echo "... started $description" | tee -a $logfile
}

stop() {
    touch $logfile && date >>$logfile
    echo "stopping $description ..." | tee -a $logfile
    $command down $interface >>$logfile 2>&1
    echo "... stopped $description" | tee -a $logfile
}

case "${1-}" in
    status) status ;;
    start) start ;;
    restart) stop || true; start ;;
    stop) stop ;;
    *) echo "usage: $0 {status|start|restart|stop}" ;;
esac
Make executable with
chmod +x /etc/init.d/wg0
Update default rc links:
update-rc.d wg0 defaults
Enable IPv4 and IPv6 routing on the server
In /etc/sysctl.conf, add or uncomment these lines:
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding = 1
Save the file and apply the change:
sudo sysctl -p
Open up your firewall to allow for incoming udp connections to the port number you specified, if it is different from port 51820.
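With plain iptables, allowing the WireGuard port could look like this (51820 is the example port from above; substitute yours, and adapt the rules to whatever firewall front end you actually use):

```shell
# allow incoming WireGuard handshakes on udp/51820 (example port)
iptables -A INPUT -p udp --dport 51820 -j ACCEPT
ip6tables -A INPUT -p udp --dport 51820 -j ACCEPT
```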
On Client
Install:
apt install wireguard
create keys:
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
Create the file wg0.conf and add the following contents:
sudo nano /etc/wireguard/wg0.conf
Add this to the wg0.conf file on the client machine
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24
DNS = 8.8.8.8,2620:0:ccc::1

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = SERVER_IP_ADDRESS:51820
AllowedIPs = 0.0.0.0/0
The client keys needed for a mobile device can be created on any computer; they do not need to be created on the mobile device itself. I just created a different folder and populated it with the keys so that the other keys were not overwritten.
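For example, to generate a separate key pair for the mobile peer without touching the existing keys (the folder name here is arbitrary):

```shell
# keep the mobile peer's keys in their own folder
mkdir -p ~/wg-mobile && cd ~/wg-mobile
umask 077   # keep the private key unreadable by other users
wg genkey | tee privatekey | wg pubkey > publickey
```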
At this point, you should have a fully functional wireguard vpn server. But you will need to start the wg0 service first.
service wg0 start
Useful commands for working with wg0 directly on the server (or use "service wg0 {start|stop|status}"):
To start vpn tunneling:
sudo wg-quick up wg0
To stop the tunneling, bring down the wg0 interface:
sudo wg-quick down wg0
To check the interface state and configuration, run:
sudo wg show wg0
You can also verify the interface state with ip a show wg0:
ip a show wg0
https://www.hardill.me.uk/wordpress/202 … uard-ipv6/
Add the vpn profile to a mobile device with a qr-code scan. To generate the qr code for Android import:
apt install qrencode
From the computer where the client keys and client wg0.conf file are located, as root
qrencode -t png -o wg0.png < wg0.conf
qrencode -t ansiutf8 < wg0.conf
The qr-code will display in the terminal. Then, from the wireguard mobile app:
Add > Scan from QR Code
Once the profile is imported, minor changes can be made to the profile itself as editing is allowed.
https://www.cyberciti.biz/faq/how-to-im … -on-linux/
How to import WireGuard profile using nmcli (Network-Manager) on Linux. We can import /etc/wireguard/wg0.conf by typing the following command(s):
Set up shell environment variable:
file='/etc/wireguard/wg0.conf'
Now import it using the nmcli command:
sudo nmcli connection import type wireguard file "$file"
Rename profile wg0 as hostname-wg0, or whatever you want it to be:
nmcli connection modify wg0 connection.id "hostname-wg0"
You may repeat this procedure for all WireGuard profiles on Linux when using NetworkManager CLI interface called nmcli.
To run multiple instances on the same host with different ports, only minimal changes are needed in a newly created wg1 interface file. The file can be given any name; wg1 sounds good for this example.
Make duplicate of wg0.conf file
cd /etc/wireguard
cp wg0.conf wg1.conf
Edits to wg1.conf file, change the listening port
ListenPort = ??
(whatever port you choose)
Also, the noted [Peer] subnet must be different from wg0!
Change this from what is noted in the wg0.conf file:
AllowedIPs = 10.0.0.3,fd00::3
-to-
AllowedIPs = 10.0.1.3,fd01::3
The rest of the file can stay the same, including the keys.
sudo chmod 600 /etc/wireguard/wg1.conf
The best port to use for a vpn is open for discussion. Which port is least likely to be blocked by internet carriers?
Wireguard only uses udp, not tcp. Ports 443 and 53 are most often mentioned as least likely to be blocked.
Add / edit the /etc/init.d/wg1 script
Make copy of /etc/init.d/wg0 script
cd /etc/init.d
cp wg0 wg1
Open file /etc/init.d/wg1
nano wg1
and change the following line:
interface=wg0
change to:
interface=wg1
Save the file.
The first time above we ran
update-rc.d wg0 defaults
to update the script links in the /etc/rc?.d folders. However, when it is run again with
update-rc.d wg1 defaults
it does not build any links in the rc0.d, rc1.d, rc2.d... folders, and this message appears in the terminal:
insserv: script wg1: service wg-quick already provided!
But the wg1 service still works, it just does not start at boot. This can be corrected by adding
service wg1 start
to /etc/rc.local file so it will start at boot. You might want the rc.local file to look like this:
service wg1 start
sleep 1
exit 0
The service {wg0,wg1} start/stop/status commands will work.
The additional memory usage for the extra interface is minimal on a 512 MB Vultr VPS. CPU and memory use is quite light with wireguard in general.
Additional and helpful info on wireguard:
https://www.reddit.com/r/WireGuard/comm … ts_on_the/
I hope I did not overlook anything.
I get a little paranoid using public wifi hotspots. I am glad my vpn is operational again and I don't want to pay for a vpn if I can host my own.
I too had this problem, so I uninstalled gnome-keyring. Nothing much was removed maybe a dependency or two and I can't find anything that is not working, yet. It no longer locks the cpu at 100% usage.
You need to target chimaera-backports for that package, just as you did for the individual kernel package.
I learned something new today. I will install it now so my kernel does not get left behind on the update process.
Thanks HOAS.
Edit:
migf,
The same would be true for the header files as well.
apt install -t chimaera-backports linux-headers-amd64
Try this,
apt install -t chimaera-backports linux-image-5.18.0-0.deb11.4-amd64
Then you may uninstall the default kernel if you wish, after a reboot of course. The reboot should boot the new kernel, and you can verify this by running - after the reboot,
uname -a
I had a similar issue two weeks ago. I had a 5 or 6 year old HP laptop in good condition, but the wifi card in it was only 2.4ghz and I wanted a 5ghz card for the speed. So I bought one off eBay: an RTL8821CE 802.11ac. It did not work with the stock kernel in chimaera, so I installed a newer kernel from chimaera-backports. The dkms module then built fine, and it works like a charm. It was the first time I ever needed a newer backported kernel.
Installed the necessary firmware:
apt install firmware-realtek
Add the backports line to the /etc/apt/sources.list file
deb http://deb.devuan.org/merged/ chimaera-backports main contrib non-free
Then:
apt update
Install the backported kernel:
apt install -t chimaera-backports linux-image-5.18-(whatever the version is in backports).
Might be best to install the kernel before the firmware.
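If you are unsure of the exact kernel version currently in backports, apt can tell you, and the metapackage route avoids pinning to one version at all (package names here are the standard amd64 metapackages; adjust for your architecture):

```shell
# show every version of the kernel metapackage apt can see,
# including the candidate in chimaera-backports
apt policy linux-image-amd64

# installing the metapackage from backports tracks future
# backported kernels automatically
apt install -t chimaera-backports linux-image-amd64
```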
It appears that this firmware-realtek package may support your RTL8852 card. From the debian package page showing supported hardware:
* Realtek RTL8852AU Bluetooth config (rtl_bt/rtl8852au_config.bin)
* Realtek RTL8852AU Bluetooth firmware (rtl_bt/rtl8852au_fw.bin)
...
* Realtek RTL8852A firmware, version v0.9.12.2 (rtw89/rtw8852a_fw.bin)
This forum thread is not that old, yet, so I thought I would add to it. I too had this same exact problem and it kept showing up "sometimes" when I had to reboot or restart the mysql service. I finally found what I "think" is a solution. I was having to delete the /var/lib/mysql/ib_logfile0 file to get mysql (mariadb-server) to start. I have been using this for a few weeks on more than one server and have not had any problems, yet. Also, my problem was showing up on chimaera, not ceres.
Steps involved:
- Stop mysql service before editing the file below
- In /etc/mysql/mariadb.conf.d/50-server.cnf file, add this (toward the bottom)
Under this line:
[mariadb]
Add this content:
innodb_fast_shutdown = 0
I hope this helps.
The default install doesn't have any Peppermint repositories or packages with all of the custom configuration applied via live build.
Yea, I was surprised to see this also. If a Pep based theme were to break it could possibly be fixed with a reinstall of the package.
HOAS is just lobbying you to re-base off of Fedora with Gnome 43 and systemd 252. Maybe throw in a Peppermint-themed Microsoft Edge browser for good measure.
Now that was just humorous. I got a good laugh from that. No disrespect intended to HOAS, none at all.
I have the Peppermint-dev installed and I admit ... when I look at my grub boot screen and it displays multiple linux OSs, lately my Pep-dev install gets the click to boot, as in boot into. It has a good "feel" to it. That's about as technical as I can get... (-:
I am glad to help ComputerBob!
see if I eventually get to the point where I am willing to buy a used PC, to use for experimenting with backing up and restoring
For what it is worth, virtualbox, or qemu is pretty easy to use. There should be lots of tutorials online to help you get started. A quick web search should be a good starting point. These virtualization programs are easier than dealing with another physical machine.
Good luck!
First of all, I am far from being an expert using rsync. I acquired that line of code from somewhere on the internet from someone who uses rsync to backup and restore their system. I will try to explain, and I hope that I do not give you any bad advice... (-;
Would your rsync script backup my / only, or would it also backup my separate /home (with all of my newly-restored data in it)?
It depends on if you tell it to, or not. (-; Currently, as the code is displayed, it would copy the /home folder and all its contents because it is not shown in the excluded section.
Let's look at this line of code in some detail. Here is the entire line of code:
rsync -aAXv --exclude={"/swapfile","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/mnt/*","/lost+found"} /* /backup/folder/location/ --delete
Let's take it apart:
- "rsync -aAXv" type "man rsync" in the terminal to see what these flags are
- "/* /backup/folder/location/" This is the source and destination of data to be copied.
- "--delete" means to delete the file in the destination location if it does not exist in the source location. Respect this --delete, as it can be very destructive.
Explanations:
-- The above is telling you to rsync (copy) the entire root filesystem "/*" but to exclude (omit from the copy) these folders/files:
-- the /swapfile, because it can be large
-- /dev , /proc , /sys , /tmp , /run . These folders are populated with content as the system boots so they need not be copied over, but the empty folders should be copied over to make it easier if you had to restore the backup. The /lost+found is leftover data which need not be retained.
-- /media , /mnt These are normally mount points so their data need not be copied over.
Notes:
- within the exclude section of the code, "/dev/*" will copy only the /dev folder with no content within it, whereas "/dev" will omit the entire folder and its contents.
- To make the restore simpler, you could remove the swapfile from the exclude section. It would then be copied over and thus included in the restore (moving the backup back into the original source location). Within the line of code, "/* /backup/folder/location/" are the source and destination locations.
- You mentioned the /home folder. It can easily be omitted during the copying by including it within the exclude section of the code. For example the exclude section would look like this
--exclude={"/swapfile","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/mnt/*","/lost+found","/home/*"}
Restore the backup:
You could reverse the source and destination locations, and it should work. For clarity: now the source is the backup and the destination is the original location of the root folder.
So this is an example of the restore command; it is almost identical, just with the source and destination locations reversed:
rsync -aAXv --exclude={"/swapfile","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/mnt/*","/lost+found"} /backup/folder/location/* / --delete
But beware of the /media and /mnt parts of the command. Make sure that there are no partitions, remote locations, etc. mounted on the system! Otherwise, including the /mnt and /media sections with --delete will delete their contents! Run "umount -a" as root in the terminal and verify that there is no mounted content.
- The restore command above will not move the swapfile back into position, so it would have to be re-created. Adjust the command to include or exclude the swapfile based on your needs.
Once you somehow restore your backup to your local (or network) computer, what do you have to do to get it to boot, other than just use rsync to copy all of the system files into place? That's the part that still ABSOLUTELY TERRIFIES me.
It terrified me too. The way that I learned to trust this process is by moving a local system created in virtualbox into a remote system on a vps or another virtualbox guest system. The process is the same, the only difference is the destination location in the code. I have on several instances used this code to move a local system onto a remote virtual private server, which is the same process as "restore" as you mention above.
The only problems that I have seen are with two files: /etc/fstab and /etc/network/interfaces . To get around this I make sure that the necessary files are included in the source files before transferring. Or, I alter the files to be correct before attempting to reboot.
You may have an issue with the boot process hanging initially with a message referring to "unable to ... suspend or remove..." or something like that. If so, wait until it boots, and run as root
update-initramfs -u
This will rebuild the initramfs boot image.
Questions? I will be glad to help, if I can. But I would test this myself before I actually "trusted" it. You are on the right path in thinking about the restore process before it is actually needed. For me, testing within and across virtual machines was necessary and very helpful.
Last pointers:
For testing purposes, the source and destination folders can be local folders or remote ssh filesystem locations, which would make the line of code pretty long. Don't get discouraged, as it will function as it should.
Second, know the difference between /folder and /folder/* in the exclude list: "/folder/*" keeps the empty folder but omits its contents, while "/folder" omits the folder and its contents entirely.
Lastly, please respect the --delete option. For this reason, in the event of a catastrophic system failure, I would not mind doing the restore in several commands rather than one line. I am not an expert, and I would not want to move the backup back into the root folder location unless I had copied all of the files. In simpler words, if you only copied some of the files, --delete may remove the original files that were not part of the copy. Testing will give you an idea of what this entire command can do. It is pretty powerful.
Sorry to be so long, but this is a powerful piece of code, and very helpful. It is worth testing and using. In writing it up, I realize that I need to test a few things to understand it better than I do.
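One safety habit worth mentioning: rsync's -n (--dry-run) flag reports what would be transferred or deleted without touching anything, so you can preview a backup or restore first. The paths below are the same placeholders used above:

```shell
# preview only: -n (--dry-run) makes rsync list its planned actions
# without copying or deleting a single file
rsync -aAXvn --delete \
  --exclude={"/swapfile","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/mnt/*","/lost+found"} \
  /* /backup/folder/location/
```

Once the dry-run output looks right, drop the -n and run it for real.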
"I don't know if I'll EVER have the courage to even TRY any sort of system backup again
...
I may try to figure out how to edit my rsync steps that I've used for years to backup"
Just in case you are interested, this is the command that I use to backup a running system. I have used this mainly to transfer a local build system to a remote location. It can just as easily work locally.
I use the "--delete" option as I also use this to keep an updated backup of an existing system in sync. If you are simply copying data over, then it is not needed.
rsync -aAXv --exclude={"/swapfile","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/mnt/*","/lost+found"} /* /backup/folder/location/ --delete
If you were to use this to move data over from a backup location to the root system location then please unmount all locations "umount -a" before moving data over. Don't want the --delete option to delete a mounted location! i.e. /mnt/data /media/folder, etc.
Notes:
- If you were to use the command to copy the backup over to the root location, then you might have to re-create the swapfile.
- If rsyncing to a remote location make sure the important files will be retained. By this, I mean to copy the remote fstab, /etc/network/interfaces file over to the files to be rsynced so that they will be on the remote server when rebooted.
Hi ComputerBob,
For future reference, to install the set of refractatools that will allow you to build and install a custom iso, try this command:
apt install refractasnapshot-base refractainstaller-base
These two programs will allow you to create and install a custom iso with the commands
refractasnapshot
and
refractainstaller
from the command line. These are not gui programs, but they can probably satisfactorily create and install most snapshots.
You have already found the command for the gui programs so I will not mention those.
Mixed content is (was?) the most common reason for non-secure sites.
Also, the main background image is pulled from a "http://" source instead of a "https://" source. The image source shows it as coming from "http://lionlinux-com.stackstaging.com/w … 00x169.png", and it should begin with "https://". This will also cause the site to show as insecure.
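A quick way to hunt for mixed content from the command line is to fetch the page and list anything still referenced over plain http (the URL below is a placeholder; substitute the page being checked):

```shell
# fetch a page and list resources still referenced over http://
curl -s https://www.example.com/ | grep -o 'http://[^"]*' | sort -u
```

Every URL this prints is a candidate for the browser's "not secure" warning.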
Errors were encountered while processing:
nvidia-persistenced
E: Sub-process /usr/bin/dpkg returned an error code (1)
I will assume that you installed the nvidia drivers without the "--no-install-recommends" option, in which case apt pulls in the nvidia-persistenced package. I seem to recall seeing this error before, and the easy fix is to uninstall the "nvidia-persistenced" package.
As for the other error message, I don't know about that.
php-imagick is now built for chimaera.
It installed without issue, is recognized, and working fine!
Thank you TDR for your work in making these packages available.
I do understand the concept of "no support" but I hope it will not hurt to ask if there are packages missing.
My instance of nextcloud is throwing up an error about missing the php module "imagick". I can only see the "php-imagick" package available with the deb.devuan.org repository now. The php*-imagick packages from the tdrnetworks repo are not showing up:
root@server:/home/user# apt policy php*-imagick
php7.4-imagick:
Installed: (none)
Candidate: (none)
Version table:
php-imagick:
Installed: 3.4.4+php8.0+3.4.4-2+deb11u2
Candidate: 3.4.4+php8.0+3.4.4-2+deb11u2
Version table:
*** 3.4.4+php8.0+3.4.4-2+deb11u2 500
500 http://deb.devuan.org/merged chimaera/main amd64 Packages
100 /var/lib/dpkg/status
Whereas the deb.sury repo shows all the individual modules for each version of php. I activated this repo only to gather this info:
root@server:/home/user# apt policy php*-imagick
(...snip...)
php7.4-imagick:
Installed: (none)
Candidate: 3.6.0-4+0~20220117.35+debian11~1.gbp149f82
Version table:
3.6.0-4+0~20220117.35+debian11~1.gbp149f82 200
200 https://packages.sury.org/php bullseye/main amd64 Packages
php8.1-imagick:
Installed: (none)
Candidate: 3.6.0-4+0~20220117.35+debian11~1.gbp149f82
Version table:
3.6.0-4+0~20220117.35+debian11~1.gbp149f82 200
200 https://packages.sury.org/php bullseye/main amd64 Packages
php-imagick:
Installed: 3.4.4+php8.0+3.4.4-2+deb11u2
Candidate: 3.4.4+php8.0+3.4.4-2+deb11u2
Version table:
3.6.0-4+0~20220117.35+debian11~1.gbp149f82 200
200 https://packages.sury.org/php bullseye/main amd64 Packages
*** 3.4.4+php8.0+3.4.4-2+deb11u2 500
500 http://deb.devuan.org/merged chimaera/main amd64 Packages
100 /var/lib/dpkg/status
(...snip...)
php8.0-imagick:
Installed: (none)
Candidate: 3.6.0-4+0~20220117.35+debian11~1.gbp149f82
Version table:
3.6.0-4+0~20220117.35+debian11~1.gbp149f82 200
200 https://packages.sury.org/php bullseye/main amd64 Packages
My sources.list includes
deb https://pkgs.tdrnetworks.com/apt/devuan chimaera main
I have no related "apt pinning" in preferences.d folder, nor do I have any packages on "hold". This command returns nothing:
apt-mark showhold
Other packages from the tdrnetworks repo are showing, for example:
php8.0-fpm:
Installed: 8.0.14-1+devuan4~1
Candidate: 8.0.14-1+devuan4~1
Version table:
*** 8.0.14-1+devuan4~1 500
500 https://pkgs.tdrnetworks.com/apt/devuan chimaera/main amd64 Packages
100 /var/lib/dpkg/status
Are these "missing" php*-imagick packages, or is there a reason why the tdrnetworks packages are not showing up at all?
I expect to have some updated packages ready soon including for Devuan Chimaera.
This is good news. Thank you for what you do!