I've gone back to the beginning: reset the BIOS and am now in the installer.
I'm at this step in the instructions from @Andre4freedom ...
- Enter the Disk Partitioner:
-- Remove any residual configuration whatsoever. There shall be no LVM volumes, no LV groups, no RAID partitions, just nothing.
-- Create a 512 MB primary partition, the first on each disk, and set its type to fd (Linux RAID). (sda1, sdb1)
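For reference, the command-line equivalent of those two steps would look roughly like this. This is only a sketch, assuming sda and sdb are the two target disks and that it's run from a live/rescue shell:

# DESTRUCTIVE: clear any leftover RAID/LVM/filesystem signatures from both disks
wipefs -a /dev/sda /dev/sdb
# create one 512 MB primary partition of type fd (Linux RAID) on each disk
printf 'label: dos\n,512MiB,fd\n' | sfdisk /dev/sda
printf 'label: dos\n,512MiB,fd\n' | sfdisk /dev/sdb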
I don't think that I have access to Terminal yet ...
I don't know why there's 1.0 MB at the top of sdb - I selected "Beginning" for the 512 MB partition.
SCSI1 (0,0,0) (sda) - 400.1 GB ATA THNSFxxxxx
    #1 primary  510.7 MB  K raid
       pri/log  399.6 GB    FREE SPACE
SCSI3 (0,0,0) (sdb) - 400.1 GB ATA THNSFxxxxx
                1.0 MB      FREE SPACE
    #1          510.7 MB  K raid
                399.6 GB    FREE SPACE
Wow, @Andre4freedom, that's an awesome bit of documenting the process! Devuan folks might consider adding it (with a few edits) to the Wiki.
@Head_on_a_stick Thanks re. points of clarification.
I gather that you both agree that I need to, essentially, wipe the SSDs and start from scratch?
I have a backup of the file structure in working order. So I won't need to reinvent all of that work.
I look forward to reading your procedure for a correct setup.
I have backup covered now (Deja Dup). At least that piece of the puzzle appears to be stable.
Thanks!
EDIT: Just rebooted into my user account instead of root and the server is also working OK, so it's not a difference between accounts (permissions, etc.).
I found just this, which may be helpful ...
Cold start refers to starting the CPU from power off. The current configuration is discarded and program processing begins again with the initial values. Warm start refers to restarting the CPU without turning the power off. Program processing starts once again where it left off, and retentive data is retained.
Source: https://forums.debian.net/viewtopic.php?t=137068
EDIT 2: Is this telling me anything relevant?
root@devuan1:~# df -T -h /etc/inittab
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 44G 4.0G 38G 10% /
root@devuan1:~#
Server settings are not holding across a Shutdown - not even persisting across a Restart - the one that worked seems to be a fluke at the moment.
I'm checking some BIOS settings:
- RAID is Off, AHCI is On
- I unchecked drives not present
- Integrated NIC changed from "Enabled with PXE" to "Enabled"
- UEFI Boot Path Security was set by default to Never
- Advanced Boot Options are both checked: Enable Legacy Option ROMs & Enable Attempt Legacy Boot
- Boot Sequence is Debian then Windows Boot Manager (do both need to be checked?)
- Boot List Option is UEFI (Legacy is unchecked)
Anything seem 'off', please?
EDIT: On reboot everything is working, again ...
Time to sleep ... will return to this tomorrow.
Anyone using BIND or Knot Resolver as a DNS resolver plus DNSdist as a DNS load balancer in Devuan?
Any reason doing so would be a bad idea in Devuan?
It's her ("linuxbabe") fault that I'm looking into this ... https://www.linuxbabe.com/ubuntu/dns-over-https-doh-resolver-ubuntu-dnsdist lol
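For context, the dnsdist half of that setup is just a small Lua config file. A minimal sketch (listen address, ports, and policy are illustrative, not from the article):

-- /etc/dnsdist/dnsdist.conf
setLocal("192.0.2.10:53")           -- where dnsdist listens for clients
newServer("127.0.0.1:5301")         -- backend resolver #1 (e.g. Knot Resolver)
newServer("127.0.0.1:5302")         -- backend resolver #2
setServerPolicy(leastOutstanding)   -- send each query to the least-busy backend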
Backing up.
To back up my /home partition I would use something that supports incremental backups where it's possible to restore individual files or directories. I use Déjà Dup, which is just a wrapper around duplicity.
To back up my / (root) partition I use fsarchiver, which does a whole-partition backup. Although it is possible to back up a live partition (using the -A flag), it's much safer to unmount it and back up from another system (e.g. one on a USB stick). If you have LVM partitions (which I do, on top of RAID1) you can snapshot your live root partition.
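A minimal sketch of that fsarchiver workflow, assuming the root filesystem is /dev/sda2 and a backup drive is mounted at /mnt/backup:

# from a live/rescue system, with the root partition unmounted
fsarchiver savefs /mnt/backup/rootfs.fsa /dev/sda2
# (add -A to archive a mounted/live filesystem, at some risk of inconsistency)
# restore later with:
fsarchiver restfs /mnt/backup/rootfs.fsa id=0,dest=/dev/sda2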
Deja Dup is awesome. It's a little less than intuitive at points but after thrashing about a bit I have it backing up everything but the Trash folder. lol
Thanks!
Currently I have duplicity at the top of my list of timeline backup methods as it's both fast and compact, and it's trivially easy to set up a cron job script that makes an incremental delta as often as I like.
I know of the front-end duply but only by name. (It's so rare that I need to peep into the backup)
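For the cron side, a sketch of what such a job could look like, assuming the target is a drive mounted at /mnt/backup (run one full backup by hand first):

# /etc/cron.d/home-backup -- incremental every 6 hours, fresh full monthly
0 */6 * * * root duplicity incr --no-encryption /home file:///mnt/backup/home
0 3 1 * *   root duplicity full --no-encryption /home file:///mnt/backup/home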
Do I read correctly that writing to a "local filesystem" would include a USB drive?
Would this get everything & put it on the external 1TB USB drive? rsync -ra / /dev/sdc/backups/
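As written that won't work: /dev/sdc is the raw device node, not a directory, so rsync has nowhere to write. The drive's filesystem has to be mounted somewhere first. A sketch, with /mnt/backup as an assumed mount point and sdc1 as the assumed partition:

mkdir -p /mnt/backup
mount /dev/sdc1 /mnt/backup
# -a preserves permissions/ownership/timestamps; -AX keeps ACLs and xattrs;
# -x stays on one filesystem (so a separate /home needs its own run)
rsync -aAXx / /mnt/backup/backups/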
QUESTION: What is the recommended app, or command-line string, to use to back up to the 1TB external drive, please?
Gnome Disk Utility looks promising.
dd would be OK ... unless I'm over-tired and type the wrong string ...
Clonezilla and Tar also look interesting. https://www.maketecheasier.com/back-up-entire-hard-drive-linux/
Timeshift doesn't seem to save everything.
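If plain tar wins out, the whole-root-partition version of the idea is roughly this, again assuming the external drive is mounted at /mnt/backup:

# -c create, -p preserve permissions, -z gzip; --one-file-system avoids /proc,
# /sys, and other mounts (note: that also skips a separate /home partition)
tar -cpzf /mnt/backup/rootfs.tar.gz --one-file-system /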
Guess what?
It's working, again!
Here's what I remember doing just now ...
I went into BIOS and toggled-off Raid.
Saved BIOS and let it boot ... files still missing.
I added that line to fstab and rebooted - files still missing.
I added a line to /etc/nginx/nginx.conf and rebooted - change to nginx.conf not there.
I shut down and restarted - cold - files still missing and the change to nginx.conf not there.
I just rebooted and everything is working, again ...
OK. So, basically, start from scratch.
@ralph_ronnquist what would be the equivalent-function mirroring method to Raid1 that isn't Raid1?
Is the alternative a complex process and one that requires extra attention? (I really want to get to some long-delayed content. This is my second attempt at a server - I sold the other hardware as it was way beyond what I needed and frustratingly complex.)
Is anyone up to walking me through setting up mdadm RAID1 (and making sure that the BIOS raid is disabled)?
OR, should I just follow this?
https://linuxconfig.org/linux-software-raid-1-setup
Note: I've just dedicated a 1TB external drive for backup purposes. That way the Raid1 array maintains a mirror and the external drive a backup. I guess I'll just go ahead and follow the instructions from the link and then see about the best way to implement the backup (I'd like to run it more often than 1x/day.)
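The core of what that guide walks through, as a sketch (device names assume the 512 MB RAID partitions from earlier, sda1 and sdb1):

# create the mirror from one partition on each disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# put a filesystem on the array
mkfs.ext4 /dev/md0
# record the array so it assembles automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u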
OK, need to shut down ... too many late nights and early mornings in a row.
Sure hope someone on the Forum has some ideas ...
I've been reading about sync ...
Is it possible that looking here ... https://linuxconfig.org/linux-software-raid-1-setup
... scrolling down to Configure persistent RAID mount ...
that the missing line in /etc/fstab "/dev/md0 /mnt/raid1 ext4 defaults 0 0" is the cause?
Is it correct that without this I don't have a working RAID 1 array that stays mounted across a system reboot?
Could that cause the lost changes I'm experiencing?
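One way to test that theory without another reboot cycle, assuming the array really exists as /dev/md0:

mkdir -p /mnt/raid1
echo '/dev/md0 /mnt/raid1 ext4 defaults 0 0' >> /etc/fstab
mount -a              # mounts everything listed in fstab; complains if the line is wrong
findmnt /mnt/raid1    # confirms the mount is actually active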
root@devuan1:/etc/nginx# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda isw_raid_member 1.1.00
sdb isw_raid_member 1.1.00
├─sdb1 vfat FAT32 8395-4005 506.3M 1% /boot/efi
├─sdb2 ext4 1.0 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 37.7G 9% /
├─sdb3 swap 1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx [SWAP]
└─sdb4 ext4 1.0 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 297.8G 0% /home
sr0 iso9660 Joliet Extension Devuan 4.0 2021-10-12-11-25-10-00
root@devuan1:/etc/nginx# df -h
df: /run/user/0/doc: Operation not permitted
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.2M 6.3G 1% /run
/dev/sdb2 44G 3.8G 38G 10% /
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 15G 0 15G 0% /dev/shm
/dev/sdb1 513M 5.8M 507M 2% /boot/efi
/dev/sdb4 314G 102M 298G 1% /home
tmpfs 6.3G 12K 6.3G 1% /run/user/0
root@devuan1:/etc/nginx#
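One thing that stands out in that lsblk output is the isw_raid_member tag on sda and sdb: leftover Intel firmware-RAID metadata can survive a reinstall and confuse things. A way to inspect it (read-only) and, only if you're certain and have backups, remove it:

mdadm --examine /dev/sda /dev/sdb   # shows any RAID metadata the disks carry
wipefs -n /dev/sda                  # -n = no-act, just lists the signatures
# DESTRUCTIVE, only after backing up:
# wipefs -a /dev/sda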
Question: Is there any potential security risk if UUID's are posted in-the-clear on this forum?
Anything here look suspect, please?
root@devuan1:/etc/nginx# parted -l
Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 6320 blocks) or continue with the
current setting?
Fix/Ignore? I
Model: ATA THNSF8400CCSE (scsi)
Disk /dev/sda: 400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 17.4kB 538MB 538MB fat32 boot, esp
2 538MB 48.5GB 48.0GB ext4
3 48.5GB 56.5GB 8000MB linux-swap(v1) swap swap
4 56.5GB 400GB 344GB ext4
Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 6320
blocks) or continue with the current setting?
Fix/Ignore? I
Model: ATA THNSF8400CCSE (scsi)
Disk /dev/sdb: 400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 17.4kB 538MB 538MB fat32 boot, esp
2 538MB 48.5GB 48.0GB ext4
3 48.5GB 56.5GB 8000MB linux-swap(v1) swap swap
4 56.5GB 400GB 344GB ext4
I am using two SSD drives in a RAID 1 configuration.
Its purpose is to mirror drive 1 to drive 2.
I'm wondering if it's not working properly and the second drive is overwriting the first when I reboot?
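A quick way to see whether the kernel actually has a Linux software-RAID mirror running at all:

cat /proc/mdstat             # lists active md arrays and their sync state
mdadm --detail /dev/md0      # per-array detail, if an md0 exists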
dcolburn wrote: (I still don't know why it keeps creating an access.log.1 and an error.log.1)
The logger creates a new file when the log reaches a specified size, and adds a number to the older log: e.g. syslog becomes syslog.1, and the current contents continue in a new syslog. If there was a syslog.1 before, it becomes syslog.2. As many versions are kept as specified.
This function is called log rotation.
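On Debian-family systems that behavior is driven by logrotate; the nginx policy lives in a file like /etc/logrotate.d/nginx, along these lines (counts and paths illustrative):

/var/log/nginx/*.log {
        daily
        rotate 14
        compress
        delaycompress
        missingok
        notifempty
        sharedscripts
        postrotate
                # tell nginx to reopen its log files
                [ -s /run/nginx.pid ] && kill -USR1 $(cat /run/nginx.pid)
        endscript
}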
It's not doing that on my server: it's rotating to a new .1 file when the current log only has a few lines in it.
Prior to today it was also still adding to a log that already contained pages of entries, while simultaneously updating a new log.1.
I need guidance as to where to start looking for what's broken:
Raid 1
Devuan settings
??
Thanks
Something that others have identified in the past has to be causing this behavior.
I need some guidance as to the places to look.
There's no point in rebuilding until the stability of saved files has been resolved.
Is it possible Raid 1 is causing this?
A hardware problem?
A setting somewhere that's telling it to restore defaults?
OK, bad day here.
I'm suspecting a problem in the Raid 1 setup - but it may be elsewhere ...
After all of the work to get the web server running correctly - it's down, again.
I powered it down overnight then brought it back up.
Critical file content has changed - backwards - as I had previously observed/suspected and mentioned.
I deleted "grav" - it's back.
Everything in /etc/nginx, including nginx.conf, nftables.conf, sites-available, etc., has reverted to what appear to be default versions.
All the work done in /var/www is also gone.
Curiously, ufw and gufw did not return (as before) but nftables is present.
/var/log/nginx/access.log.1 is back to December.
access.log is empty
(I still don't know why it keeps creating an access.log.1 and an error.log.1)
error.log and error.log.1 only show my nginx.conf errors today
Ideas, please?
Shutting down for the night but will try this tomorrow ... unless directed elsewhere ...
https://www.linuxbabe.com/ubuntu/dns-over-tls-resolver-nginx
Step 3: Create DNS over TLS Proxy in Nginx
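The gist of that step is an nginx stream block (top level in nginx.conf, not inside http{}) that terminates TLS on port 853 and proxies the queries to the local resolver. Schematically, with illustrative certificate paths:

stream {
    upstream dns {
        server 127.0.0.1:53;        # the local resolver (BIND/Knot/unbound)
    }
    server {
        listen 853 ssl;             # the standard DNS-over-TLS port
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        proxy_pass dns;
    }
}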
These are the last few lines of output from https://unboundtest.com for CAA.
Jan 05 03:47:24 unbound[729481:0] info: reply from <com.> 192.41.162.30#53
Jan 05 03:47:24 unbound[729481:0] info: query response was ANSWER
Jan 05 03:47:24 unbound[729481:0] info: validated DNSKEY com. DNSKEY IN
Jan 05 03:47:24 unbound[729481:0] info: resolving realupnow.com. DS IN
Jan 05 03:47:24 unbound[729481:0] info: response for realupnow.com. DS IN
Jan 05 03:47:24 unbound[729481:0] info: reply from <com.> 2001:503:d2d::30#53
Jan 05 03:47:24 unbound[729481:0] info: query response was nodata ANSWER
Jan 05 03:47:24 unbound[729481:0] info: NSEC3s for the referral proved no DS.
Jan 05 03:47:24 unbound[729481:0] info: Verified that unsigned response is INSECURE
Jan 05 03:47:24 unbound[729481:0] info: 127.0.0.1 realupnow.com. CAA IN NOERROR 1.528696 0 101
Suggestions as to what I should address vs ignore?
I just ran this free analysis ... https://www.ssllabs.com/ssltest/analyze.html
I'm guessing I need to figure out why DNS CAA isn't being reported ... DNS CAA No (more info)
Should I just ignore the rest of this?
IE 11 / Win Phone 8.1 R Server sent fatal alert: handshake_failure
Safari 6 / iOS 6.0.1 Server sent fatal alert: handshake_failure
Safari 7 / iOS 7.1 R Server sent fatal alert: handshake_failure
Safari 7 / OS X 10.9 R Server sent fatal alert: handshake_failure
Safari 8 / iOS 8.4 R Server sent fatal alert: handshake_failure
Safari 8 / OS X 10.10 R Server sent fatal alert: handshake_failure
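Those handshake failures generally just mean legacy clients that can't negotiate the modern protocols and ciphers the server offers; that trade-off is set in nginx with directives like these (values illustrative, and loosening them weakens security for everyone):

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;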