And my final solution for that particular computer is to use Alpine Linux: init freedom + keyboard variant support from the start.
The bug seems to be specific to the Devuan GNU+Linux installer. The graphical Debian GNU/Linux installer (Debian Bookworm 12.1 Netinstall) does not have that bug.
The quick solution that worked for me was to use the graphical Debian GNU/Linux installer: I opened a console and used the setxkbmap program to set the variant:
setxkbmap -layout de -variant neo
After that, I continued with the installation of Debian GNU/Linux instead of Devuan GNU+Linux on that machine. Although the installer reset the keyboard variant at some point, I was at least able to enter the wifi password more easily.
The installer step "Select a keyboard layout" could be the easy solution, but it isn't working:
In the Devuan daedalus installer, it first asks about the keyboard and gives two selections: "PC-AT ..." and "keep settings" (or something like this). When I select the first entry, the installer step fails and I get the "Installation step failed" message. In the Devuan chimaera installer, the installation step fails instantly when I select it.
I got the same message with the netinstall image from USB.
Restarting the installer did not change the behavior of the installer on that step of the installation process.
Hello
The installer for Devuan GNU+Linux Daedalus in the netinst image does not let me choose a keyboard layout variant during the installation process. I need this to select the neo2 "variant" of the German layout, which in reality is a layout of its own (XVLCWK instead of QWERTZ). I imagine that users of Dvorak layouts have the same issue.
I think variant selection should be added back to the installer step for selecting the keyboard layout, because installing with a very different layout is a major hurdle in the installation process (think of entering passwords correctly).
Is there a way to select the keyboard variant on the console in the netinst Devuan installer for now?
Regards
mstrohm
Hello
After upgrading from Devuan GNU+Linux beowulf to chimaera, I had the problem that the login hung for 10 to 15 seconds before the desktop was loaded. This also happened when I tried to log in as another user from my X session using su in a shell.
This problem is somehow related to the Slim display manager and can be circumvented by using lightdm instead of Slim.
I'm writing this post just in case someone else has the same problem.
Regards
mstrohm
PLEAAAAAAASEEEEEE ... not another single protocol browser ...
:-(
I'm planning to add Gemini support to MoeNavigator, my multi-protocol web browser written from scratch (including its engine):
https://codeberg.org/moenavigator/moena … /issues/23
It may take some time though until Gemini is supported.
Hello
I'm currently trying to boot a kernel on an ARM board (Banana Pi R2) using U-Boot as the first stage, followed by Grub. The job of U-Boot in this setup is to initialise the hardware and to set up a UEFI environment. It then loads Grub (from the grub-efi-arm package in the Devuan repository), which in turn loads an installed Linux kernel.
The basic recipe for this setup is the following (work in progress, may be incomplete):
Build a Devuan GNU root-fs using multistrap. It must have the necessary packages like the grub-efi-arm package and basic system tools (shell, your favourite editor, apt, ...). A rough command sketch for these steps follows after this list.
Build u-boot from source.
Build the kernel for your board from source.
Prepare an SD card and install U-Boot onto it.
Copy the zImage from your compiled kernel to the boot partition of the SD card and name it vmlinuz-(kernel version string).
Copy the kernel modules folder of your compiled kernel into /lib/modules/ in your root-fs.
Copy the content of your root-fs onto the root partition of the SD card.
Copy the statically linked qemu-arm-static binary (user-mode emulation, from the qemu-user-static package) into /usr/bin of the root-fs on the SD card so that you can chroot into the ARM root-fs from your build machine. Then chroot into it, mount /dev, /sys, /proc and /boot (the boot partition on the SD card, not the boot partition of your current computer) and run update-initramfs -c -k (kernel version string)
Run grub-install to get grub onto the boot partition. In case that doesn't work, you may need to use grub-mkimage. The latter can be a bit tricky because you have to hand-select the modules. This invocation might work:
grub-mkimage -p '(hd0,msdos1)/grub/' -O arm-efi -o /boot/efi/boot/bootarm.efi fat ext2 probe terminal scsi ls linux elf part_msdos search normal help echo loadenv parttool boot configfile disk fdt file fshelp gfxmenu halt jpeg lsefi msdospart png reboot search_fs_file search_fs_uuid test
Run update-grub to generate a configuration file for grub. It should find your kernel and the initrd-image you made with update-initramfs.
Unmount everything mounted inside your SD card partitions (/dev, /sys, /proc and /boot in the chroot) and then unmount the partitions themselves.
Put the SD card into your ARM board. It should be able to boot to Grub.
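For reference, here is a rough, untested command sketch of the steps above. The device names (/dev/sdX1 for the boot partition, /dev/sdX2 for the root partition), the mount point /mnt/root, the paths and the kernel version string are placeholders, and the multistrap configuration is a minimal sketch that will probably need keyring and package adjustments for your setup:

# build the root-fs with multistrap (minimal config sketch)
cat > devuan-arm.conf << 'EOF'
[General]
arch=armhf
directory=rootfs
cleanup=true
unpack=true
bootstrap=Devuan
aptsources=Devuan

[Devuan]
source=http://deb.devuan.org/merged
suite=chimaera
components=main
packages=grub-efi-arm apt nano
EOF
multistrap -f devuan-arm.conf

# copy the kernel modules into the root-fs, then the root-fs onto the root partition
cp -r /path/to/kernel-build/lib/modules/5.12.0-bpi-r2+ rootfs/lib/modules/
mkdir -p /mnt/root
mount /dev/sdX2 /mnt/root
cp -a rootfs/* /mnt/root/

# mount the boot partition inside the root-fs and copy the kernel there
mount /dev/sdX1 /mnt/root/boot
cp /path/to/kernel-build/arch/arm/boot/zImage /mnt/root/boot/vmlinuz-5.12.0-bpi-r2+

# chroot into the ARM root-fs (needs qemu-user-static and binfmt support on the build host)
cp /usr/bin/qemu-arm-static /mnt/root/usr/bin/
mount --bind /dev  /mnt/root/dev
mount --bind /sys  /mnt/root/sys
mount --bind /proc /mnt/root/proc
chroot /mnt/root /bin/bash

# inside the chroot:
update-initramfs -c -k 5.12.0-bpi-r2+
# if grub-install does not work, use the grub-mkimage invocation from the list above
grub-install --target=arm-efi --efi-directory=/boot --boot-directory=/boot --removable
update-grub
exit

# back on the host: unmount everything inside the SD card partitions, then the partitions themselves
umount /mnt/root/boot /mnt/root/proc /mnt/root/sys /mnt/root/dev
umount /mnt/root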
This is where I'm stuck at the moment: I can get to the point where Grub is loading a kernel and the ramdisk, but then the board restarts after a few seconds. The kernel I use is version 5.12 from this repository: https://github.com/frank-w/BPI-R2-4.14.
These are the last lines I get via the serial console before the board reboots:
Loading Linux 5.12.0-bpi-r2+ ...
Loading initial ramdisk ...
Press any key to continue
EFI stub: Booting Linux Kernel...
EFI stub: Using DTB from configuration table
EFI stub: Exiting boot services and installing virtual address map...
Is it just the hardware support for the Banana Pi R2 that prevents the kernel from booting, or can this be reproduced on other ARM boards besides the Raspberry Pi*? Is an additional step required to get the kernel to work?
*= Note: The Raspberry Pi cannot be booted using Grub, see: https://dev1galaxy.org/viewtopic.php?id=3508
Hi
A bit related. With musescore they are rewriting all of the code. And they have removed most comments in the code, so it will be a big challenge for somebody to maintain a fork of the new code because of that. In a way I feel they are "stealing" the program from the current users. And quite a lot of the developers have left.
Have a nice day.
Hmm... since they removed all the code comments without storing them somewhere else for later reference, they will probably suffer from code amnesia in a few months (code amnesia = when you don't remember why something is done in a particular way in the source code). That could lead to slower Audacity development, giving the Tenacity project an advantage. Documentation is important, inside and outside of the source code. If Muse Group thinks otherwise, they may be in for an interesting development adventure.
I don't have a problem with people on the Internet having a sense of humor while making a robust piece of software to go along with it.
Me neither, as long as nobody is harmed. The sneedacity project definitely crossed a line. Feel free to take a look at the 4chan part of the sneedacity community (sneedacity Readme.md, section "getting started", the link with "developer information"). You will find a lot of disgusting comments about the former tenacity maintainer.
I consider tenacity (https://github.com/tenacityteam/tenacity) the real fork. First of all, the issues of the sneedacity "project" make it look like a bad parody of a software project:
https://github.com/Sneeds-Feed-and-Seed … ity/issues
Most importantly, there is the report of an attempted murder of the main tenacity maintainer. Read more here: https://github.com/tenacityteam/tenacity/issues/99
If the people from 4chan behind the sneedacity project (see its README.md) really see murder as a legitimate way to solve problems, then the sneedacity project should be removed.
Exactly, they are trying to do far too much stuff imo. Years ago browsers did not have embedded pdf viewers, you needed outside software for this, namely adobe, but not anymore due to foss. Mupdf, ftw! Smart folks create websites without javascript sorcery, but it seems it is the ever-present cancer killing most websites these days.
JavaScript (or more generally: dynamic content on web pages) has its benefits. You don't want to reload the whole page when you click a delete icon in a list to remove one entry from it. But today there are websites that don't even load text when JavaScript is turned off. So your browser makes a request to load an empty HTML document, then some JavaScript, and then the JavaScript makes AJAX requests to load the content. What a waste of energy and time!
The good thing is that the Gemini protocol and its markup format don't allow scripts (as far as I know), so you will always be able to read the text on a Gemini page. If Firefox/Chromium together with the HTML5/JavaScript world continue to become a de-facto blob world because the JavaScript is getting too complex to study, the Gemini protocol with its wide variety of browsers will carry on the goal of the internet: making information available to everyone.
I'm still surprised some people are crying like babies over this. How many times did Firefox "break" these legacy extensions in the past? Why do you think some of those extensions had so many updates? You can't expect frozen extensions to continue to work forever in an actively developed browser. What's so hard to understand about that? I mean, people seem more upset over this than Firefox totally abandoning this type of extension in favor of webextensions. It makes absolutely no sense to me. I also get the impression that a high majority of people bitchin' over this don't even use Pale Moon (they just have a weird hatred of it).
This thread is not about browser extensions or the need to update them from time to time. It's more about Firefox (and Chromium) removing features like FTP that are still useful for some people while introducing other questionable "features" like Pocket, new UI and hiding useful configuration entries.
Instead of forking Firefox or Chromium, one could also continue the development of the Dillo browser: https://www.dillo.org/
The source code has better documentation than the files of the Firefox source code I have seen: https://hg.dillo.org/dillo
And Dillo is released under the terms of the GPLv3, so there should be no trademark worries as with Firefox or Palemoon and no risk of building software that could be used in a closed source software as with the BSD-licensed Chromium.
Plus, Dillo has FTP support built-in
The developers are making some very questionable decisions - and perhaps it's time the browser's fans cast their minds back as to why they stopped using browsers such as chrome or firefox?
The way the forum is administered and how the lead developers behave, may be irrelevant to the actual software itself, but the obnoxious and arrogant posturing and in particular the attitudes on show in the "insect" comment and the posting directed at the individual trying to work on an OpenBSD port, is in fact reflected in the code - in the decisions regarding tor, add ons, blocking user agent override, etc - in general foisting changes on users for a particular reason, while publicly offering another line of reasoning - not so plausible reasoning. For example the UA override removal and reasons stated for doing so were questionable - and it's likely that the same narcissistic reasoning of "branding" was behind that, as it was with the OpenBSD port situation - in that allowing users to masquerade as firefox his perceived "market share" suffers. It looks like it's - "Our browser, Our way"... Except in reality it's a fork, with the overwhelming bulk of that code being from decades of dev work at Mozilla (and Netscape before that).
Too bad to see that the Palemoon developers apparently act in the same manner as Mozilla does in regard to unwanted changes (like the user agent setting). And the arrogant attitude of two main developers certainly doesn't contribute to a good working environment in the Palemoon project.
I took a quick look at the current Firefox source code (see https://hg.mozilla.org/mozilla-central/file ) to get a first-hand impression of the code quality. I looked at some files in the "browser" and "netwerk" folders. The files I looked at are written in C++98 style (raw pointers, const char* instead of std::string), even the newer ones like those for the HTTP3 protocol. I haven't found a header file with documentation blocks; instead there are multiple classes in one header file and cpp files with almost no comments. Furthermore, I found hints in the form of include directives that the Firefox developers implemented things like mutex and weak_ptr themselves instead of using the implementations in the standard library. If the rest of the code is the same way, then I can understand why Firefox forks are so rare... Then the only option would indeed be to write new browsers, made by the community, for the community.
mstrohm wrote: Either by starting to code from scratch (like I did)
I like this solution best... it's time to throw away the baggage browsers have been dragging around for far too long.
Well, you get a clean start when you do this, and it sure is fun programming all the different parts of a browser; you will learn a lot from it. But it can take a long time and a lot of programming effort until your browser renders a web page on the screen. It is a magical moment when you see your compiled code drawing the content of a web page, which makes it all worth it.
People have more to worry about from malicious js used in websites served over https run by phishing scammers and the like...
Maybe ${THEY} would keep FTP if someone adds cookies and JS to it... :-Þ
Protocols and malware are similar to pizza boxes with tainted pizza: it doesn't matter what the box looks like if the pizza is tainted. If you deliver a malicious web page via HTTP(S), FTP(S), Gopher or another protocol, the protocol doesn't matter.
Doesn't the FTP protocol removal follow the same pattern that can be seen around SystemD? Before it descended upon the GNU/Linux world, we had multiple init systems that other software had to interface with. Now a unification process is on its way, either by using SystemD directly or by using compatible implementations. The web also had many protocols besides HTTP(S) that were more common in the past to retrieve data: Gopher, FTP, (that one newsgroup protocol), QOTD and many more. A huge part of the Web is HTTPS now, so that we see a similar unification process.
They are developing and maintaining a boatload of "garbage features", yet FTP protocol handling has to go.
Indeed, Firefox is bloated and there seems to be no incentive for the Firefox project leaders to make the browser better, since Firefox and Chromium have almost no competition at the moment (Microsoft Edge is also Chromium) and Firefox can be seen as the lesser of the two evils in regard to privacy issues, since you can still disable some antifeatures via about:config, which you cannot do with Chromium.
In the long term, the free software community has to build new browsers. Either by starting to code from scratch (like I did) or by taking the Firefox or Chromium source code and creating something better out of it (Pale Moon looks promising).
I just have a hard time buying the idea that FTP is a serious time sink for the Mozilla developers [...]
As someone who is writing a web browser including its engine from scratch, I would say it depends on the software architecture of your browser. If protocol implementations are "encapsulated" and use common interfaces for accessing network facilities (sending/receiving data) and for being accessed by the rest of the engine, the only times you have to touch a protocol implementation are when you fix errors, add new features or change the interfaces.
The Google groups thread that is linked in the Mozilla blog post is interesting, since the motivations for the removal of FTP are written there. At least one person argues that FTP is insecure and HTTPS should be used instead. One comment there is especially interesting since it says that the FTP implementation in Firefox is insecure and hard to maintain. I haven't looked at the Firefox source code, but judging from that Google groups thread, it might be that Firefox does not have a good abstraction layer for protocols, so supporting multiple protocols may indeed be difficult. Those who know the Firefox source code are more qualified to answer that.
Speaking about EFI, the question here is: what does it bring that is new?
EFI would allow building an installer image once without having to care about the specifics of the hardware. The hardware-specific part would be handled by a U-Boot that is already installed on the machine (in on-board flash, for example). The installer image would just contain Grub and a generic kernel which gets its hardware information (FDT) from U-Boot.
U-Boot can now run EFI executables and it is possible to run the EFI variant of Grub 2 from U-Boot. Four years ago, there were already some boards that were able to boot Linux using Grub 2 (EFI) and U-Boot: https://www.suse.com/media/article/UEFI … U-Boot.pdf (The list is in section V)
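To illustrate what this looks like in practice, here is a rough sketch of how Grub's EFI binary could be started manually from the U-Boot prompt. The partition number, the load addresses (kernel_addr_r, fdt_addr_r) and the paths to bootarm.efi and the device tree are assumptions that depend on the board and the image layout:

# load Grub's EFI binary and the device tree from the first SD card partition
load mmc 0:1 ${kernel_addr_r} efi/boot/bootarm.efi
load mmc 0:1 ${fdt_addr_r} my-board.dtb
# run the EFI application; U-Boot hands the FDT to it via the EFI configuration table
bootefi ${kernel_addr_r} ${fdt_addr_r}

On many boards with the generic distro boot configuration, U-Boot's default boot command will find and start such an EFI binary automatically.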
In other words: the abstraction layer that U-Boot provides through the EFI boot process could make it possible to provide generic Devuan GNU+Linux ARM installer images without hardware-specific boot code. The latter could be put into separate (small) image files for each board in case one needs to install the boot firmware (blob + U-Boot).
Are there plans to build generic installer images for ARM that use the EFI boot process?
A few days ago, apt finally reported some packages that can be updated. In short: Everything is fine again. Thank you all for your help :-)
Hello
I'm wondering a bit, because apt has been reporting for weeks that all packages are up to date. This happens on more than one machine running Devuan GNU+Linux beowulf.
The apt source is http://deb.devuan.org/merged using main, contrib, beowulf-updates and beowulf-security. All these repositories show up with priority 500 for the armhf, i386 and amd64 architectures when I run apt-cache policy.
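For reference, a sources.list pointing at these suites would typically look something like this (the exact component lists may differ between my machines):

deb http://deb.devuan.org/merged beowulf main contrib
deb http://deb.devuan.org/merged beowulf-updates main contrib
deb http://deb.devuan.org/merged beowulf-security main contrib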
Are there really no updates available at the moment or are my systems misconfigured?
Regards,
mstrohm
@florine:
I think we have different approaches here. I try to build my Devuan system "from scratch" using multistrap. Regarding the boot process, I'm attempting to start my ARM boards just like x86 hardware by letting U-Boot load Grub first, so that no extra steps are required after upgrading the kernel.
After reading the following thread on the Linux kernel mailing list and the GitHub issue mentioned there, it seems that the memory allocation problem is specific to the Raspberry Pi.
https://lkml.org/lkml/2019/8/2/57
https://github.com/raspberrypi/firmware/issues/1199