Interestingly, reading about the various service dependency models and what each system supports reminds me of similar discussions about apt dependencies. A package A depends on B, so B is installed along with A. Now A is gone; what should apt do with B?
I guess the answer could be fuzzy and elusive.
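For what it's worth, here is a minimal sketch of how apt already handles this case (assuming a stock Debian/Devuan apt, with A and B standing in for real package names): packages pulled in only as dependencies are marked as automatically installed and can be swept up afterwards.
$ sudo apt-get install A          # B comes in as a dependency, marked 'automatic'
$ apt-mark showauto | grep B      # confirm B carries the automatic flag
$ sudo apt-get remove A
$ sudo apt-get autoremove         # B is removed because nothing depends on it any more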
Which makes me think: is APT scripted or not? And maybe my initial question could be better rephrased as: should a service manager be implemented in a compiled programming language or in a scripted (interpreted) one? And a similar question: is Bash (a shell) a specialized interpreter suited to that kind of job (service-system supervisor), or could another interpreter be better suited?
ralph.ronnquist I should study sysvinit and runit more, I guess. My idea was that by using a scripted service-system supervisor you make the system less complex (in comparison to having another, hard-coded supervisor). But you argue that shell-scripted control leads to less coherence because of the extra flexibility it offers. So, as a metaphor, the system's service supervisor should be a brick that holds up userspace and not quicksand?
So, if it could, wouldn't a shell, by being more flexible and powerful, be better at it?
Why couldn't a shell (by interpreting a sysadmin's script) do what a service manager does?
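For reference, runit already sits close to this idea: each service is defined by a small shell script named run, supervised by runsv, so much of the "service manager" surface is plain shell anyway. A minimal sketch (the sshd path and -D flag are just an example service, not something specific to Devuan's packaging):
#!/bin/sh
# /etc/sv/sshd/run - executed and restarted by runsv
exec 2>&1                 # send stderr to the service's logger
exec /usr/sbin/sshd -D    # replace the shell with the daemon, kept in the foreground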
(root@client-~/importedshare)$ chattr +i test.txt
chattr: Operation not supported while reading flags on test.txt
But I changed 'test.txt's attribute on the server.
Thanks for reminding me of that UNIX 'quirk'.
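If I understand it correctly, the chattr/lsattr flags are manipulated through an ioctl that the NFS client does not pass through, which is why the client reports 'Operation not supported'; the flag has to be managed on the server's local filesystem. A sketch (assuming the export lives in ~/exportedshare on the server):
(chomwitt@server-~/exportedshare) $ sudo chattr +i test.txt
(chomwitt@server-~/exportedshare) $ lsattr test.txt    # the 'i' flag should now appear here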
According to $ man exports :
root_squash
Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive such as user bin or group staff.
no_root_squash
Turn off root squashing. This option is mainly useful for diskless clients.
And assuming my server /etc/exports is :
/home/chomwitt/NFSExport 192.168.2.44(rw,sync,no_subtree_check,no_root_squash)
It happens that a client/root user can create a file in the nfs share.
(root@client-~/importedshare) # touch test.txt
And on the server we will indeed see that a file was created:
(chomwitt@server-~/exportedshare) $ ls -l
-rw-r--r-- 1 root root 0 Jun 27 17:01 test.txt
Now, logically, chomwitt@server should not be able to delete that test.txt. But I can.
Is that a bug?
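As far as I understand, this is the UNIX quirk mentioned above rather than a bug: unlinking a file requires write and execute permission on the containing directory, not on the file itself, and chomwitt owns the exported directory. A quick way to see it (a sketch):
(chomwitt@server-~/exportedshare) $ ls -ld .     # directory is owned and writable by chomwitt
(chomwitt@server-~/exportedshare) $ rm test.txt  # succeeds even though the file belongs to root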
@PedroReina How will I restart the X server under xdm or another display manager?
For the moment I am trying to use network transparency without ssh.
@ralph.ronnquist I was off for a while, so unfortunately I couldn't sync with your proposed X conf race :-)
but thanks for the solution.
It worked, but only on one of my hosts, where I start X from a tty shell with startx.
On the other Xfce host, with xdm as display manager, it didn't work.
I will try to read the xdm configuration.
I think it's :
/etc/X11/xdm/Xservers
:0 local /usr/bin/X :0 vt7 -nolisten tcp
Changing -nolisten to -listen became effective only after restarting the whole system. Logging out of XDM and
logging in again didn't work.
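On Devuan, xdm runs under sysvinit, so restarting the xdm service should be enough to respawn the X server with the edited line, without rebooting the machine. A sketch, assuming the stock init script and the edit described above:
# /etc/X11/xdm/Xservers after the edit
:0 local /usr/bin/X :0 vt7 -listen tcp
# restart xdm so it re-reads Xservers and starts a fresh X server
(root@xdm-host) # service xdm restart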
Also, since I started this thread, it seems appropriate to quote the security note from /etc/X11/xdm/Xservers
and remind fellow Devuan readers that what drives me is curiosity to learn some basics of how network
transparency works with X.
# - SECURITY NOTE: Always pass the "-nolisten tcp" option to the X
# server, as shown in the examples below, unless you know you
# need the X server listening on a TCP port. Omitting this
# option can expose your X server to attacks from remote hosts.
# Note also that SSH's X11 port-forwarding option works even with
# X servers that do not listen on a TCP port, so you do not need
# to remove the "-nolisten tcp" option for SSH's benefit.
Speaking of 'security', in the context of X there is more fine-grained control available than the xhost + that I tried for experimentation's sake.
So I think that ssh forwarding could be overkill for a home LAN. I guess restricting access to the LAN hosts could be a better and less computationally intensive approach. Last, I prefer the term 'workflow isolation' to 'security'. Security makes me think of badass criminals and police, but when I work at my table, 'security' for me means not letting other family members mess with my workspace. On the other hand, I may set aside a space where someone can lay down a snack or a glass of water. (I wonder if that is a part of the desktop metaphor that was missed in the 80s.)
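As an example of that more fine-grained control, instead of disabling access control entirely with xhost +, access can be limited to the single family host. A sketch (use xhost +inet:192.168.2.11 if the bare address form is not accepted):
(chomwitt/enous) $ xhost -                  # re-enable access control
(chomwitt/enous) $ xhost +192.168.2.11      # then allow only familypc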
Home Lan: hostname: enous (192.168.2.75) / user : chomwitt
hostname : familypc (192.168.2.11) / user : alex
Both run devuan/daedalus.
We'll need the deb package x11-apps for xeyes. (@)
(chomwitt/enous) $ xhost +
access control disabled, clients can connect from any host
(alex/familypc) $ xeyes -display 192.168.2.75:0.0
Error: Can't open display: 192.168.2.75:0.0
The same happens when trying to start xeyes on enous and use familypc as the X display.
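A likely cause of the 'Can't open display' error is that the X server was started with -nolisten tcp (the default), so nothing is listening on TCP port 6000 (the port for display :0). One way to check, on the host that should accept the connection (a sketch; ss comes from the iproute2 package):
(chomwitt/enous) $ ss -ltn | grep :6000    # no output means X is not listening on TCP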
In a funny twist of personal perspective on a very influential dark shadow of politics and cultural differences in libreland (with various recent news tending to reinforce that view, leading to the collapse of the meritocracy camp :@1), here is a quote from a 2000 book on X giving an initial description that has a Unix-philosophy aura:
X is a method for representing graphics operations as a stream of data, suitable for use as a network protocol.
(The Concise Guide to XFree86 for Linux, by Aron Hsiao)
Contrast this with an even older book, The Joy of X from 1993 (hitting the all-time high ceiling of catchy promotional titles!!), which starts its introduction to X by focusing on the window nature of X:
X lets you run many simultaneous applications on your display, each with one or more windows of its own.
So, forgetting for a moment that it is named X, the question is: is that idea (the networked graphics-ops stream :-) ) worth existing as a libre alternative? (If yes, why is it the least forkable idea in libreland?) And is Wayland an incarnation of the same core idea, and if not, what is Wayland's core idea?
Later addition :
According to the XLibre maintainer and initial forker, Wayland's core focus was the composition component of the display server stack.
Note that Wayland itself is only about surface composition, nothing more (plus a little bit input routing). It was created as an experiment to explore how future composition component in the Unix/X11 stack could perhaps look like – the idea of building whole desktop directly ontop of it (without X) came much late.
(Enrico Weigelt, interviewed by Felipe Contreras, 06/2025)
--------------
@Question of Felipe Contreras on X11 future on Xorg mailing list (7/6/25)
@How X started. (An effort of mine to see the development of X from the view of the influence of bigger organizations.)
@Wayland's creator interview at FOSDEM 2012.
@Wkp/Wayland (protocol)
Thanks @fsmithred @ralph.ronnquist @golinux for giving us feedback on the core Devuan infrastructure.
So there are:
core:
pkgmaster.d.o
git.d.o
files.d.o
keyring.d.o
bugs.d.o
ci.d.o (jenkins)
devuan.o
forum
wiki
newsgroups
outer rim:
mirrors
dns service?
irc channels
Is jenkins.devuan.dev part of the 'core' Devuan infrastructure? And what does it do?
Also, I think keyring.d.o is in the core.
I don't think IRC can be considered part of the infrastructure of Devuan, or of any distro for that matter; it's just a handy little outside utility, a simple courtesy that may be helpful.
Very interesting. What is considered part of the Devuan project infrastructure?
So what would you call an effort to equip some handy tools with automation hooks so that a robot can use them?
An Automation Interface Hooking System? It's not that the system (the tools) didn't have access points before; they just got additional ones so they could be used in a different context. Was a timesharing system shell-less before RUNCOM?
So the 'shell' (in computing) is a special kind of human interface and mechanism that allows creating automated workflows out of already existing processes.
So, speaking generally, we want programs to have hooks and interfaces that allow both interactive and automated use. Why an effort to achieve that would be called a 'shell' makes no sense to me. The subtitle of Louis Pouzin's paper, 'A Global Tool for Calling and Chaining Procedures in the System', makes more sense.
What does 'rant' mean?
Reading multicians.org/shell, and especially the linked shell paper, it seems that in the early 60s, in the timesharing era when the 'shell' concept emerged, it wasn't about external system aspects or the system's outer layer encircling the kernel. It seems to me it was an effort to allow calling 'commands' (user-initiated programs from a console) from another program.
I am not sure why the word 'shell' was chosen. But from reading the shell paper, the idea of creating an outer interface to the user doesn't seem plausible to me. The way the ideas are presented, it's like building a system that merges interactive and automated use of routines. The word 'interface' is used, but I think it refers to the user.
Then in 64 came the Multics design time, in which I was not much involved, because I had made it clear I wanted to return to France in mid 65. However, this idea of using commands somehow like a programming language was still in the back of my mind. Christopher Strachey, a British scientist, had visited MIT about that time, and his macro-generator design appeared to me a very solid base for a command language, in particular the techniques for quoting and passing arguments. Without being invited on the subject, I wrote a paper explaining how the Multics command language could be designed with this objective. And I coined the word "shell" to name it. It must have been at the end of 64 or beginning of 65.
Louis Pouzin on the Shell origins.
4.1. We may envision a common procedure called automatically by the supervisor whenever a user types in some message at his console, at a time when he has no other process in active execution under control. This procedure acts as an interface between console messages and subroutines.
The purpose of such a procedure is to create a medium of exchange into which one could activate any procedure, as if it were called from the inside of another program.
(Louis Pouzin, The SHELL: A Global Tool for Calling and Chaining Procedures in the System)
So you have interactively called programs (commands), and you want to create a way to chain them, to use them as parts of another program. Isn't that a description of today's 'shell scripting'? In that context, what we call a 'shell' started as a way to make user-interactive programs accessible to automation, by letting them become parts of a larger program. That is not 'shell' in the sense of outer-layer semantics.
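In fact, what Pouzin describes maps almost one-to-one onto a modern shell pipeline, where interactive commands are chained into a single larger 'program'. A trivial sketch:
$ ls -l | grep '^d' | wc -l    # three interactive commands chained together to count directories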
IRC is useful as part of the Devuan project because it tries to fulfil a rather difficult promise, the same promise that LLM tools try to satisfy: the promise of helping you as quickly as possible, 'now'. And emergencies are issues that do have that time-urgent aspect.
If 'shell' means the outside, exterior interface of a system to its user, then any user program could be seen as 'shell'-like.
I agree that CLI-TUI Shell is more suitable.
Shell: exterior interface
UI: interface to whom? the user
CLI: an interpreter of commands
But I still think this misses the coordinating, orchestrating part of setting up and running workflows using a computing system's resources.
I think that 'shell' is a semantically coarse word highlighting semantics of enclosure, protection (maybe thinness) that stuck, and later its usage was reinforced by the proliferation in Unix textbooks of the classic image of the shell encircling the kernel.
We could of course still use the word, but I think the semantics of 'shell' do not align with the way we use the process called 'shell'.
The 'shell' is the process that a computational system presents to its human user by default.
A user can use the 'shell' to set up workflows of processes. A user can also automate workflows of processes by scripting them.
So I think that usage entails semantics of control, management, coordination, language interpreter, administration, human interface, mediator.
In that context even 'CLI' seems semantically more correct, but it is still missing some central semantics.
Surely the 'shell' does NOT encircle the 'kernel'. A user process started by the shell, or started automatically when the system boots, is also close to the kernel services.
So a 'shell' seems to be a user process best suited to a task that others are not: workflow setup, management and control.
A recent family issue made me visit hospitals very frequently.
And yesterday an issue popped up on irc#devuan. A Devuan friend had a strange Xorg issue with his monitor.
Immediately two other friends tried to help him.
My brain connected some neuro-dots, and I thought the situation reminded me of the emergency department of
a hospital. The first step in Greek hospitals is 'διαλογή', which translates directly as selection (triage). After
a first examination it is determined to which part of the hospital the patient will be delivered.
(But the very first step is giving your social security number to the front desk.)
(After triage the front desk gives you a plastic bracelet with your name and a tracking id.)
So I can't help wondering: if IRC is like the emergency department, could a user-patient be handed over to the forums? Or the mailing list? Or to another IRC channel? And what about issue tracking?
(I think it would be helpful to be able to create new temporary IRC channels at will.)
The way IRC works now is like an emergency department, but one assuming that the issue can be solved (or not) in that 'compartment' alone,
and only by the brave friends who are on the front line for the most hours!
But I think it's possible that the broader Devuan community could have helpful feedback (though maybe on different timeframes) yet is unaware of the emergencies happening. And even if the issue is successfully dealt with, it seems to me that the incident (as generated information) stays contained in the IRC logs and thus doesn't make waves in the broader community, which could (when filtered) help Devuan adapt and change.
Of course, losing a patient means many things (mostly negative :-) ), like diverting them to more web searching.
Also, I couldn't help but wonder whether a hospital's building structure imposes a certain form of triage and treatment, when it might instead be better for a patient to be assigned, perhaps in parallel, to more than one 'clinic', forcing those doctors to collaborate and move near the patient rather than the patient being assigned to only one clinic. So the networked structure of information flow that would help a patient more is misaligned with the brick-and-mortar structure.
context : devuan packages / nvidia-driver @
devuan wiki / nvidia gpus @
devuan forum / Installation > How to install nvidia drivers in daedalus? @
debian wiki / NvidiaGraphicsDrivers @
wikipedia / List of Nvidia graphics processing units @
nouveau / codenames @
debian package tracker / glmark2 @
After installing the nvidia driver on Daedalus I took the chance to run a couple of benchmarks
to see how the libre nouveau driver performs relative to the proprietary nvidia GPU driver.
I used two benchmarks: glmark2 and Unigine's Valley 2013. (Superposition 2017 would not run with nouveau.)
$ inxi -G
Graphics: Device-1: NVIDIA GM206 [GeForce GTX 960]
Unigine Valley Benchmark 1.0
System
Platform: Linux 6.1.0-33-amd64 x86_64
CPU model: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (3597MHz) x8
GPU model: NVIDIA GeForce GTX 960 PCI Express 535.216.01 (4096MB) x1
Settings
Render: OpenGL
Mode: 1440x900 fullscreen
Preset: Custom
Quality: High
Results with the proprietary nvidia driver:
FPS: 99.8
Score: 4176
Min FPS: 43.3
Max FPS: 167.8
Results with nouveau:
FPS: 10.8
Score: 452
Min FPS: 7.5
Max FPS: 16.5
glmark2 Score (nouveau): 842
glmark2 Score (proprietary nvidia): 17719
Something else that I would like to benchmark is the time it takes desktop processes to open their windows.
It seemed to me that with nouveau the windows took more time to appear.
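A very rough way to compare this (just a sketch; it measures launch-to-exit rather than strict time-to-first-frame) would be to time a lightweight X client that quits as soon as its window has been mapped and its command has run, once under each driver:
$ time xterm -e true    # compare the wall-clock ('real') time under nouveau and under the nvidia driver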
PS: I align here with a mainstream mentality that focuses on certain aspects of software use. But from another
libre-distro-community perspective, thinking in terms of community workflow integration and security, not to mention
of course the ability to study and alter the code, nouveau would be the better choice. Of course I can't help but
wonder why some libre projects like the X server and nouveau seem so sterile in terms of forks, in contrast with other
areas like window management. My first guess would be that the closer you get to the hardware, the
less flexible a programmer becomes.
I disabled Secure Boot in my ASUS BIOS and installed nvidia-driver again. I again got the error with nvidia-persistenced, but after a reboot the nvidia driver seems, so far, to work fine.
$ sudo nvidia-detect
Detected NVIDIA GPUs:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
Checking card: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
Your card is supported by all driver versions.
Your card is also supported by the Tesla 470 drivers series.
It is recommended to install the
nvidia-driver
package.
In my first attempt to install the nvidia-driver:
$ sudo apt-get install nvidia-driver
...
Setting up nvidia-persistenced (535.171.04-1~deb12u1) ...
Starting NVIDIA Persistence Daemon
nvidia-persistenced failed to initialize. Check syslog for more details.
invoke-rc.d: initscript nvidia-persistenced, action "start" failed.
dpkg: error processing package nvidia-persistenced (--configure):
installed nvidia-persistenced package post-installation script subprocess returned error exit status 1......
nvidia-persistenced
E: Sub-process /usr/bin/dpkg returned an error code (1)
...
I tried to follow stopAI's idea from the thread 'nvidia-persistenced failed to initialize'.
$ sudo apt-get purge nvidia-*
$ sudo apt install --no-install-recommends nvidia-driver
$ sudo dmesg | grep nvidia
[ 0.648792] udevd[160]: Error running install command 'modprobe -i nvidia-current ' for module nvidia: retcode 1
I also followed the Debian wiki guide @(Nvidia Proprietary GPU Drivers).
There is a section there about enrolling your Machine Owner Key (MOK) in order to use DKMS modules.
Is that relevant on Daedalus if Secure Boot is enabled?
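One quick way to tell whether that MOK section applies is to check the Secure Boot state first; if it is disabled, unsigned DKMS modules such as the nvidia one should load without enrolling a key. A sketch (mokutil comes from the mokutil package; with Secure Boot turned off in the BIOS it should report something like):
$ sudo mokutil --sb-state
SecureBoot disabled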