So far we can only consider DBus as a sort of local UDP transport: if
all goes well messages get to their destination, but delivery is not guaranteed.
The argument here is that presently what we tell application authors is much more like TCP than UDP; if we allowed distributors to restart it in %post or the like automatically on upgrades, then we either have to change our guarantee, or try to "hide" the fact that the bus gets restarted under the covers.
I think the only sensible solution is the latter. Which is certainly *possible*, just like how everything short of the halting problem is possible; but it would not be trivial.
For many likely classes of DBus flaws, porting the Ksplice (http://web.mit.edu/ksplice/) style approach would probably be easiest. But to handle the general case, I can imagine a system where we send a special message to all clients like org.freedesktop.Local.Restart and this causes them to enter a mode where they queue pending messages, waiting via inotify for the socket to reappear. The bus itself would try to flush all pending messages and save the current map of connections->service names (and other state I'm not thinking of right now) to JSON/XML/whatever.
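To make the client side of that concrete, here is a minimal sketch of the queue-and-wait behavior. Everything here is hypothetical (the class name, the restart-signal hook, the message format are all made up for illustration), and a polling loop stands in for a real inotify watch to keep it short:

```python
import os
import time
from collections import deque

class RestartAwareClient:
    """Hypothetical sketch: on an org.freedesktop.Local.Restart
    signal, queue outgoing messages until the bus socket
    reappears, then flush the queue in order."""

    def __init__(self, socket_path):
        self.socket_path = socket_path
        self.queue = deque()
        self.restarting = False

    def on_restart_signal(self):
        # The bus told us it is about to restart: stop sending,
        # start queueing.
        self.restarting = True

    def send(self, message):
        if self.restarting:
            self.queue.append(message)
            return None
        return self._transmit(message)

    def _transmit(self, message):
        # Placeholder for a real write to the bus socket.
        return ("sent", message)

    def wait_and_flush(self, timeout=30.0, poll=0.1):
        # A real client would set an inotify watch on the socket's
        # parent directory instead of polling like this.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(self.socket_path):
                self.restarting = False
                flushed = [self._transmit(m) for m in self.queue]
                self.queue.clear()
                return flushed
            time.sleep(poll)
        raise TimeoutError("bus socket never reappeared")
```

The tricky part this glosses over is that "pending messages" includes in-flight method calls awaiting replies, not just unsent ones.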
Then on startup you'd need to wait for all of the previous clients to connect, probably with some timeout; I can't think offhand of how to make this non-racy. After that we need to handle anything that changed in the meantime, like clients having exited and thus lost their service name (this will happen for sure if we make other software restart on upgrade like setroubleshoot does). So we compute that delta and then send the relevant signals off to clients.
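The delta computation itself is just a diff between the saved name map and the one rebuilt after clients reconnect. A sketch, assuming a plain name -> connection-id mapping (the function and map shape are illustrative, not anything the bus actually exposes):

```python
def compute_name_delta(saved, current):
    """Hypothetical helper: diff the service-name map the bus
    serialized before restarting against the map rebuilt after
    clients reconnected.  The new bus would then emit the
    equivalent of NameOwnerChanged for each lost or changed name."""
    lost = {name: owner for name, owner in saved.items()
            if name not in current}
    changed = {name: (saved[name], current[name])
               for name in saved
               if name in current and saved[name] != current[name]}
    appeared = {name: owner for name, owner in current.items()
                if name not in saved}
    return lost, changed, appeared
```

The diff is the easy half; the race is deciding *when* the reconnect window is over and the "current" map is authoritative enough to diff against.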
For someone who knew the code and was an A+ hacker it might only be a two week or so job, though to actually know this worked you'd have to spend a lot of time creating test cases.
What was the cost/benefit analysis in this case?
The original cost/benefit was "Absolutely nothing happens when I put my USB key into a Linux desktop" and "The networking system is a static mess of shell script that we edit via UIs run as root" =)
Given that some people are thinking of using NM by default on servers too,
this issue becomes more critical; servers do serve clients,
Let's back up a second; if our overall goal is to make applying security/important-reliability updates happen more transparently, I think the best bang for the buck is going to be Linux. For example, we could spend the engineering time figuring out how to get Ksplice (http://web.mit.edu/ksplice/) to work under the RPM hood.
DBus has so far had a pretty good security and reliability track record; while it's not simple software, it has simple goals and this has limited complexity. Something like the Linux kernel clearly has a much bigger goal and so is order(s) of magnitude more complex and with this complexity has come the concomitant security/reliability issues.
And if I had the ability to herd security/reliability cats, I'd have them spend time on Firefox and try to take what Dan Walsh has been doing even farther: break it up into multiple processes with locked-down security contexts, and evaluate changes to the desktop to better handle the concept of processes with different privileges, for example.