I think git is much better than hg, but most of the complaints in the post are fair: git's UI is truly awful. IMO, the two killer features of git are the index and content-based tracking.
First, the index makes dealing with the OP's situation much better once you understand it. In hg (or any other VCS I know of), when you merge something and there is a conflict, every change ends up in your tree, and the VCS gives you no way to tell what was merged correctly from what wasn't: hg diff shows changes from both merged and unmerged files. Thanks to the index, git diff shows only the unmerged stuff. As you fix conflicts and stage files with git add, they stop showing up in git diff (but remain visible through git diff --cached). The index has a steep learning curve, though.
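Concretely, a conflicted merge looks something like this (the branch and file names are invented):

```shell
git merge feature        # stops: conflict in foo.c
git diff                 # shows only the conflicted hunks
# ...edit foo.c to resolve the conflict...
git add foo.c            # tells the index this conflict is resolved
git diff                 # foo.c no longer shows up
git diff --cached        # shows what the merge commit will contain
git commit               # records the merge
```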
The other killer feature is code tracking: git blame -C -M is extremely powerful. It can tell you which changes came from which file (through heuristics, so it also works for code converted to git, e.g. via git-svn). I explained this in more detail here: http://cournape.wordpress.com/2009/05/12/why-people-should-s....
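For instance (the path is just an example):

```shell
# -M follows lines moved within a file; -C additionally looks for lines
# copied or moved from other files changed in the same commit (repeat
# -C up to three times to widen the search, at some CPU cost):
git blame -C -M src/util.c
```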
I think in the end git is actually simpler than hg: the UI is awful, but the underlying model is simple. For example, the branching model in git is simpler than in hg, where you have branch-through-clone, bookmarks, and named branches created by hg branch ( http://stevelosh.com/blog/2009/08/a-guide-to-branching-in-me...). This is maybe a matter of personal opinion, but I hate simple version numbers in a DVCS (I switched from bzr to git because I wasted a lot of time with bzr's so-called simple numbers). With a DVCS, a consistent simple numbering scheme is impossible: at some point you will have several branches where the same number refers to different commits. One thing to understand is that you almost never need the raw id in git, because most commands understand many different syntaxes: HEAD^, HEAD~, branch names, tags. If I need to refer to a commit more than once, I tag it.
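A few of the syntaxes most git commands accept (the branch and tag names here are invented):

```shell
git show HEAD^               # first parent of HEAD
git log HEAD~3..HEAD         # the last three first-parent commits
git diff master mybranch     # compare two branches by name
git tag before-refactor      # name a commit you'll need again...
git diff before-refactor     # ...and refer to it by that name later
```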
One thing git got wrong IMO is fast-forwarding when pulling: it loses branch information and makes the history harder to understand (which complicates life for bisect or continuous integration). I changed the pull command to be non-fast-forward by default in my setup.
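(These days the same effect is available through plain config rather than a wrapper, I believe:

```shell
git config pull.ff false   # git pull always records a merge commit
```

)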
Going back to svn is insane if you ask me, at least for the usual source-code-only usage (DVCSes currently suck at asset management). I agree with the OP that branching is sometimes overused by git users, but branching for release management, code reviews, etc. has saved me hours of work as a release manager on several middle-sized open source projects (through git-svn).
Git got the low-level stuff right, but I think we have barely seen what's possible with DVCS. Bug-tracker integration, code-review integration, etc. are still in their infancy.
> One thing git got wrong IMO is fast-forwarding when pulling: it loses branch information and makes the history harder to understand (which complicates life for bisect or continuous integration). I changed the pull command to be non-fast-forward by default in my setup.
That sounds pretty terrible. I would certainly refuse changesets from you in any OSS project I manage if you sent me changes with artificial merge commits introduced every time you synced with upstream.
You don't lose any information with a ff. There's no information to lose -- you're just not recording a commit that has no code changes and only says "and on this date, I grabbed some upstream code."
I've seen that complaint elsewhere, though. I always do pull --rebase (and only take merge commits in exceptional cases) because I'm on the other end of that spectrum. My history is very easy to read. Glad you can bend it to do what you want, though.
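For comparison, the rebase-flavored workflow is just:

```shell
git pull --rebase             # replay your local commits on top of upstream
git config pull.rebase true   # or make that the default for this repo
```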
You do lose useful information, because you no longer know where branches started and ended. For feature branches that matters: it's useful for bisect, and for reverting a whole branch at once. For syncing with upstream, it's indeed not so useful.
Git pull/merge etc. have a --no-ff switch that will do a normal merge even if a fast-forward is possible. You can even configure the default, though I don't remember the config variable.
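I believe the variable is merge.ff (and newer gits have a matching pull.ff for git pull):

```shell
git config merge.ff false   # always create a merge commit, even when a
                            # fast-forward would have been possible
```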
If it fast-forwarded, it's because you haven't added any changes that aren't already upstream. If you're losing information, it's when you considered starting a branch.
The .rpm vs .deb debate is a red herring. The real issue is that when packaging for, say, Debian and RHEL you have to care about different glibc versions, different compiler versions with potentially different ABIs, different filesystem conventions, different post-install scripts, etc. The format in which the files are packaged is the least of the issues. For example, once you have a package for RHEL, packaging for openSUSE is as much of a PITA as packaging for Debian, really.
Once you standardize on the things that make packaging difficult across distros, you basically end up with the same system. I think systems like SUSE's build service are much more useful than wishful thinking about packaging formats.
Yes, the SUSE build service is a very nice tool and much more interesting than a single package format, which wouldn't help much. And if the dream is to ship a single binary package for every distro in the world, it won't work anyway. Well, maybe in theory, but if you mean it seriously, you need to compile and link against exactly the same binaries the target system uses. (Or static linking. Or just believing it will work -- but a serious commercial vendor who wants to provide real user support cannot.) That would mean every Linux system using the same binaries, which is, hm, nonsense.
Meh, from a source point of view, once I have a debian/ directory with a properly set up rules file, building becomes trivial. Then it is a matter of generating a package per distro/version with the right environment. This could very well be done in an automated manner with a few VMs.
I hear what you are saying, but we can reduce the differences between build systems while still using different targets.
I do a lot of work with OpenEmbedded: every distro image you create is for a different hardware platform, using different libraries, different tools, etc., but it is all one unified build system. So I can say "use this version of glibc and put the files in that folder", and with very minor tweaks use a completely different version of glibc and a different file structure for another image. If we had a unified dependency/build system across distros, the contents could be completely different while customization stayed straightforward: if you knew one system, you'd know them all.