When people notice that they are bored, they stop being bored.
Regarding my last post, Benjamin said to me that "the biggest omission was a solution to the problem". Well, I don't have a solution, not really anyway. But while the topic is fresh, and it seems to have resonated with a few people, I do have some thoughts to share.
I seem to recall Christian complaining about the new GDM animations on login, that they feel slow. Well, that's because we have a static conception of the interface: that things are static by default, and that motion is the abnormal case. This is a mindset that distorts how we make programs (static, with sprinkled animation), and one that does not reflect the way that hardware works these days.
Three years on, Jon Smirl's breakdown of the Free graphics stack is still relevant. Go read it! But instead of his conclusion ("make free applications faster through a new X server"), I would draw the line differently: let us make our applications with GL. Then the applications that we make will be new, 100 times a second.
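To make that concrete, here is a minimal sketch (in Python, with invented names) of what "new, 100 times a second" means: instead of drawing once and patching up damage as events arrive, every visible property is recomputed each frame as a pure function of the clock.

```python
def ease_out(t):
    """Cubic ease-out: fast start, gentle landing, for t in [0, 1]."""
    u = 1.0 - t
    return 1.0 - u * u * u

def frame(now, start, duration, x_from, x_to):
    """Compute a property for one frame purely from the clock.

    Nothing is "animated" as a special case: every frame, every
    property is a function of time, and the scene is reborn.
    """
    t = min(max((now - start) / duration, 0.0), 1.0)
    return x_from + (x_to - x_from) * ease_out(t)

# At 100 frames a second, a 0.3 s slide from x=0 to x=200
# yields a fresh position on every tick:
positions = [frame(i / 100.0, 0.0, 0.3, 0.0, 200.0) for i in range(31)]
```

The point of the shape, not the particular easing curve: once the whole interface is a function of time, "animation" stops being an add-on and becomes the default.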
This hertzian rebirth liberates the computer to react to us, and to anticipate our desires.
For example, as the user starts to focus on an actionable surface, that surface might become more prominent at the same time as other, related actions present themselves. To take a most basic example, if you are typing in a word processor, the mouse cursor goes away. (Or can be made to do so.) If you then grab the mouse again, you probably want to click something or other; yet the clickable surfaces are present even when typing, unnecessarily, and are equally small once you do grab the mouse.
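As a sketch of the idea (the policy and the numbers are entirely invented, not any toolkit's actual behaviour): the visibility of clickable chrome can itself be a function of the input mode, fading out while the user types and back in as the mouse wakes up.

```python
def chrome_opacity(typing, seconds_since_mouse_grab):
    """Visibility of clickable chrome under a hypothetical policy:
    invisible while the user types, fading back in over 0.3 s once
    the mouse is grabbed again."""
    if typing:
        return 0.0
    return min(seconds_since_mouse_grab / 0.3, 1.0)
```

In a per-frame world this is trivial to express; in a damage-and-repaint world it is a pile of timers and special cases.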
Or to take a more familiar example, the "expanding icon" behaviour of the Mac OS dock contextually expands the icon that you are focusing on, and anticipates focus of adjacent choices; and it's something that we still do not have, years later.
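The dock's behaviour is easy to state as a function; here is a toy version (all constants invented) where an icon's size is a smooth bump around the cursor:

```python
import math

def magnify(icon_x, cursor_x, base=48.0, boost=2.0, radius=150.0):
    """Size of a dock icon given the cursor position: `boost` times
    the base size directly under the cursor, falling off smoothly
    to the base size `radius` pixels away. Toy numbers throughout."""
    d = abs(icon_x - cursor_x)
    if d >= radius:
        return base
    # cosine bump: 1 at d=0, 0 at d=radius
    bump = 0.5 * (1.0 + math.cos(math.pi * d / radius))
    return base * (1.0 + (boost - 1.0) * bump)
```

Note that "anticipating focus of adjacent choices" falls out for free: icons near the hovered one sit inside the radius and grow part of the way.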
It is for these reasons that I think that a GL-based canvas like Clutter is an excellent option for GTK+. The library provides nouns and verbs that compose to make a dynamic, pleasing interface; and if you are programming in a fast language, you can implement your own nouns and verbs, which then form part of the GL rebirth/draw loop.
Another thing to think about is that the mouse is simply a way of indicating focus and a limited form of selection. Our verbs are point, click, and drag. New input methods are here, or coming soon: things like multitouch, head tracking, and gestural recognition. These input methods do not map exactly to the mouse. In fact, once you have such capabilities, other, more "normal" parts of standard interfaces start to look restricting. Menu bars are made for mice and framebuffers; the panel starts to look superfluous; etc.
Obviously this strategy has problems when it comes to power consumption; I do not know the solution. I assume the OpenGL ES people have thought about this somehow.
Whether or not you think these ideas are good or bad, you can rest assured that they will not come to pass in GNOME as it is now.
The problem, to my mind, is that we have painted ourselves into a corner of serving a fictional clientele that is not ourselves. As Havoc says:
Is GNOME for UNIX and shell users? Is it for Linus Torvalds? Is it for ourselves? Is it for American college students? 35-year-old corporate office workers with an IT staff? And are we willing to tell whoever it's not for to jump in a lake?
...There is a default audience if a project doesn't find a way to choose one deliberately, and it's what I've called "by and for developers." If there's no way to stay out of that gravity, it's better to embrace it wholeheartedly IMO...
I think we should have a project for hackers. A GNOME skunkworks, if you will; a place in which hackers can create something new for ourselves, in which innovation and even failed experimentation is possible and encouraged. If something developed there is useful to a wider public, as we would hope would be the case, then by all means folks can pull it out, and start to put it somewhere more stable.
Note that "for hackers" does not necessarily entail complexity; it entails only that complexity which is necessary. Physics does not strive to make ugly theories, string theory aside; neither do we as hackers, though oft maligned, want such things. Working together, with the best spirit of peer review, we could come out with something with power, and simplicity, and diversity.
Such a project would question everything: what is our interface metaphor? Is it a desktop? Is it a space, like Neuromancer? Is it a set of overlapping planes, like in Minority Report? Is it something else? What is it populated with? How do we compose separate parts into one graphical whole? How do we choose parts of the whole to focus on, to interact with? What is the nature of a part of the whole? How do focus and applications interact? Does the phrase "switching applications" make sense, and if so, how do we do so, or start new applications? Do applications exist in windows? How do multiple users interact with one live system, when we run GNOME on our TV and on our laptop on the arm of the sofa, and using the wiimote as an input device?
I think we can learn a couple of things from the Pyro experiment. In case you don't recall, Pyro was an attempt to bring web developers out of the confines of the browser window, to let them manipulate the whole "desktop". It was a really neat hack. It seems to have failed, though; a year later, I don't hear much about its uptake. At the risk of poking Alex's old wounds, we should probably wonder why.
If I were to guess, I would say that Pyro failed for cultural reasons. Its target "audience" was web developers, and also those parts of GNOME that would be comfortable changing desktop development into web development. But web developers like to work for an audience of users, and of these there were not many.
But more than that, and reasonable people may disagree, I think that Pyro failed because it wasn't exciting to the developers that we do have, to our culture. I don't think this situation was particularly gratifying to Alex, who (I would imagine, I do not know) found more fulfillment in other parts of his life. Ah well. "One must bear a chaos inside to give birth to a dancing star."
Regardless of your judgment on Pyro, I think that there's still loads of interesting work to do on "the client side", not least experiments with the GPU and self-renewing interfaces. So we should embrace the hackers that we do have, and see where that takes us.
Let's take a number of cases of existing applications, and see what they might look like, coming out of the skunkworks.
Marvin is browsing planet gnome. The web page fills up the entire screen. He finds some link of interest, and selects it. The Planet GNOME page seems to recede in distance as the new page fades in to replace the full screen.
He reads for a while, somehow scrolling the page, then realizes he wanted to go back to some other link. He bangs the mouse against the edge of the screen, or gives a certain hand gesture, and the web page shrinks down from fullscreen to just one page in a series of pages fading off into the distance, representing his browsing history. As he runs his mouse over that series, the pages pop up under the cursor, with contents if available. The effect is similar to that of flipping pages in a phone book, or passing the mouse over the Mac OS dock.
Marvin middle-clicks on a link to open it in a new "window". The existing train of web pages slides left, off the screen, leaving space for the new page to fade in. Marvin reads for a while, then decides he wants to get some hacking done. He holds up his hand, pushes back the space, pans his applications around until he is on Emacs, and lets his fingers fall into a fist. Emacs comes close to him and fills him and the screen with its radiant light.
Alternately, Marvin could pan by moving the mouse while holding down the shift key, or by pressing some appropriate key combination that does not conflict with his carefully-crafted Emacs keybindings.
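The trail-of-pages effect in Marvin's browser is, again, just a per-frame layout function. A sketch (all names and numbers invented) might look like:

```python
def history_layout(n_pages, hovered=None):
    """Lay out a browsing trail: each older page recedes and shrinks,
    and the page under the cursor pops back toward full size.
    A layout sketch only; the constants are invented.
    """
    layout = []
    for depth in range(n_pages):
        scale = 0.85 ** depth          # each step back shrinks the page
        x = 60.0 * depth               # and slides it into the distance
        if depth == hovered:
            scale = max(scale, 0.95)   # pop up under the cursor
        layout.append((x, scale))
    return layout
```

Feed the result to whatever draws your textured quads, interpolate between successive layouts with an easing curve, and you have the phone-book flip.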
Erinn, having just finished hacking the good hack, wants to enjoy her photos on the TV in the living room. She sits down on the couch, and grabs the wiimote. She presses a button to open a media navigator, showing her all of the media shares on the home LAN, neuromancer-style. She pilots into the living room machine, selects the photos, and begins browsing. The photos are arranged on a timeline rolling into the distance; she focuses in on the set from last year's GUADEC, and begins flipping through them.
OK, so you see where I want to go with this. Try it yourself, perhaps taking "video chat" as an example. How are calls received? How are they made? How does video chat interact with your photo app, with the rest of your applications?
My point is that although we have reached a kind of local equilibrium with the "desktop" metaphor, the existence of the GPU gives us all kinds of possibilities, unique capabilities of the box connected to the monitor. My ideas (and these are not all mine, I stole most of them from folk at work) are those that I see from here, which is not very far. We can do better than this.
I recognize that these are words from a marginal player in GNOME, and a Schemer at that. But I think that somehow, a skunkworks is how GNOME will "revision up", whether it comes from the free community, or whether it is dropped on us from some corporate lab.
The concrete step that interested hackers should take is to learn GL, to play around with it. Use the texture_from_pixmap extensions to pull in a WebKit window or two, toss them around. Use Clutter, if that's what you feel like. Build a video chat application (the Telepathy people swear it's possible), build a new window manager (with tight Ruby bindings), make a mockup. Then if your mockup is inspiring, and organic enough to live in, we can start a GNOME skunkworks to play around in.
But above all, make it for you. Pay no attention to mental questions of how your mother would see this; those questions will fix themselves in time. A focus on beauty and simplicity and power cannot fail to make something interesting. Code against boredom!
It is with some trepidation that I go to buy my ticket to this year's European GNOME conference, GUADEC.
Decadence \De*ca"dence\, Decadency \De*ca"den*cy\, n.
[LL. decadentia; L. de- + cadere to fall: cf. F. décadence. See Decay.]
A falling away; decay; deterioration; declension. "The old castle, where the family lived in their decadence." --Sir W. Scott.
Take a look at the list of slated talks. What is your general impression?
Mine is of a large project in a state of marginal returns, in which a larger and larger part of the effort goes to maintenance. On the one hand you have the large deployments, the integration with other software projects. On the other hand, the new developments that we have are very careful not to bite off too much: a printing dialog; another revision of Ekiga; a new image library. (Ed: Ekiga was probably not a good example of this.)
The problem, as I see it, is that GNOME is in a state of decadence -- we largely achieved what we set out to achieve, insofar as it was possible. Now our hands are full with dealing with entropic decay. Take, for example, Evolution's random walk to improvement. In most releases it's better, in a few it's worse, but basically it still works fine, and has been that way for the last three or four years.
It's like, welcome back to 1984's Macintosh plus interweb. We did it!
Seriously though, it does not seem to me that GNOME is on a healthy evolutionary track. By that I mean to say that there is no way there from here, if "there" is universal use of free software, and "here" is our existing GNOME software stack. The evolutionary thing to do would be to do something web-like, because that's where all of the programmers are these days. But that's not part of our culture. Until recently, with WebKit/GTK, it wasn't part of our software stack either -- all of the new web platform bits were dribbled to us over the wall from Mozilla, or embedded within GNOME as Firefox.
The other side of that is that while the web will be a core part of the computing future, it's not clear that swallowing it wholesale is the best strategy for client development. There are too many things that local computing offers: other software paradigms (emacs, unix, independence), unmediated input (sound, video, alternate input devices), and direct access to powerful output devices (control of part or all of a screen, access to the underused GPU).
But even if we eschew "going with the flow"-style evolution, it's not like GNOME is on a revolutionary track which will win in the end with its compelling UI or programming-linguistic metaphors. The screen is still constructed as a static landing strip on which the mouse pointer might alight, an array of possibilities necessarily constricted by decontextualized space. The metaphors are the same: file, folder, desktop, even as these things cease to exist for many people. And technologically, we don't even have a way of considering how the visual elements of space might be anything other than static, much less have any way of interacting with those elements other than the impoverished point and click. What we're left with is the GUI equivalent of chartjunk.
There are exceptions to this story. There's Clutter, which I expect will be the "way out" both for GTK+ and for GNOME. (I know there are technological differences with other canvas models, but at least they have the hackers and the maintenance resources.) There's Moonlight, which is interesting and hacked by very smart folk, but whose fortunes are too bound to Microsoft. There are heroic retrofix efforts like MPX, not really a part of GNOME. But other than that, we have the decay of slavish adherence to the HIG, the logout dialog, the wallpaper chooser, the last-percent efforts of refining an increasingly irrelevant stack of software.
The GTK+ maintainers are well aware of the decadent state of GTK+, and are moving as much as possible to plug the leaks. But it is no longer a nimble codebase, and will take at least 6 and possibly 12 months before a 3.0 release can come out. And that's just stopping retrograde motion; actual construction must take place outside of the "core" until the core is ready for it.
This disenchantment is personal as well: among other things, I've spent thousands of hours on bindings to GNOME libraries, and now, when I am ready to make an API- and ABI-stable release, I just don't feel like packing another button into another hbox.
Anyway, I'm buying my ticket, but mostly for the hallway track -- GNOME folks are smart and kind, and I want to see what's going on, what people are really thinking about. Istanbul ho!