As you have seen in previous posts (like Beyond form-factor differenciation, challenges of hybrid devices), computing devices nowadays come in a wide variety of form factors. The two big categories are those that use a keyboard and mouse for interaction (desktops, laptops, netbooks) and those that use touchscreens (smartphones, tablets, etc.). Hybrid devices with both keyboards and touchscreens exist as well (the Asus Transformer, the Lenovo Xt series).
Both Microsoft and Apple, as well as the Linux world (somewhat led by Canonical), have started to evolve to take these form factors into account (see Desktop UI part 1: Windows or the meteor storm), and most of the time the evolution is painful, especially with Windows 8 and its schizophrenic split between the desktop and Modern UI (Metro) interfaces.
The fact is that in order to run on these new devices, these companies and organizations tried to evolve, coming either from the desktop world or from the mobile world. But sometimes evolution is not enough. Microsoft failed to build a unified UI for mobile and desktop, since Windows 8 is basically the desktop and Modern UI put together without any glue, while Apple is proceeding more carefully, bringing mobile concepts to its desktop (Dashboard, the Contacts app, etc.).
What is interesting today is that, alongside this explosion of form factors, many graphical toolkits are maturing, which makes designing for these heterogeneous platforms easier. Take HTML5 and all the helper toolkits around it (jQuery, Enyo, etc.), or Qt, with the impressive QML 2 coming soon.
With the problem of a UI that runs on both a "computer" and a touch device to solve, and with the help of these toolkits, I started brainstorming about what this UI should be.
This article shows the first ideas I had, both technical and design-related, and a bit of implementation.
Navigation and toolbars
My first design principle is "focus on content". Users should never have to search for the content they are looking for, and for that, the tools and other chrome should not interfere with the content. As in many UI designs, toolbars are pushed to the borders, while the center area is used only to display content.
On the desktop, the biggest problem is that navigation controls are also pushed to the borders. Back buttons, OK and Cancel buttons, close and minimize buttons, as well as toolbars, end up far from the center, where the content is (and where the mouse cursor spends most of its time). To reduce mouse travel, I think there should be a global context menu, accessible with a right-click anywhere in the content area, that offers the same actions as the surrounding toolbars (except for navigation features).
On mobile devices, this problem matters less, since moving a hand around and touching only the controls is easy. The problem there is space: small mobile devices cannot afford to display many controls (buttons, toolbars). The idea is to minimize the number of tool buttons and add a context menu button that groups all the remaining actions.
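The common thread in the two paragraphs above is that both form factors can draw from a single list of actions: on the desktop, the toolbars and the global right-click menu expose everything, while on mobile only a few primary actions get tool buttons and the rest fall into the context menu button. Here is a minimal language-agnostic sketch of that idea in Python (the real project would use Qt/QML; all class and method names here are hypothetical):

```python
# Sketch of a shared action model feeding both toolbars and context menus.
# The same actions back every surface, so nothing is duplicated by hand.

class Action:
    def __init__(self, name, primary=False):
        self.name = name          # label shown to the user
        self.primary = primary    # primary actions get a tool button on mobile

class ActionModel:
    def __init__(self, actions):
        self.actions = actions

    def toolbar(self, mobile):
        # Desktop toolbars show everything; mobile keeps only primary actions.
        if mobile:
            return [a.name for a in self.actions if a.primary]
        return [a.name for a in self.actions]

    def context_menu(self, mobile):
        # Desktop: the same actions as the toolbars, without the mouse travel.
        # Mobile: only the actions that did not get a tool button.
        if mobile:
            return [a.name for a in self.actions if not a.primary]
        return [a.name for a in self.actions]

model = ActionModel([Action("Share", primary=True),
                     Action("Copy"),
                     Action("Settings")])
print(model.toolbar(mobile=True))       # ['Share']
print(model.context_menu(mobile=True))  # ['Copy', 'Settings']
```

The point of the sketch is the single source of truth: adding an action once makes it appear in the right place on both form factors.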
For navigation on a mobile device, I think the best approach is to push the "page stack" paradigm as far as possible and remove the "back" button that is present in nearly every mobile UI and wastes space. As described in Some quick UX suggestions for Jolla, I think a swipable page stack is a good way to navigate on a mobile device (whether smartphone or tablet): moving back is done by swiping from left to right.
We then have two very similar UIs, both based on the page stack principle. The only difference is that on mobile, going back is a swipe, while on the desktop it is a button.
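The stack itself is the same on both form factors; only the input that triggers "back" differs. A minimal Python sketch of the behavior described above (hypothetical names, not the actual winter code):

```python
# Sketch of the page stack shared by the mobile and desktop UIs.
# "back" is the single navigation primitive: bound to a left-to-right
# swipe on mobile and to a button on the desktop.

class PageStack:
    def __init__(self, root):
        self.pages = [root]

    def push(self, page):
        self.pages.append(page)

    def back(self):
        # Never pop the root page; backing out of it would leave nothing.
        if len(self.pages) > 1:
            self.pages.pop()

    @property
    def current(self):
        return self.pages[-1]

stack = PageStack("Inbox")
stack.push("Thread")
stack.push("Attachment")
stack.back()          # swipe on mobile, back button on desktop
print(stack.current)  # Thread
```

Because both front ends call the same `back()`, the navigation model stays identical across devices.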
Dialogs are a bit trickier, since they often have two buttons, one to accept and one to reject. I decided that dialogs will be pages with vertical accept and reject buttons on the left. On the desktop, nothing changes: clicking the corresponding button accepts or discards the dialog page. On a mobile device, the user performs the swipe gesture starting from the action to trigger (the buttons are vertical and on the left precisely so they are easy to catch with the swipe). The gesture is then the same as for going back.
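In other words, a dialog is just a page whose back gesture carries a result, decided by which edge button the swipe starts from. A small sketch of that idea, under the same assumption as above that names are hypothetical:

```python
# Sketch of the "dialog as a page" idea: two vertical buttons on the left
# edge, and the left-to-right swipe that pops the page starts from one of
# them, which determines whether the dialog was accepted or rejected.

class DialogPage:
    def __init__(self, on_result):
        self.on_result = on_result  # callback receiving True (accept) / False

    def swipe_from(self, edge_button):
        # On the desktop, clicking the same button takes the same code path.
        assert edge_button in ("accept", "reject")
        self.on_result(edge_button == "accept")

results = []
dialog = DialogPage(on_result=results.append)
dialog.swipe_from("accept")
print(results)  # [True]
```

The design choice worth noting is that accepting and rejecting reuse the existing back gesture instead of introducing a new one.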
Multitasking
I think future devices will need multitasking capabilities, and we have already seen nice implementations (in Windows 8, or on the Samsung Galaxy Note and Galaxy Note 2). Indeed, screens as big as those of the iPad, the Galaxy SIII or a laptop need to run more than one task, or screen space is wasted.
However, what is missing on these devices is a proper multitasking view, like the one on the N900 or the N9. While a smartphone might need a separate view to display running applications, on a tablet or a laptop a thumbnail of every open application could be displayed all the time, like the dock on Mac OS or the task manager on Windows.
The winter project
The winter project is the codename I gave to a unified UI for laptops, tablets and smartphones, built with Qt 5 and based on Mer + Wayland. These technologies are maturing fast (Wayland should be stable soon, and Qt 5 will be out at the end of the year) and are very promising, but I still need to learn how to put all the pieces together in order to write something that works well.
The winter testbench is the first application I am writing for the winter project. It is used to test my ideas, but it also includes the first graphical components, like buttons and the stack of pages that will be the basis of winter. Currently it works well, even if most of what I coded is dirty hacks.
Maybe when more ideas are implemented, I will be able to release a version for you to test as well.