
So Michel Dänzer just pushed some patches to enable using glamor from within the xf86-video-ati driver. As a quick recap, glamor is a generic Xorg driver library that translates the 2D requests used to render X into OpenGL commands. The argument is that driver teams then need only concentrate on bringing up the OpenGL stack, and gain a functioning display server in the process. The counter-argument is that this compromise, whilst saving engineering time, penalises the performance of the display server.

To highlight that last point, we can look at the performance of the intel driver with and without glamor, and rendering directly with OpenGL:

glamor on SandyBridge

The centre baseline is the performance of simply using the CPU and pixman to render; above it we are faster, below it slower. The first bar is the performance of using OpenGL directly, which in theory should be the best of all, limited only by the hardware. Sadly, the graph shows the stark reality that undermines using glamor – one needs an OpenGL driver that has been optimised for 2D usage in order to maximise GPU performance on the Xorg workload. Note the areas where glamor does better than the direct usage in cairo-gl? This is where glamor itself attempts to mitigate poor buffer management in the driver.

Enter a different GPU and a different driver. The whole balance of CPU to GPU power shifts along with the engineering focus. Everything changes.

Taking a look at the same workloads on the same computer, but using the discrete Radeon HD5770 rather than the integrated processor graphics:

glamor on Radeon HD5770

Perhaps the first thing we notice is the raw power of the discrete graphics as exposed by using OpenGL directly from within cairo. Secondly, we notice the lacklustre performance of the existing EXA driver for the Radeon chipset – remember, everything below the line implies that the GPU driver in Xorg is behaving worse than could be achieved just through client-side software rendering, and that its RENDER "acceleration" is nothing of the sort. And then our attention turns to the newcomer, glamor on radeon. It is still notably slower than both the CPU and using OpenGL directly. However, it performs very similarly to the existing EXA driver, sometimes slower, sometimes faster (looking at the relative x11perf numbers reveals some areas where the EXA driver could make major improvements).

glamor on Radeon HD5770

Not bad for a first patch with an immature library, and it demonstrates that glamor can be used to reduce the development cost of bringing up a new chipset – yet it does not reach the full potential of the system. Judging by the last graph, one does wonder whether glamor is even preferable to using xf86-video-modesetting in such cases on a high-performance multicore system, for the time being at least. ;-)



  1. There’s one thing I started wondering when reading your sentence about “engineering focus”:

    You spent many, many hours working to improve SNA, and you clearly did a wonderful job. What do you think would have happened if you had spent all those hours working on Glamor and its stack? Do you think Glamor could have been above the “image backend” line on the i915 graphs? Do you think that if you had worked on Glamor for a full year you would have made its performance at least comparable to SNA’s? Even if you think Glamor is “bad by design”, do you think you would be able to fix its design and make it perform better, while still keeping the same advantages (translating everything to GL)? Maybe Glamor just needs some love from a brilliant engineer like you?

    • Ultimately a generic 2D rendering layer over top of OpenGL is not going to achieve the same level of performance as a specialised driver talking directly to the hardware and render manager. Then there are the added complications that both the hardware and the Render protocol exceed the limitations of the OpenGL API. To accommodate that, one would need to extend the OpenGL specification in ever more esoteric ways to handle the eccentricities of individual hardware. Whilst there will be various commonalities between chipsets, the pretence that you have a generic library starts to fade as you apply more and more optimisations for individual drivers and chips. Remember that many of the target drivers will be considered black boxes (think OpenGL drivers for PowerVR), and so the onus will be on glamor to work well on those, rather than on the drivers to improve their 2D performance.

      As it currently stands, I do not think you can write a good driver based on OpenGL. To do something equivalent to SNA would essentially mean rebuilding the DDX inside the OpenGL driver. This is the approach that VMware have chosen to take with their Shadow Acceleration Architecture, building on top of the gallium XA state tracker. However, the primary goal of that driver has so far been to integrate well with a hosted virtual machine and provide accelerated video and OpenGL, not to deliver outstanding 2D performance. Whether someone takes up that challenge remains to be seen.

      • “Ultimately a generic 2D rendering layer over top of OpenGL is not going to achieve the same level of performance as a specialised driver talking directly to the hardware and render manager.”

        Is this due to specifics of OpenGL, or would the same apply to Gallium3D (i.e. the xorg state tracker)?

        For things outside of X (e.g. a wayland native app), what would this specialized driver be? Do we need to write something new, or just take the DDX, claim it’s no longer part of X, and put it somewhere? Would it be merged into cairo (ignoring all the other 2D graphics APIs), into something else, or be its own thing?

        • ickle, Posted July 13, 2012 at 8:38 am (Permalink)

        X and Render have a relaxed approach to hardware limits, putting the burden upon the driver rather than the application. OpenGL takes the opposite approach: it informs the application of the hardware limits, and will throw an error if you exceed them. In any case, the OpenGL driver may fall back to an extremely slow software rasteriser.
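        A toy sketch of the contrast (in Python, with an invented limit standing in for a real glGetIntegerv(GL_MAX_TEXTURE_SIZE) query; this is an illustration of the two philosophies, not how either driver is actually written):

```python
MAX_TEXTURE_SIZE = 8192  # illustrative; a real driver queries glGetIntegerv(GL_MAX_TEXTURE_SIZE)

def gl_style_upload(width, height):
    """OpenGL style: the application is told the limit and must respect it."""
    if width > MAX_TEXTURE_SIZE or height > MAX_TEXTURE_SIZE:
        # exceeding an advertised limit is simply an error for the application
        raise ValueError("GL_INVALID_VALUE: surface exceeds hardware limit")
    return [(0, 0, width, height)]  # fits in a single texture

def render_style_upload(width, height):
    """X/Render style: the driver shoulders the burden, tiling transparently."""
    tiles = []
    for y in range(0, height, MAX_TEXTURE_SIZE):
        for x in range(0, width, MAX_TEXTURE_SIZE):
            tiles.append((x, y,
                          min(MAX_TEXTURE_SIZE, width - x),
                          min(MAX_TEXTURE_SIZE, height - y)))
    return tiles
```

        An oversized pixmap that the GL path must reject outright is something the Render-style driver just splits into as many tiles as needed, invisibly to the client.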

        Often the hardware is much more complicated than can be expressed as a single set of limits, though for obvious reasons you only want to advertise the capabilities of your 3D pipeline to your 3D applications. Yet in some cases the display engine can address much larger screens than the 3D pipeline can, and application surfaces are often far larger still. So in order to avoid atrocious performance in those cases, you need to use all the hardware at your disposal and not just the 3D pipeline. This is a level of intimacy with the hardware that is not possible through the current OpenGL API, and so you would need a set of extensions to expose those choices, or to add a DDX-like state tracker and hook the DDX directly into the OpenGL driver.

        If we look at the API landscape of tomorrow, we have two options. One is a resource-conscious render server used by the majority of clients in the same way as X/Render today. The other is that every client effectively pulls the render server inside itself by linking to Cairo or Qt etc. (which will render directly in the same manner as the render server, either through a custom driver or through OpenGL), and everybody has their own render caches, duplicating resources such as fonts, gradients and glyph caches inside your limited video memory. Though the likelihood is that the key players will implement the rendering as a separate process within their libraries (for safety), and then you have just reinvented the X display server, instantiating one per client!
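        To put rough, purely hypothetical numbers on that duplication argument (glyph size, cache size and client count are all invented for illustration):

```python
GLYPH_BYTES = 32 * 32  # one cached A8 glyph at 32x32, 1 byte per pixel (hypothetical)
N_GLYPHS = 512         # glyphs held in a cache (hypothetical)
N_CLIENTS = 20         # concurrently running applications (hypothetical)

def cache_footprint(shared):
    """Video memory consumed by glyph caches, in bytes.

    A shared render server holds one cache for everybody; with client-side
    rendering, every client carries its own private copy.
    """
    copies = 1 if shared else N_CLIENTS
    return copies * N_GLYPHS * GLYPH_BYTES
```

        With these made-up figures the per-client model costs N_CLIENTS times the video memory of the shared server for the same set of glyphs, and the same multiplication applies to gradients and any other cached resource.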

        From my point of view, I already have code that can accelerate Cairo and can perform as a Render server. If the decision is to drop the Render server and move it into the toolkits and libraries, so be it…
