RIFT Client Performance Improvements

RIFT 2.4 contains some of the biggest under-the-hood changes we’ve made to date. Our engineers have been hard at work optimizing our renderer for both high-end and low-end systems. The result is that everyone should see a decent improvement in framerate, and some people will see really jaw-dropping increases: players with Radeon HD 6800 series cards, for example, have seen their average framerate increase by 17% after the 2.4 patch. Read on for the details…

Culling Improvements

One of a game engine’s big tasks is to figure out what exactly needs to be rendered – a more complex task than you might think! The culling system is responsible for identifying which objects and parts of the terrain should be sent to the graphics card to be rendered. If the culling system does a good job, it means less work for the GPU. On the other hand, if it runs too slowly, it becomes a bottleneck itself.

There are a few different ways to do culling, and they all have advantages in different situations. A simple frustum culler excludes objects that are behind you, to the sides, or too far away. Because it’s simple, it runs very quickly. However, a frustum culler won’t exclude anything that is obscured by another object, like a monster standing on the other side of a wall. With a frustum culler, the graphics card draws the monster and then draws the wall on top of it, which means more work for the GPU.
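For the curious, here’s roughly what a frustum test looks like in code. This is a textbook sketch, not RIFT’s actual culler – the Vec3/Plane/Sphere types and the sphereInFrustum function are just illustrative names:

```cpp
#include <array>

// Minimal illustrative types – not RIFT's actual math library.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };            // plane equation: dot(n, p) + d = 0
struct Sphere { Vec3 center; float radius; }; // object bounding sphere

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A view frustum is six planes (near, far, left, right, top, bottom)
// with normals pointing inward. An object is culled the moment its
// bounding sphere falls entirely outside any one plane.
bool sphereInFrustum(const std::array<Plane, 6>& frustum, const Sphere& s) {
    for (const Plane& p : frustum) {
        if (dot(p.n, s.center) + p.d < -s.radius)
            return false; // fully outside this plane: don't draw it
    }
    return true; // inside or straddling all six planes: draw it
}
```

Six dot products per object is about as cheap as a visibility test gets, which is why this approach is so fast – but, as noted above, it can never reject that monster hiding behind the wall.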

Since the launch of RIFT we’ve been using a second resource – a culling library from Umbra Software. It’s a slick system that uses a pre-pass by the graphics card to determine which objects should be drawn in the final pass. It’s good because it puts a lot of the work onto the graphics card, and also because it will work in any environment, like forests, cities, dungeons, and Dimensions. However, with players running at higher and higher resolutions, performing the culling on the GPU can become a bottleneck. Furthermore, most players now have at least 4 CPU cores, and RIFT wasn’t utilizing multiple cores as well as it could.
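Umbra’s implementation is proprietary, so we can’t show it here, but the general technique of a GPU occlusion pre-pass can be sketched with standard OpenGL occlusion queries. Everything below (the Object struct, drawBoundingBox, and so on) is an invented illustration of the idea, not Umbra’s code:

```cpp
#include <GL/glew.h>
#include <vector>

struct Object { GLuint query = 0; bool visible = true; /* mesh, bounds... */ };

// Hypothetical helper: submits the object's bounding-box geometry.
static void drawBoundingBox(const Object&) { /* ... */ }

// Assumes a depth-only pass has already filled the depth buffer with
// the scene's major occluders (walls, terrain, large buildings).
void occlusionPrePass(std::vector<Object>& objects) {
    // Pre-pass: draw cheap bounding boxes with color and depth writes
    // disabled, asking the GPU how many samples pass the depth test.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    for (Object& obj : objects) {
        if (obj.query == 0) glGenQueries(1, &obj.query);
        glBeginQuery(GL_SAMPLES_PASSED, obj.query);
        drawBoundingBox(obj);
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Final pass: only objects whose boxes produced visible samples get
    // their full geometry submitted. The pre-pass itself costs GPU time,
    // which is one reason GPU-side culling can become a bottleneck at
    // high resolutions.
    for (Object& obj : objects) {
        GLuint samples = 0;
        glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT, &samples);
        obj.visible = (samples > 0);
    }
}
```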

With RIFT 2.4 we’ve added support for the Umbra 3 culler. This new system uses multiple CPU cores instead of the GPU to determine which objects should be rendered, which means that the GPU is freed up to do the actual rendering. The main disadvantage is that the Umbra 3 culler only works with static geometry, so it won’t work at all in places with lots of dynamic objects, like Dimensions. Additionally, the new Umbra culler may not be faster for all users. Players with older CPUs – or who aren’t bottlenecked by their graphics card – might not see any improvement.
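We can’t share Umbra 3’s internals either, but the threading pattern is easy to illustrate. The real library rasterizes occluders into a software depth buffer; the sketch below just shows how the per-object visibility work fans out across however many cores the machine has (all names are illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

struct Object { /* bounds, mesh handle... */ bool visible = false; };

// Stub for illustration – a real test would check the object's bounds
// against a software depth buffer built from the static occluders.
static bool testVisibility(const Object&) { return true; }

static void cullRange(std::vector<Object>& objects, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        objects[i].visible = testVisibility(objects[i]);
}

void cullParallel(std::vector<Object>& objects) {
    const size_t numThreads =
        std::max<size_t>(1, std::thread::hardware_concurrency());
    const size_t chunk = (objects.size() + numThreads - 1) / numThreads;

    std::vector<std::thread> workers;
    for (size_t t = 0; t < numThreads; ++t) {
        const size_t begin = t * chunk;
        const size_t end = std::min(begin + chunk, objects.size());
        if (begin >= end) break;
        workers.emplace_back(cullRange, std::ref(objects), begin, end);
    }
    for (std::thread& w : workers) w.join();
    // While the CPU cores grind through visibility, the GPU is free to
    // spend its time on actual rendering.
}
```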

In order to figure out which culler is best for RIFT, our engineers implemented an in-game detection system that periodically renders frames using all three cullers (frustum, Umbra, and Umbra 3). We originally wrote this system for our own internal comparisons, so that we could see the performance improvements using the Umbra 3 culler. But after we implemented it, we realized that it would nicely solve the problem of figuring out which culler is best for rendering a particular scene on a player’s specific hardware.
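In spirit, the detection system works something like the sketch below. The names (CullerKind, renderFrameWith, pickCuller) are made up for illustration, and the real system is smarter about when and how often it samples:

```cpp
#include <array>
#include <chrono>

enum class CullerKind { Frustum, Umbra, Umbra3 };

// Hypothetical stand-in: render one frame using the given culler.
static void renderFrameWith(CullerKind) { /* ... */ }

static double frameTimeMs(CullerKind culler) {
    const auto start = std::chrono::steady_clock::now();
    renderFrameWith(culler);
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

// Re-run periodically as the player moves: the best culler in Tempest
// Bay may not be the best one out in the open world.
CullerKind pickCuller() {
    const std::array<CullerKind, 3> kinds = {
        CullerKind::Frustum, CullerKind::Umbra, CullerKind::Umbra3 };
    CullerKind best = kinds[0];
    double bestMs = 1e30;
    for (CullerKind k : kinds) {
        const double ms = frameTimeMs(k); // average several frames in practice
        if (ms < bestMs) { bestMs = ms; best = k; }
    }
    return best;
}
```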

With this system, RIFT automatically adjusts the culler that it uses as you move throughout the world. For example, Umbra 3 works very well in Tempest Bay where there are a lot of walls and buildings, especially on newer hardware. The original Umbra culler works better in certain parts of the open world and with particular GPU / CPU combinations.

Surprisingly, the super-simple frustum culler is sometimes the fastest of all! This goes to show that it can be hard to predict exactly where the bottleneck will be. Some people get the fastest framerate when the game gets out of the way quickly and just throws polygons at the graphics card.

Pixel Granularity

Modern screens are getting large! 1920×1080 is a common resolution, and a lot of people run much higher than that. High-resolution screens are great for your UI, since they make the text sharp and crisp and easy to read. But they add a lot of overhead to the rendering of the game itself (the landscape, your character, other players and monsters, etc.). If you make your screen twice as wide, the game has to render twice as many pixels, and that causes your framerate to drop. And believe it or not, those extra pixels don’t actually do a lot for a 3D-rendered image. While your eye is great at spotting low-resolution text and fine lines, it’s much harder to spot a lower-resolution image.

“Well,” said our rendering engineers one day, “what if we could have the best of both worlds? What if we rendered the UI at full resolution so that it’s nice and crisp, and rendered everything else at a lower resolution to improve framerate?”

So we did just that. There’s a new slider in your video options menu called “Pixel Granularity”. The default is 100, which means that the in-game world is rendered at full resolution. You can drop it all the way down to 50, at which point the screen (except for the UI) is being rendered at half-resolution and then scaled back up.
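Under the hood the idea is simple: the 3D scene is rendered into an off-screen target scaled by the slider, stretched back up to the window, and then the UI is composited on top at native resolution. A back-of-the-envelope sketch (names are illustrative):

```cpp
struct TargetSize { int width; int height; };

// Pixel Granularity runs from 50 to 100, where 100 is full resolution.
TargetSize sceneTargetSize(int windowW, int windowH, int granularity) {
    const float scale = granularity / 100.0f;
    return { static_cast<int>(windowW * scale),
             static_cast<int>(windowH * scale) };
}
```

Note that the savings compound: at granularity 50, a 1920×1080 window renders the world into a 960×540 target – only a quarter of the pixels – while the UI, drawn afterwards at the full 1920×1080, stays perfectly sharp.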

If you’ve got an amazing rig and scoff at gamers who only have dual graphics cards, then you should probably leave the Pixel Granularity at 100. But if you’re yearning for a faster framerate, give it a shot. Most people on the dev team agree that you can drop it down to 75 or 80 without affecting quality too much, while giving a nice boost to framerate.

Don’t believe me? Let’s look at some screenshots. These are full-resolution BMPs, so they show you exactly what I have on my screen.

Here’s part of Tempest Bay at ultra settings with Pixel Granularity at 100. It looks great, and the framerate is a respectable 41 fps. The latency is 0 ms, but that’s just because this is on an internal server.