Diegesis and interface design

Anthony Stonehouse recently posted a very well-written article on Gamasutra about UI design in video games, and it is directly relevant to my MSc dissertation. In it, he discusses the four types of user interface elements that help, inform and guide the player: diegetic, spatial, meta, and non-diegetic. You can find the full article here.

What I find really interesting is that more and more FPS games are pushing the limits of what is presented diegetically. Examples include compasses, maps, ammo counters and wristwatches which, instead of displaying their information somewhere on the screen at all times, have to be summoned by the player. This gives the game a more realistic feel, but some features hit a diegetic wall: games are still confined to a screen with a narrow field of view. I will use FPS games in my examples because, compared to other genres, they best translate the feeling of being in someone else's shoes.

When I was running my study on FPS diegesis, the one problem that kept arising was the player's limited awareness of their immediate surroundings. Hardware has evolved to the point where surround sound technology delivers pristine audio quality for games, letting a Battlefield or ArmA player immediately know where footsteps are coming from. It is no longer just a matter of front, left, right or behind: it is the difference between telling your team that a helicopter is approaching from the south-west, or that an enemy is entering the house from the second-floor balcony, and saying “There is an enemy nearby”.
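As a rough illustration of what that audio cue actually encodes (my own sketch, not code from any real engine), here is how one might compute the horizontal bearing of a footstep source relative to the direction the listener is facing:

```python
import math

def sound_bearing(listener_pos, listener_yaw, source_pos):
    """Horizontal bearing of a sound source relative to the listener.

    listener_pos, source_pos: (x, y) world coordinates.
    listener_yaw: facing angle in radians, 0 = along the +x axis.
    Returns degrees in [-180, 180): 0 = dead ahead, positive = to the
    listener's left, +/-180 = directly behind.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    relative = math.atan2(dy, dx) - listener_yaw
    # Wrap into [-pi, pi) so "behind" reads as 180 degrees, not 360.
    relative = (relative + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(relative)

# Footsteps behind and to the left of a listener facing along +x:
print(sound_bearing((0.0, 0.0), 0.0, (-3.0, 2.0)))  # ~146.3
```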

Vision isn’t that well served, unfortunately. What video games currently lack, for example, is peripheral vision. Humans are generally very good at estimating and mentally simulating where moving objects are in relation to them. It is the reason why basketball players know where their team-mates are at all times and can deliver a blind alley-oop pass: the ball is sent to a spot the passer is not looking at, before the receiving player has even reached that aerial point. Yet somehow it works. In FPS games, however, the UI often needs to impose a non-diegetic replacement to compensate for the avatar's lack of vision and spatial awareness. In ArmA 2, for example, the following community-created addon is almost necessary to play the game, even though the game's emphasis and selling point are ultra-realism and simulation:

Artificial spatial awareness (STHud).

Unlike other games' equivalents, this UI element is not a mini-map. It is not a radar either, and it never shows enemies. What it does is give the player (the white dot) spatial awareness of where their teammates are (the green and red dots in this picture) within a small radius around them. This removes the need for the player to constantly look around to know exactly where their teammates are, something that in real life would largely be covered by peripheral vision. A sketch of the underlying geometry follows below.
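To make the mechanic concrete, here is a hypothetical sketch of that geometry (the names and parameters are mine, not STHud's actual code): each nearby teammate's world position is rotated into the player's frame of reference, so that "up" on the HUD is always the direction the player is facing.

```python
import math

def hud_dots(player_pos, player_yaw, teammates, radius=50.0, hud_size=100.0):
    """Project nearby teammates onto a small player-centred HUD.

    player_pos, teammates: (x, y) world coordinates; player_yaw: facing
    angle in radians. Teammates beyond `radius` metres are dropped, which
    is what distinguishes this from a mini-map: it only shows the
    immediate surroundings that peripheral vision would normally cover.
    """
    fwd = (math.cos(player_yaw), math.sin(player_yaw))
    dots = []
    for mx, my in teammates:
        dx, dy = mx - player_pos[0], my - player_pos[1]
        if math.hypot(dx, dy) > radius:
            continue
        ahead = dx * fwd[0] + dy * fwd[1]   # component in front of the player
        right = dx * fwd[1] - dy * fwd[0]   # component to the player's right
        scale = hud_size / (2.0 * radius)
        # The player sits at the HUD centre (the white dot); screen y grows down.
        dots.append((hud_size / 2.0 + right * scale,
                     hud_size / 2.0 - ahead * scale))
    return dots

# Player at the origin facing +x: one teammate 10 m ahead, one 60 m away (dropped).
print(hud_dots((0.0, 0.0), 0.0, [(10.0, 0.0), (60.0, 0.0)]))  # [(50.0, 40.0)]
```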

Anthony’s article also mentions where the industry is heading and how the arrival of AR/VR will change a lot of things, and I agree. The current version of the Oculus Rift does not fully deliver on that yet, as it is still just a screen with a limited field of view. However, small head/camera movements are so easy and natural that in-game spatial awareness improves considerably.

The quest for full diegesis is a very hard path because games lack the adaptability of real life. To give a simple example: you are at the beach and want to check your watch. You lift your left arm, look at the watch, and sun glare prevents you from reading the dial. You simply twist your arm a bit, or shade the dial with your other hand, and read the time. In a game, when you lift your arm to check the time, the movement is scripted. If glare is in the way, you cannot adapt to it with a simple movement; you have to move your entire body somewhere less sunny. The other option is for glare never to affect the watch display. But in the ever more demanding world of video games, light reflections need to be ultra-realistic (even if they can be ultra-annoying), or game developers might face the wrath of their fan base.

Seeing things is not the only issue, of course. What about feeling? If you get shot, there is no way for you to know it other than something on the interface indicating the physical damage sustained (unless you are these guys). The classic non-diegetic option is a health bar, the meta option is blood splatter on the screen, the spatial option does not really apply in this case, and the fully diegetic option would be having to look down and notice that your arm is bleeding. But even that would not tell you how badly your arm is hurt.
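As a toy illustration of the four categories applied to this damage example (the naming is mine, sketched loosely after the article's taxonomy):

```python
from enum import Enum, auto

class UiLayer(Enum):
    NON_DIEGETIC = auto()  # outside both world and story, e.g. a health bar
    META = auto()          # part of the story but drawn on the screen plane
    SPATIAL = auto()       # drawn in the 3D world but not part of the story
    DIEGETIC = auto()      # exists in both world and story, e.g. a bleeding arm

def damage_feedback(layer):
    """Hypothetical dispatcher: the same hit, presented per UI layer."""
    return {
        UiLayer.NON_DIEGETIC: "shrink the health bar",
        UiLayer.META: "splatter blood on the screen edges",
        UiLayer.SPATIAL: None,  # does not really apply here, as noted above
        UiLayer.DIEGETIC: "show the wound on the avatar's arm",
    }[layer]

print(damage_feedback(UiLayer.META))  # splatter blood on the screen edges
```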

To sum things up, I believe that each type of UI element should be used where it makes sense. Knowing where one has been hurt is very intuitive in real life because of all the sensory input we get from our body. Asking players to visually scan their avatar's body just to know whether they are fit to keep fighting or should take a rest is stretching it, and it works against what is surely the most important factor: having fun.

Gamasutra article © Anthony Stonehouse

STHud picture © Armaholic.com
