The Suite Science: Paul Weir Talks Generative Music

Sir Mix-A-Lot


This is the latest in the series of articles about the art technology of games, in collaboration with the particularly handsome Dead End Thrills.

When Paul Weir gave a talk at GDC 2011 about GRAMPS, the generative audio system he designed for Eidos Montreal's Thief, the games press took notice. Not so much of the contents, though, or indeed the subject, just Thief. Here, finally, was a chance to get something on this oh so secretive game. Maybe, while prattling on about 'sounds' and stuff, he'd toss them a headline or two, get 'em some clicks. Suspecting as much, Weir recommended to his audience that anyone just there for Thief nooz should probably leave the room. Some people did.

We can often seem deaf to game audio in the same way we're blind to animation. Maybe it's because the best examples of both are so natural and chameleonic that they blend into a game's broader objectives. Maybe it has to be Halo ostentatious or Amon Tobin trendy just to prick up our ears; or make the screen flash pretty colours. Or maybe Brian Eno has to be involved, as we'll come to in a minute.

Yet Weir's work is fascinating, and goes some way beyond the more conventional fields of 'horizontal re-sequencing' (shuffling pre-recorded segments of music) and 'vertical re-orchestration' (more complex dynamic mixes). It blurs the line not just between games and the real world - much of his work at sound design agency Earcom is generative soundscapes for shops, banks and hotels - but between melodies and chaos. What's more, it invites games to become more than the linear B-movies imported from outgoing consoles, delivering something worthy of its ambition. He is currently an audio director at Microsoft.

Who ya gonna call when confronted with a screenshot of Ghost Master? Richard Cobbett was my first thought.

DET: What's the landscape of dynamic music like at the moment? What kinds of things are you working on?

Weir: One thing is a kind of dynamic stem mixing where effectively you've got all your tracks sitting there, and you bring things in and out as and when. That's pretty common. It's not generative but dynamic mixing, really. Audio middleware like Wwise can handle it so you can time-sync everything and say that at this point I'll bring in the strings or add the melody - that's kind of the 'vertical' element. The horizontal one, which has been used many times - I remember the Harry Potter games used to do it - is deciding which cue you're next going to play, but like a jigsaw they all fit together. Yeah, it can work. It works particularly well in a linear game.
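For readers who prefer pseudo-code to jargon, here is a minimal Python sketch of the two conventional techniques Weir mentions: a 'horizontal' graph of cues that fit together like a jigsaw, and 'vertical' stems brought in and out by an intensity value. The cue names, stems and thresholds are invented for illustration; this isn't Wwise's API or anything from a shipped game.

```python
# Illustrative sketch only: "horizontal" choice of the next compatible cue,
# and "vertical" stem mixing driven by a simple 0.0-1.0 intensity value.
import random

# Horizontal re-sequencing: each pre-recorded cue lists which cues may follow
# it, so any path through the graph still fits together like a jigsaw.
CUE_GRAPH = {
    "explore_a": ["explore_b", "tension_a"],
    "explore_b": ["explore_a", "tension_a"],
    "tension_a": ["tension_b", "explore_a"],
    "tension_b": ["tension_a", "combat"],
    "combat":    ["tension_a"],
}

def next_cue(current: str) -> str:
    """Pick any cue the current one is allowed to hand over to."""
    return random.choice(CUE_GRAPH[current])

# Vertical re-orchestration: the same piece is delivered as stems, and a game
# parameter decides which layers are audible at any moment.
STEM_THRESHOLDS = {"pads": 0.0, "percussion": 0.3, "strings": 0.6, "melody": 0.85}

def active_stems(intensity: float) -> list[str]:
    return [stem for stem, threshold in STEM_THRESHOLDS.items() if intensity >= threshold]

if __name__ == "__main__":
    cue = "explore_a"
    for intensity in (0.2, 0.5, 0.9):
        cue = next_cue(cue)
        print(f"intensity={intensity:.1f}  cue={cue:<10}  stems={active_stems(intensity)}")
```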

We've tried generative music before in a few titles; there was an MMORPG which got cancelled. I think multiplayer online games are just a perfect vehicle for generative music. There was a game I did years ago called Ghost Master that did a randomised music system similar to what I do for shops and building societies: lots of little components. The chords and melody and bass and percussion would have separate controls and would then be randomly shuffled, and that created the background bed. We never told anyone about that and no one seemed to pick up on the fact it never repeated. But then they didn't turn the music off, either, so that was a success as far as I was concerned.
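To picture the Ghost Master approach Weir describes, here is a hedged sketch: each layer (chords, melody, bass, percussion) draws phrases from its own pool and reshuffles independently, so the combined bed effectively never repeats. The file names and pool sizes below are made up; the game's actual system isn't documented here.

```python
# Rough sketch of a randomised, layered background bed. All file names and
# pool sizes are invented for illustration.
import random

LAYERS = {
    "chords":     [f"chords_{i:02d}.wav" for i in range(8)],
    "melody":     [f"melody_{i:02d}.wav" for i in range(12)],
    "bass":       [f"bass_{i:02d}.wav" for i in range(6)],
    "percussion": [f"perc_{i:02d}.wav" for i in range(10)],
}

def shuffled_layer(phrases):
    """Yield phrases forever, reshuffling each pass so no fixed loop emerges."""
    while True:
        order = phrases[:]
        random.shuffle(order)
        yield from order

def background_bed():
    """Combine one phrase per layer into each 'bar' of the background bed."""
    streams = {name: shuffled_layer(pool) for name, pool in LAYERS.items()}
    while True:
        yield {name: next(stream) for name, stream in streams.items()}

if __name__ == "__main__":
    for bar, combination in zip(range(4), background_bed()):
        print(f"bar {bar}: {combination}")
```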

There's a project I'm discussing now - again, it's very early days - where I'd love to add a generative system. I'm slightly optimistic: as we go next-gen and everyone gets more interested in procedural worlds, it's a natural fit. But the problem is you need composers who are very comfortable writing generative music and have the systems to do so. On both fronts it's not easy.

To infinity! That's how much better Spore's prototypes are than Spore. You have to suspect its music, produced in collaboration with Brian Eno, went through a fair few of its own.

DET: People still think of Spore when talking about generative music in games.

Weir: Spore is interesting because it did as much damage as it did good. Their ambition was fantastic but unfortunately, from my understanding, they did spend a lot of time building the systems. They used Pure Data initially, and Pure Data's not very game-friendly, so they spent a lot of resources rewriting it. While it's got generative music in the game, by the time they got round to actually creating the content - with Brian Eno's posh involvement - it wasn't the most effective demonstration.

The most success - I won't say 'anyone' because that sounds terrible - we've had has not been in games but in the generative sound design we do for commercial spaces, which is very game-orientated but just not in a game. I've just sound designed the National Georgian Bank. I've been doing Harrods as an ongoing thing, and several shopping centres in the UK. Lots of banks, as it happens. That's using the same creative approach as we used in Thief - different technology but the same approach. They'll often buy a little Linux machine from us which has all our handwritten code on it, and that will then generate a soundscape made for that particular brand. It could be interactive but it's not, though with Tesco we did something which was interactive.

A company called The Sound Agency does this, and I do the work for them. There's a guy there called Julian Treasure who's worth Googling because he's done lots of talks on the power of audio in relation to brands, and to people's wellbeing - he's quite into that. And he's done a lot of TED talks. Julian's known as the guy who 'does this' while I make the content. I'm healthily sceptical about generative music, but I think that in the right situation it can be a perfect solution. Yeah, you want something that's able to react in ways that you can define, that's non-looping, that basically acts in an unobtrusive way and sets the right atmosphere. Which is obviously what we wanted for Thief.

But a lot of composers don't like composing music that doesn't get recognised. We still suffer from this idea that if you write a film-style score for a game then somehow it's qualitatively better, rather than actually being appropriate. It's something we fight against.

Lucasarts had algorithmically generated music in Ballblazer long before iMuse, giving the technique the arguably jazzier sounding name 'riffology'. Stand back, ladies, I'm a riffologist.

DET: Is there an ecosystem problem? Games being loyal to more anthemic or cinematic music for marketing reasons, maybe?

Weir: I'm not sure it's quite as bad as that. It's true for large titles, and the comparison with film is quite apt: you have very little middle ground in film now. The fact there's so much indie going on is fantastic. I work a lot with Hello Games who are the perfect example of a great indie outfit. It's so much fun working with them, it's just like in the old days. With them, they decide to do something and they do it. That's very different from my day job at Microsoft.

Microsoft is full of supremely intelligent people doing excellent work, but clearly there are a lot of layers to go through, and the best time is when you can sneak something in. Get a producer to approve it and you're like: done, that's it, it's in. That's a recognised problem inside companies like Microsoft, but it's a difficult one to change in any big corporation. There's a lot of money involved, everyone has to have a say, and everyone has to be right.

DET: How long would you have to be in charge of a generative system to ensure that it worked for a game like Thief?

Weir: We got Thief's up and running very quickly because I've done it before. We had a prototype running in about six weeks which a colleague of mine - a guy called Sandy White, who is a gaming legend - helped build just in C. That kind of validated the approach, and then we brought it back in-house to Eidos to finish off. Right from the beginning I was composing generatively for Thief.

DET: Can generative music become the standard? Should it?

Weir: I don't think it'll ever become the standard, it'll always be esoteric. Although: what's interesting in both the games field and with brands is that if you'd mentioned something like generative music when we started doing this ten years ago, they just wouldn't get it. It was just 'weird'. Now I can sit in a meeting with Volkswagen and they go, 'Yeah, okay, that's good.' We don't talk much about the technology. It was the same on the game projects. People get it, they understand the principle. There's definitely a cultural shift in accepting it. It'll never be mainstream and I don't think it should be, but it goes hand-in-hand with what's happening in film schools. Good film schools have become much more non-linear and textural - films like The Matrix started that. I just saw Gravity and that had a good score: very un-thematic, un-John Williams. Games have had a slight influence on that, but it's also the influence of Japanese cinema which tends to be less structured.

Harrods' Toy Kingdom features five of Weir's algorithmic soundscapes, one for each of its themed zones. You're thinking about the obvious parallels with videogames. I'm thinking of The Crystal Maze, as I do every evening.

DET: You once singled out Red Dead Redemption for featuring "great music, but it all had to be written in A minor at 130bpm". What's that all about?

Weir: It's a great example of vertical mixing and it does work really well, but that's an incredibly restrictive way of composing. I'm sure they had a really good reason to do it, but I'm not sure what the logic was; the entire score had to be in the same key at the same tempo. It's great music, don't get me wrong, but maybe that indicates a lack of imagination about what's technically possible.

You don't see it that often. Normally it's the same per level or per track or whatever, depending on what the game is. It's stem mixing, but to do it the same across an entire game is a bit weird.

What I was trying to do on Thief was tackle this slightly. In the generative music system we had, you could have one piece of music and, within the same tool, compose a second piece of music and transition between the two. You could say that 'this' one takes 30 seconds, and it might go minor at the middle point and become more sparse. All the values I had for controlling the music were exposed, like in a sequencer. My ambition, which of course we never realised, was that you could start a level with one piece of music and end it with a totally different piece of music, and you'd never know when the music had changed.
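A rough sketch of that transition idea, under plenty of assumptions: if both pieces are described by the same exposed control values, the transition is just an interpolation of those values over a set duration, dipping through a sparser, darker midpoint along the way. The parameter names (density, brightness, tempo) are hypothetical stand-ins, not Thief's real controls.

```python
# Illustrative sketch: morphing between two generative pieces by interpolating
# their exposed parameters, with a sparse, minor-feeling dip in the middle.
from dataclasses import dataclass

@dataclass
class PieceParams:
    density: float      # how many events per bar the generator emits
    brightness: float   # crude stand-in for major (1.0) vs minor (0.0) colour
    tempo: float        # beats per minute

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def transition(start: PieceParams, end: PieceParams,
               duration: float = 30.0, step: float = 5.0):
    """Yield interpolated parameter sets, thinning out and darkening mid-way."""
    t = 0.0
    while t <= duration:
        x = t / duration
        # Dip density and brightness around the middle of the transition so the
        # music goes sparse and minor before settling into the new piece.
        dip = 1.0 - 0.5 * (1.0 - abs(2.0 * x - 1.0))
        yield PieceParams(
            density=lerp(start.density, end.density, x) * dip,
            brightness=lerp(start.brightness, end.brightness, x) * dip,
            tempo=lerp(start.tempo, end.tempo, x),
        )
        t += step

if __name__ == "__main__":
    night_street = PieceParams(density=0.6, brightness=0.7, tempo=90)
    rooftop_chase = PieceParams(density=0.9, brightness=0.3, tempo=120)
    for i, params in enumerate(transition(night_street, rooftop_chase)):
        print(f"t={i * 5:>2}s  {params}")
```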

DET: Has that never been done before?

Weir: I don't think it has. Music's been changed but it's always different cues. Our system solved the technical problem but we didn't quite solve the human problem.

DET: The direction the game's taken?

Weir: Yeah.

[Before you ask, Weir does elaborate upon this but only off the record.]

DET: How much greater is the workload for generative music compared to more linear music?

Weir: I don't think the workload is any different, it's the focus that's different. With conventional music you start writing and immediately hear your results. You build it block by block. In generative music you plan what you want to do, imagine how it's going to be, try all the elements, then assemble them into the system, and then you hear it. So it's a lot more front-loaded, but once you've done that and have a certain body of material, you end up with a lot more soundtrack for your money. But yeah, not all composers are comfortable with that because it's not as immediate.

Before we even started a level on Thief, I would have a plan as to what music would go into that level. There were points where I was asking the level designers to change a level just to give me space to do what I wanted to do, which is absolutely the right way around to be doing it. I love mixing generative and linear music, in fact, I think that's a really good combination.

Thief. Hmm.

DET: Is the plight of the musician trying to influence the rest of a dev team the same as that of many game story writers?

Weir: Like when designers decide to become writers? Yeah, you get that all the time. It's very true that on many projects you come in at the end, you're told what to write, you deliver some tracks and off you go. That's suitable for a lot of projects, but some teams want a relationship with an audio person. They want you to tell them what to do. I've got to the point where I've helped to build so many systems that I can easily go to them and say: 'Here's an example. Here's what I think your game should sound like, running live in a generative system. I can build this for you. It'll take this amount of time but will give you these benefits.' If you can sell that to them then generally they'll be very supportive.

DET: Should there be more middleware for this sort of thing?

Weir: I don't think so. I've seen companies try to make not exactly this, but music software for games. It's never succeeded, for various reasons. First of all, it's not hard to build a system; it's actually going to be cheaper for me to build one than to license it. Someone tried to sell me a music system that works with Wwise, but I was like: 'Well, we've already licensed Wwise and integrated that. I'll then need to integrate your system into ours and into the game, and by the time I've done that I may as well have bloody written it myself.'

These systems aren't hard to write, but whenever I see people do it, they write it for how they work. That's kind of missing the point. Whereas something like Scaleform or Wwise or whatever, yes, it forces you to work in a certain way, but ultimately an audio engine's an audio engine; you're not getting it to help you with the creative tasks, you're getting it just to be there and make your life easier. I absolutely believe it's not about the technology. Technically what I do is really simple: just randomise a few files with a bit of logic. It's much more about whether it's appropriate for the game, and as a composer how I approach that. It's just about getting the sound right.
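Taken literally, 'randomise a few files with a bit of logic' can be as small as this: a weighted pick with a no-immediate-repeat rule. The files and weights below are illustrative stand-ins, not anything from Earcom or Thief.

```python
# Illustrative sketch: weighted random file selection, with the one piece of
# logic being that the same file never plays twice in a row.
import random

FILES = {  # file name -> relative weight
    "ambience_low.wav": 4,
    "ambience_mid.wav": 3,
    "motif_sparse.wav": 2,
    "motif_rare.wav":   1,
}

def pick_next(previous):
    """Choose the next file, excluding whichever one just played."""
    candidates = {f: w for f, w in FILES.items() if f != previous}
    files, weights = zip(*candidates.items())
    return random.choices(files, weights=weights, k=1)[0]

if __name__ == "__main__":
    current = None
    for _ in range(8):
        current = pick_next(current)
        print(current)
```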

The famous iMuse system showcased by Monkey Island 2, developed by Michael Land, was designed to convince players the music had anticipated their actions.

DET: Who are the real pioneers then?

Weir: Lucasarts were the guys really, they did extraordinary work in those early adventure games. Microsoft have tried it; a few years ago they had a program called DirectMusic Producer which, you know, basically didn't take off and got killed. That came out about the time of the first Xbox.

The problem with quite a lot of the attempts is balance: if you've got a properly generative system then you need inherent flexibility, but of course the more you have, the more chaotic it becomes - the less musical it becomes. So you want enough flexibility for it to serve its purpose, but not so much that it defeats its purpose. That's tricky.
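One way to picture that balance: imagine a single 'flexibility' knob that widens the pool of pitches a generator may choose from, from chord tones to a full scale to the whole chromatic set. The thresholds below are assumptions for the sake of illustration, not a description of any real system.

```python
# Illustrative sketch: a flexibility value maps to an ever-looser pool of
# pitches. Push it too far and variety turns into noise.
import random

CHORD_TONES = [0, 4, 7]                # semitones above the root (major triad)
SCALE_TONES = [0, 2, 4, 5, 7, 9, 11]   # full major scale
CHROMATIC   = list(range(12))          # anything goes

def allowed_pitches(flexibility: float) -> list[int]:
    """Map a 0.0-1.0 flexibility value to a pitch pool (thresholds are arbitrary)."""
    if flexibility < 0.4:
        return CHORD_TONES
    if flexibility < 0.8:
        return SCALE_TONES
    return CHROMATIC

def phrase(flexibility: float, length: int = 8) -> list[int]:
    pool = allowed_pitches(flexibility)
    return [random.choice(pool) for _ in range(length)]

if __name__ == "__main__":
    for flex in (0.2, 0.6, 0.95):
        print(f"flexibility={flex}: {phrase(flex)}")
```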

DET: Should generative music at least be a greater presence in games?

Weir: I would love to see a major game have generative music at its heart in a way that really supports it. Of course that's what we tried to do with Thief. It wasn't about being clever, it wasn't about giving the audience something different to hear, it was about supporting the game. I still have high hopes. I'm talking to some people about a project that generative music would be absolutely perfect for. Even at Microsoft I introduce it when it's suitable. It's unlikely to happen but I've got a wonderful idea about Xbox One and how we might be able to do something generative for that. We keep banging on that door.
