On Monday morning (16 November) I received two corrections to this page from John Carmack, regarding Java licensing (Kaffe, not Cafe) and C game DLL code size (30K lines, not 3K). Both those corrections have been made. On Monday afternoon, I added the mention of John’s “vector to raster” analogy in the “future of 3D” section, and added mention of the multiresolution book (thanks to those who wrote in about it!).
So yesterday I was in my San Francisco apartment surfing voodooextreme.com (or maybe it was bluesnews.com), when I chanced upon the announcement that John Carmack would be speaking at the Professional Gamer’s League championships about twelve blocks from my house in fifteen minutes. Mountain bike ahoy!!! And ten minutes later, I was watching the Starcraft finals before The Man came onstage to rap with us all.
There’s plenty of coverage of the finals and of John’s speech elsewhere on the net, but I managed to chat with him for a while after his official speech was over, and got to ask some questions about which all you tech-heads might be as curious as I was. (Incidentally, if the video of his speech makes it onto the web anywhere, the first audience question–“Do you still think all analytic curve representations basically suck?”–was me 🙂)
So here’s the scoop. All these notes are my recollection of what was said, with some additional explanatory detail–I didn’t have a tape recorder or even a notepad in hand. Any misstatements are my responsibility alone–if you feel like flaming John for something in here, flame me first! because I probably got it wrong!
Java in Quake–why not, yet
Turns out that John actually got into negotiations with the Kaffe team to get a version of their Java virtual machine to ship with Quake 3 (he even said the terms were “pretty reasonable”). But he wound up backing away from Java for a few reasons:
Portability. Despite the hype, there isn’t yet a rock-solid Java virtual machine that ships on all the “critical” Quake 3 platforms (Mac, Windows, and Linux)… there are different Java virtual machines that are tops on each platform, but those VMs aren’t perfectly cross-compatible (for example, in their integration with native code). He said that other OS’s, like Irix or BeOS, would be nice to port to, but aren’t deal-killers.
Performance. Java, being an intrinsically garbage-collected language, isn’t as predictable in its run-time performance as C. (I’m personally working on a real-time “internet broadcasting” video animation system in Java, so I’m finding there are some ways around these issues, but they’re definitely problematic.)
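(For the curious: the standard dodge here–and one of the “ways around these issues” I alluded to–is object pooling: pre-allocate everything up front so the collector has nothing to do inside the frame loop. A minimal sketch; all the names here are my own invention, not anything from id or my project:)

```java
// Minimal object-pool sketch: pre-allocate vectors once, then
// acquire/reset them each frame so the steady-state game loop
// performs no allocation and triggers no garbage collection.
final class Vec3 {
    float x, y, z;
    void set(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

final class Vec3Pool {
    private final Vec3[] pool;
    private int next = 0;

    Vec3Pool(int size) {
        pool = new Vec3[size];
        for (int i = 0; i < size; i++) pool[i] = new Vec3();  // all allocation happens here
    }

    Vec3 acquire()    { return pool[next++]; }  // no "new" on the frame path
    void resetFrame() { next = 0; }             // recycle the whole pool each frame
}
```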
Stability. Java’s pretty close to the bleeding edge in terms of its overall feature set (look at how long Sun’s taken to ship Java 1.2). There were enough other issues with leading-edge tech in Quake 3 that John didn’t want to add another to the mix.
Debugging. Java debuggers are still behind the curve. (Though I’ve been having great fun with VJ++ 6.0… yes, it’s from evil Microsoft, but it’s a *nice* environment!)
I didn’t manage to ask how exactly John goes about debugging his C libraries when they’re running under his runtime interpreter–obviously you can compile them as native and use a standard C debugger, but debugging the C translator (along with its “bytecode-ish” interpreter) is a different problem, and debugging the C code itself when run in that environment is doubly tricky. I imagine the solution is to “make sure the translator and the interpreter are rock solid” 🙂
John also mentioned the minor (really) issue of having to rewrite and re-debug 30,000+ lines of production game DLL code. But in the long term, he feels that Java is a superior language to C, and he’s watching all the Java-in-realtime-3D-games efforts with great interest.
The future of 3D graphics
Back in April one of John’s plans mentioned that “All analytic curved surface representations basically suck.” I’ve been curious about that ever since, so I asked him about it. Here’s (paraphrased through my limited 3D graphics understanding) his position:
Polygons and curved surfaces are both analytic representations that have serious problems of scalability. The “number of visible polygons on screen” problem comes up if you build your world out of polys. Curved surfaces seem to help with that, but not for long… soon you run into problems of “number of visible curves on screen” and you’re back to square one.
John’s hunch is that eventually 3D hardware will be based on some kind of multiresolution representation, where the world is kept in a data structure that fundamentally supports rendering it in near-constant time from any viewpoint.
Voxels are one example of such a structure. John mentioned that he actually wrote a voxel renderer and converted an entire Quake 2 level to use it. It wound up being about 3 gigabytes of data! But he said that that’s not actually that far off from today’s hardware capacity, if you use intelligent streaming techniques to move the relevant portion of the voxel set into and out of memory. And he said it was really nice having only one data structure for the entire world–no more points versus faces versus BSPs… just one octree node (and subnodes of the same type) representing everything.
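(To make the “one octree node representing everything” idea concrete, here’s my own guess at the general shape of such a structure–purely illustrative, not id’s actual code:)

```java
// Hypothetical sketch of a uniform voxel-octree world: every node is
// the same type, leaves carry voxel data, interior nodes carry eight
// children. Field and method names are my own invention.
final class OctreeNode {
    OctreeNode[] children;  // null for a leaf
    int rgba;               // voxel color/material at this resolution

    boolean isLeaf() { return children == null; }

    void subdivide() {
        children = new OctreeNode[8];
        for (int i = 0; i < 8; i++) {
            children[i] = new OctreeNode();
            children[i].rgba = rgba;  // children inherit the coarse value
        }
    }

    // Tree depth needed so leaf cells shrink to a given voxel size.
    static int depthFor(double worldSize, double voxelSize) {
        int d = 0;
        while (worldSize > voxelSize) { worldSize /= 2; d++; }
        return d;
    }
}
```

The appeal is exactly what John described: points, faces, and BSPs collapse into one recursive type, and coarse rendering falls out for free by stopping the descent early.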
Other possibilities include frequency space representations (i.e. wavelet compression schemes). (Think something like fractal compression, and fractal-compression-like progressive rendering, applied to an entire 3D space. He didn’t mention fractal compression at all, though; he just talked about “frequency space” and nodded when I said “wavelets” 🙂) He mentioned that there is one multiresolution graphics text that includes some techniques for frequency-space 3D modeling and rendering, but didn’t say which one; one possibility is Stollnitz, DeRose, and Salesin’s Wavelets for Computer Graphics.
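(The core multiresolution idea is easy to see in the simplest wavelet there is, the one-level 1D Haar transform: averages give you the coarse, low-frequency view, and stored differences let you reconstruct the fine view exactly. A toy sketch:)

```java
// One level of a 1-D Haar wavelet transform. The first half of the
// output holds pairwise averages (the coarse view); the second half
// holds pairwise differences (the detail needed to get the fine
// view back). Input length is assumed even.
final class Haar {
    static double[] forward(double[] s) {
        int h = s.length / 2;
        double[] out = new double[s.length];
        for (int i = 0; i < h; i++) {
            out[i]     = (s[2*i] + s[2*i+1]) / 2;  // average (low frequency)
            out[h + i] = (s[2*i] - s[2*i+1]) / 2;  // detail  (high frequency)
        }
        return out;
    }

    static double[] inverse(double[] t) {
        int h = t.length / 2;
        double[] out = new double[t.length];
        for (int i = 0; i < h; i++) {
            out[2*i]   = t[i] + t[h + i];  // average + detail
            out[2*i+1] = t[i] - t[h + i];  // average - detail
        }
        return out;
    }
}
```

Apply the same trick recursively along all three axes and you get a 3D world you can render coarsely from the averages alone, pulling in detail coefficients only where the viewpoint demands them.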
The analogy John likes is the comparison to the switch from vector graphics (which described a 2D screen in terms of points connected by lines) to raster graphics (which described a 2D screen in terms of a semi-continuous 2D mapping of values). Vector graphics didn’t take much data to start with, but broke down once 2D images got really rich. Likewise, current analytic polygon/curve techniques can describe simple models with not much data, but once models get really complex, you get into trouble and you wind up wanting something that describes the 3D space in other terms.
The big current theoretical problem with multiresolution techniques is that they don’t yet handle texture mapping on hierarchically represented worlds… it’s far from a solved problem to get the level of texture detail we currently find in polygon rendering systems.
All of this will take a few more years to hit the mainstream.
I asked him about modeling for multiresolution worlds like this, and he said he anticipates it going back to basics–artists and modelers will actually sculpt and paint their models directly, as is done in movies today, and then scan those models into the multiresolution representation at a super high level of detail. This is because modeling is currently horribly difficult, as modelers have to struggle with all these very abstract inaccurate mathematical representations of their worlds–modelers wind up doing mostly polygon and curve wrestling, not much actual modeling! Once we’re no longer dependent on those abstract mathematical constructions as the basis of the world, modeling will get back to its hands-on roots. [WARNING WARNING! The odds of my having mis-heard him are very high in this paragraph! So don’t go flaming around about it!]
Some Quake 3 architecture tidbits
Some pseudo-random information about Quake 3’s internal architecture:
A while back John mentioned some troubles he was having with collision detection against curved surfaces. I asked him how he’s now dealing with that. He said that he’s using the same polygon-based physics code he carried over from Quake 2; when it comes to curved surfaces, he just dynamically tessellates the relevant part of the curve to an “internal game-physics” level of detail, and then runs his polygon physics code against the resulting polygonal section. It’s possible to do it exactly–“it just takes a few Newton-Raphson iterations”–but it’s not worth it; the polygon code already had all the killer singularity bugs shaken out of it, and doing exact curve physics would reopen that can of worms. In general, approximations are undervalued in 3D world programming; he feels that lots of people think “if you can do it precisely, you should,” when in his experience approximations often work better on almost all fronts.
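(The tessellate-then-collide approach is easy to sketch. Here’s a toy version in one dimension per coordinate–sample a quadratic Bezier at a fixed “physics” level of detail and hand the resulting segments to whatever polygon/segment collision code you already trust. This is my illustration of the technique, not id’s code:)

```java
// Flatten a quadratic Bezier segment into sample points at a fixed
// level of detail, so existing polygon-based collision code can be
// reused instead of doing exact (Newton-Raphson) curve intersection.
final class CurveTess {
    // Evaluate a quadratic Bezier with control values p0, p1, p2 at t in [0,1].
    static double bezier(double p0, double p1, double p2, double t) {
        double u = 1 - t;
        return u*u*p0 + 2*u*t*p1 + t*t*p2;
    }

    // Sample the curve at n+1 points, i.e. n line segments.
    static double[] tessellate(double p0, double p1, double p2, int n) {
        double[] pts = new double[n + 1];
        for (int i = 0; i <= n; i++)
            pts[i] = bezier(p0, p1, p2, (double) i / n);
        return pts;
    }
}
```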
Models have independent animation rates in Quake 3; you can have a “crouch” animation running at 5 hertz while the “running” animation runs at 15 hertz.
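(Independent rates just mean each animation indexes its frames off its own clock; something like this one-liner, with the names being my own:)

```java
// Current frame of an animation running at its own rate, independent
// of any other animation playing on the same model.
final class Anim {
    static int frameAt(double timeSec, double framesPerSec, int frameCount) {
        return (int) (timeSec * framesPerSec) % frameCount;
    }
}
```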
There is one unified graphics pipeline for everything in the world–objects and background together. The BSP still exists but is used only for calculating the world visibility; once the visible section of the world is computed, everything goes through the same pipeline. This means that any graphics attribute you can apply to the world, you can also apply to a model, and vice versa. He’s very happy with this.
Client-side cheating is a problem that is, at bottom, impossible to solve; anything running on the client side can be reverse engineered and hacked, and any network protocol can be exploited. But id is going to invest some effort into making it difficult to cheat, and John personally would be interested in busting and suing anyone who deliberately reverse engineers id games in order to cheat (the id software license forbids reverse engineering, and one could quite plausibly prove that the author of a cheat had to have reverse engineered the code in order to construct the cheat). He pretty much said, “They’re ruining the game for thousands of players, and they’re breaking the law.” So watch out, client-side hackers! id has the law on its side….
And on a personal note, I found John’s recent plan update on the nature of time to be very very helpful in the project I’m currently working on… thanks, John! 🙂
That pretty much does it for the conversation I (and others) had with him. So:
Other general notes from the talk
Hmm, seems like the talk itself hasn’t hit the web as quickly as I thought. Oh well, here’s some of what I remember.
John started out with a fairly lengthy discussion of the tradeoffs id’s making in developing Quake 3. Basically, John strongly believes that investing time in multiplayer features is just a better investment than in building single-player levels, because a good multiplayer game will get played for years, whereas even a great single-player game gets played through once and then forgotten about. He said some things to the effect of, “It doesn’t feel that great for a level designer to slave away for months on a set of levels that a gamer will run through in one evening and then never look at again. People are still playing multiplayer Quake 1 on the net, but you never see anyone going through, say, the third episode of Quake 1 on single-player.”
Since there is no underlying plot to Quake 3, there’s no “hook” to pull people through the slower parts of the game. There’s no “I want to slog around this kinda-boring level because I’m keen to see what’s next” factor causing people to play longer. The game has to be immediately and continuously compelling.
They’re putting major effort into lots of subtle cues. They’ve solved the knockback problem in Quake 2 (where you were basically stuck to the floor all the time no matter what you were shot with) with what John believes is the correct solution: when hit, you basically suddenly acquire flying physics for about 200 milliseconds. This makes it easier to knock people around while still giving the person being knocked some sense of control. (I may have misheard this part, but that was the gist of it.) In general there is lots of work going into every nuance of view shifting, weapon motion, jumping behavior, etc.
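(As I understood the knockback fix, the shape of it is: a hit flips you onto airborne movement rules for a short window, then normal ground friction resumes. A sketch of that idea–every name and constant here is my guess at the shape of such code, not id’s actual numbers:)

```java
// Sketch of "temporary flying physics" knockback: after a hit, the
// player is treated as airborne (no ground friction) for a short
// window, so impacts actually move people around; then normal
// ground movement resumes. The 200 ms figure is from my recollection
// of the talk; everything else is illustrative.
final class Knockback {
    static final double FLY_TIME = 0.200;  // seconds of knockback flight
    private double flyUntil = 0;

    void onHit(double now) { flyUntil = now + FLY_TIME; }

    boolean airborne(double now) { return now < flyUntil; }

    // Ground movement damps velocity by friction; knockback flight does not.
    double stepVelocity(double v, double now, double friction, double dt) {
        return airborne(now) ? v : v * Math.max(0, 1 - friction * dt);
    }
}
```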
The goal is to make Quake 3 feel more violent than Quake 1, and considerably more violent than Quake 2. The overall goal is to make a game that combines and improves on the best elements of the previous games–a game so good there’s no longer really any reason to play the older Quake variants.
id will be building some levels that you can play without freelook. This is because true two-axis freelook is one of the harder things for a novice FPS gamer to learn. Doom was much quicker to get into because you never found yourself looking at the ceiling or floor by accident. They will be building quite a few “training wheels” levels that don’t require you to look up or down. Once you get good at them, then you go on to the full 3D levels where you have to use all-angle mouselook to compete.
id won’t run its own website (a la battle.net) or put on its own tournaments (a la PGL). id’s small and wants to stay small and focused on development. Anything that John can do in the code to help support gaming services or tournament play, he’s open to hearing about and working on.
John got asked about multi-CPU support. He said that there are some interesting possibilities with a multi-threaded client, such as multiple monitors, but that it’s hard to parallelize the client in general (as people who follow his .plan know). As for a multi-threaded server, his basic feeling is that there aren’t yet any mods that really use a lot of people–say, 40+ people–effectively in a single game. As soon as someone demonstrates a gameplay style that really involves that many people, John said that he’ll start investing the effort needed to scale up the server. He said something to the effect of “I consider it a very interesting engineering challenge, but it’s not the best use of my time right now.” I personally heard this as, on some level, a grand challenge to mod builders to build really large-scale levels and styles of team play. If you build it, John will come 🙂
See the PGL writeup for more on the talk–hopefully the video will hit Shooters or something at some point.
My personal take
Java will continue to take over the world. (And open source rocks out, as well–but that’s getting off topic 🙂
I look forward to seeing where multiresolution technology goes–it’s definitely a very active research area.
It’s interesting that John cites the longevity of id’s older games as evidence of how multiplayer intrinsically delivers more play life, but then points out how the older games’ longevity is partly due to suboptimal decisions in later games (Quake 2), and finally states his intention to end the playing of the older games once and for all because of how good Quake 3 will be. In other words, id itself is the greatest threat to the longevity of id games!
Single-player offers the dimension of story. Storytelling is a colossal part of human creativity, and single-player games (and someday, really good co-op games) will always thrive because of the space they create for storytelling. In fact, many of id’s innovations in “violent feel” and in immersive sense of presence will probably wind up enhancing single-player games as well.
Thanks for reading
Hope you enjoyed it! I know it was a nice unexpected weekend bonus to get to pick John’s brain, and God knows I can’t wait for Quake 3! Keep up the good work, id folks–it’ll be a pleasure to witness (and play) id’s work in the 21st century.