How to get running with the sps2demo framework:

I assume you're at www.playstation2-linux.com/projects/sps2demo since you're reading this file. Click on 'source', then click on 'Browse CVS Repository' and finally click on 'download tarball'. Alternatively you can just use the URL http://www.playstation2-linux.com/viewcvs/viewcvs.cgi/cvs_root.tar.gz?tarball=1&cvsroot=sps2demo

This project requires sps2 (obviously) and the intmdloader library, which is available at www.playstation2-linux.com/projects/ps2conv; you can read about that project at http://playstation2-linux.com/project/shownotes.php?release_id=238 It also uses geommath, but that comes with the sps2 developers package.

So assuming you have these libraries, all you need to do is download the tarball, extract the package and then follow these simple steps:

1. There's a folder called intmdloader/ in sps2demo/. It's not the actual library, just a sample using the library, so don't let the name fool you. Edit the few paths in this folder's Makefile so they match your library paths.

2. Copy the file sps2demo/trash/intmdvu.h into sps2demo/intmdloader/

3. Change directory to sps2demo/intmdloader/ and type 'make'. If it doesn't compile, it's most likely because you don't have VCL, which is available at http://playstation2-linux.com/projects/vcl; otherwise let us know and we'll help you.

When it has compiled you'll have an executable file called 'demo' in sps2demo/intmdloader/. To actually run it you need the model files that are in pack.tgz; just copy the .asc and .bin files into sps2demo/data/. Alternatively you can just run the precompiled one in pack.tgz if you are feeling lazy ;)

The models are just some test graphics (nothing final), but they look a lot better than a simple cube or sphere, I think. You're supposed to move the camera around using the joypad; the initial camera position may not be a very intuitive one.
Interesting items to take a closer look at in the framework:

There are several, so I'll just mention a few that I currently think are important.

1. The packet builder

In sps2demo/shared/dma.h is a packet builder for vif1, and it deals with stitching for you automatically, which is a very nice feature. The framework for building packets has been designed with a specific strategy in mind, one that is most often used by professional game developers as well: precompute most of your transfers before you enter the main loop. Typically this is done for geometry and textures, since they take up the most space. The global object staticDma is used for this; it allows you to create such static callchains.

At the beginning of every frame dynDma is executed. During the current frame you build the next dynDma, to be executed at the beginning of the next frame. To avoid overwriting the memory of the chain currently under execution, the memory used by dynDma is double buffered internally for you. Generally you insert calltags into dynDma that reference geometry and textures that were created using staticDma before the main loop was entered. Details that change dynamically, such as the transformation matrix, light settings, camera settings and so on, will of course not go into the precomputed callchain. These are instead either added to dynDma directly or referred to using reftags to some double buffered location.

2. Texture creation

In the folder sps2demo/shared/texture/ there's some code for generating texture callchains using staticDma that is a lot more complete than what you generally see in samples around. It supports 4/8/32-bit textures, it even supports mipmapping, and also non-power-of-two textures. On top of this it autodetects when a texture is swizzlable; if so, it swizzles it and computes the register settings for it. You can use this code as you please, either for simple reference or to use as is directly.
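The double-buffering idea behind dynDma can be sketched in plain C++ like this. Note that the class and method names here are mine for illustration, not the actual API in dma.h:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Host-side sketch of the dynDma double-buffering pattern.
// While the DMAC executes one chain, you fill the other for next frame.
class DmaChain {
public:
    // Called at the start of every frame: flip to the buffer that is
    // NOT under execution and start building the next chain in it.
    void BeginFrame() {
        cur_ ^= 1;
        buf_[cur_].clear();
    }

    // A calltag references a chain precomputed with staticDma before the
    // main loop; here we just record its "address" as a stand-in.
    void AddCallTag(uintptr_t staticChainAddr) {
        buf_[cur_].push_back(staticChainAddr);
    }

    const std::vector<uintptr_t>& Current() const { return buf_[cur_]; }
    int CurrentIndex() const { return cur_; }

private:
    std::vector<uintptr_t> buf_[2];  // double buffered internally
    int cur_ = 0;
};
```

Per-frame data like matrices and light settings would go straight into the current buffer (or be referenced via reftags to a double buffered location), while geometry and textures only ever appear as calltags to static chains.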
It does the transfer over path2, but it can quite easily be modified to a path3 transfer should you want that. The main creation and upload functions are declared in texture.h; the chain creation itself is done in uploadchain.cpp. The files textureintmdloader.* and texturemanager.* are usable hacks to load textures that are not in intmd form. So the files that actually deal with the transfer itself are the following: texture.*, uploadchain.*, uploadattribs.*, swizzle.*

3. Screenshots

In sps2demo/shared/ there's screenshot.cpp, which is exactly what it sounds like: taking screenshots (in a direct access environment).

4. Postfilters

Postfilters are very important on the ps2 to get something extra out of the rendering quality. You can have a look in the folder sps2demo/posteffects/ to see the ones we've done so far (the one called glare is very cool).

Another very important piece of code in there is the rgbaindexer. What it allows you to do is essentially index a component (r/g/b/a) in the 32-bit framebuffer and then have it distributed into any of the 4 channels of the same pixel location in some framebuffer of the same size (often the same buffer, i.e. in-place rendering). This is NOT software emulation or anything like that; it's fast, heavily optimized, and all done on the GS, with assistance from vu1. Essentially this works like the broadcasting you get to do with an upper VU1 instruction; in this case, however, the instruction is a MOVE, but the broadcasting works the same. You can do MOVEg.rb, MOVEa.gba and so on (any combination). If you additionally use alpha blending, then in two passes you can do r+g+b (used for greyscale or normalmapping).

A classic trick is fetching the green channel of a 24-bit zbuffer: you set the rgbaindexer to postfilter with a depth test of 0x00ffff, so essentially it only fetches the green channel between 0x000000 and 0x00ffff. Since it's a 1/z buffer, this range will, surprisingly, cover most of your scene.
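Why the depth test of 0x00ffff picks out exactly the green channel is easy to see on the host side. This is a small C++ model of the bit layout (the names are mine, and the 24-bit value is viewed with the low byte in red, as in the GS 24-bit pixel format):

```cpp
#include <cassert>
#include <cstdint>

// A 24-bit z value seen as an RGB pixel: r = bits 0-7, g = bits 8-15,
// b = bits 16-23. The green channel therefore holds bits 8-15 of depth.
inline uint8_t GreenOfZ(uint32_t z24) { return (z24 >> 8) & 0xff; }

// A depth test against 0x00ffff passes exactly when the top byte
// (the blue channel) is zero, i.e. when green is the significant part
// of the depth value.
inline bool PassesDepthTest(uint32_t z24) { return z24 <= 0x00ffff; }
```

Since the buffer stores something 1/z-like, most far-away geometry ends up with small stored values, which is why the 0x000000..0x00ffff range covers most of the scene.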
If you wanted to do fog, you'd just blend a color onto the framebuffer during this postfilter pass. The blending is of course based on the value of the green channel of the 24-bit zbuffer. You don't need to clamp, since the depth test takes care of that.

A neat thing is that the zbuffer is typically no longer needed at the stage where you start applying postfilters to the framebuffer. This means you can use it as a temporary buffer where you can do things like a blurred version of the framebuffer, or a greyscale, and then, just as before, have it blended back onto the framebuffer based on the value of the green channel. In this case you'd need to back up the green channel of the 24-bit zbuffer in an available alpha channel; the framebuffer's seems like a good choice. It's important, of course, that you preclear the alpha to 0xff so the values get clamped rather than wrapping when you're between 0x00ffff and 0xffffff.

There's lots more good stuff, so feel free to browse around.
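The fog pass can be modeled on the host like this. Everything here is an illustrative sketch under my own assumptions: the names are invented, the blend formula is a simplified stand-in for the GS alpha blend equation, and I assume the usual greater-is-closer convention for the 1/z-style buffer, so a large green value means near (little fog) and a small one means far (much fog):

```cpp
#include <cassert>
#include <cstdint>

// Backed-up green channel of the 24-bit z value. Pixels rejected by the
// 0x00ffff depth test keep the precleared 0xff in the backup alpha, so
// near pixels saturate at "max green" instead of wrapping.
inline uint8_t GreenBackup(uint32_t z24) {
    return (z24 <= 0x00ffff) ? uint8_t((z24 >> 8) & 0xff) : uint8_t(0xff);
}

// Blend one channel of the fog color onto the framebuffer. With the
// greater-is-closer assumption the fog amount grows as green shrinks.
// dst + (fog - dst) * w/255 is a host-side model of the blend unit.
inline uint8_t FoggedChannel(uint8_t dst, uint8_t fogColor, uint8_t g) {
    const int w = 255 - g;  // distance-based fog weight
    return uint8_t(dst + ((fogColor - dst) * w) / 255);
}
```

With this model a precleared (near) pixel has g = 0xff, so w = 0 and the framebuffer is left untouched, while a far pixel with g near zero gets pulled almost entirely to the fog color.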