maciek_urbanski: What I can tell you for sure is that the PowerVR CLX2 on the Dreamcast runs the geometry binning stage in parallel with the tile-based rasterization stage. This actually has a nice implication for emulating the Dreamcast: both processes are effectively atomic over an entire frame, so they can be pipelined against each other. I don't see any way in which moving vertex shading onto the GPU and adding pixel shading would get in the way of this, but if you see something, let me know...
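To make the frame-level parallelism concrete, here is a minimal sketch of how an emulator could overlap binning of frame N with rasterization of frame N-1, each stage treating a whole frame as one unit of work. All the names here (DisplayList, BinnedFrame, bin_frame, rasterize_frame) are my own illustrations, not taken from any real emulator or from the CLX2 itself:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

struct DisplayList { int frame_id; };   // one frame's submitted geometry
struct BinnedFrame { int frame_id; };   // per-tile triangle lists, ready to draw

// Stage 1: sort the frame's triangles into tile bins (stub).
static BinnedFrame bin_frame(const DisplayList& dl) { return {dl.frame_id}; }
// Stage 2: render each tile from its bin list (stub).
static void rasterize_frame(const BinnedFrame&) {}

// Handoff queue between the two stages; empty optional = end of stream.
class Handoff {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::optional<BinnedFrame>> q;
public:
    void push(std::optional<BinnedFrame> f) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(f)); }
        cv.notify_one();
    }
    std::optional<BinnedFrame> pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        auto f = std::move(q.front());
        q.pop();
        return f;
    }
};

int main() {
    Handoff handoff;
    // Rasterizer thread: always one frame behind the binner.
    std::thread raster([&] {
        while (auto frame = handoff.pop())
            rasterize_frame(*frame);
    });
    // Binner (main thread): bin frame N while frame N-1 rasterizes.
    for (int n = 0; n < 3; ++n)
        handoff.push(bin_frame(DisplayList{n}));
    handoff.push(std::nullopt);          // signal end of stream
    raster.join();
}
```

The point is just that nothing about vertex or pixel shading touches this handoff: shading changes what each stage does internally, not the frame-granularity boundary between them.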
I think you are suggesting that, if the two stages were not run in parallel, the memory for the backbuffer and for the tile bin lists could be shared.
About the lists themselves, I don't really see how the bitmap concept works with TBDR: the triangle would have to be at least partially rasterized just to determine which pixels it occupies, which is essentially what a traditional renderer with early Z-test does. It would end up touching the same framebuffer regions in two completely separate passes, which is not cache friendly.

Also note that the polygons-per-second limitation comes from binning, not from geometry calculations (compare the GeForce 3, which claimed much higher polygon transformation rates on the strength of its shaders). If binning were done per pixel, the maximum polygon count per frame would be a function of the screen area the polygons cover, not the number of vertexes. So I really believe the binning produces, per tile, lists of which triangles fall in that tile, and not per-pixel-per-tile bitmaps.

Yes, such a list can grow up to the total triangle count, but there are any number of ways of dealing with growing lists (and compromises between those and fixed-size ones). Not that I can say exactly what algorithm and data structures the binning uses, but in real scenes triangles tend to be fairly evenly distributed. It might be N triangles per chunk and a linked list of chunks drawn from a global pool, as sketched below.
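Here is a sketch of the kind of structure I'm guessing at: each tile heads a linked list of fixed-size chunks of triangle indices, with all chunks allocated from one global pool shared across tiles. The names, the chunk size, and the growable std::vector pool are assumptions for illustration, not known CLX2 internals:

```cpp
#include <cstdint>
#include <vector>

constexpr int kTrisPerChunk = 8;        // assumed chunk capacity, not a known value

struct BinChunk {
    uint32_t tris[kTrisPerChunk];       // indices into the frame's triangle list
    int count = 0;                      // triangles stored in this chunk
    int next = -1;                      // pool index of next chunk, -1 = end of list
};

struct TileBins {
    std::vector<BinChunk> pool;         // global chunk pool shared by all tiles
    std::vector<int> tile_head;         // per-tile index of first chunk (-1 = empty)
    std::vector<int> tile_tail;         // per-tile index of last chunk

    explicit TileBins(int num_tiles)
        : tile_head(num_tiles, -1), tile_tail(num_tiles, -1) {}

    // Append one triangle index to a tile's list, taking a fresh
    // chunk from the pool when the tail chunk is full (or absent).
    void add(int tile, uint32_t tri) {
        int tail = tile_tail[tile];
        if (tail < 0 || pool[tail].count == kTrisPerChunk) {
            int fresh = static_cast<int>(pool.size());
            pool.emplace_back();
            if (tail < 0) tile_head[tile] = fresh;
            else          pool[tail].next = fresh;
            tile_tail[tile] = fresh;
            tail = fresh;
        }
        pool[tail].tris[pool[tail].count++] = tri;
    }
};
```

A triangle overlapping several tiles just has its index appended to each tile it touches, so memory grows with the number of (triangle, tile) pairs rather than with pixel coverage, which is consistent with the binning-limited polygon rate above.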