Distributed terascale volume visualization using distributed shared virtual memory

Johanna Beyer, Markus Hadwiger, Jens Schneider, Wonki Jeong, Hanspeter Pfister

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Table 1 illustrates the impact of different distribution unit sizes, screen resolutions, and numbers of GPU nodes. We use two and four GPUs (NVIDIA Quadro 5000 with 2.5 GB of memory) and a mouse cortex EM dataset (see Figure 2) with a resolution of 21,494 × 25,790 × 1,850 voxels (955 GB). The size of the virtual distribution units significantly influences the data distribution between nodes. Small distribution units result in high depth complexity for compositing, whereas large distribution units lead to low GPU utilization, because in the worst case only a single distribution unit is in view and is rendered by a single node. The optimal distribution unit size depends on three major factors: the output screen resolution, the block cache size on each node, and the number of nodes. Currently, we are working on optimizing the compositing step and the network communication between nodes. © 2011 IEEE.
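The trade-off described above can be made concrete with a back-of-the-envelope calculation. The following is a minimal sketch, not from the paper, assuming cubic distribution units of a given edge length (in voxels); the function name and the specific metrics are illustrative assumptions. Shrinking the unit size multiplies the number of units to composite per pixel, while growing it shrinks the pool of units that can be spread across nodes:

```python
import math

def distribution_unit_stats(volume_dims, unit_edge, num_nodes):
    """Hypothetical illustration of the distribution-unit trade-off.
    volume_dims: (x, y, z) volume size in voxels.
    unit_edge:   edge length of a cubic virtual distribution unit, in voxels.
    num_nodes:   number of GPU nodes sharing the volume.
    Returns (total_units, avg_units_per_node, worst_case_depth_complexity)."""
    # Number of units tiling the volume along each axis.
    units_per_axis = [math.ceil(d / unit_edge) for d in volume_dims]
    total_units = units_per_axis[0] * units_per_axis[1] * units_per_axis[2]
    # With fewer units than nodes, some nodes inevitably sit idle.
    avg_units_per_node = total_units / num_nodes
    # An axis-aligned view ray crosses at most max(units_per_axis) units,
    # so smaller units raise the compositing depth complexity.
    worst_case_depth = max(units_per_axis)
    return total_units, avg_units_per_node, worst_case_depth

# Mouse cortex EM dataset from the paper: 21,494 x 25,790 x 1,850 voxels.
dims = (21494, 25790, 1850)
small = distribution_unit_stats(dims, 128, 4)   # many units, deep compositing
large = distribution_unit_stats(dims, 4096, 4)  # few units, poor load balance
```

With these assumed sizes, 128-voxel units yield hundreds of thousands of units (high depth complexity), while 4096-voxel units yield only a few dozen, so with few units in view a single node can end up doing all the rendering.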
Original language: English (US)
Title of host publication: 2011 IEEE Symposium on Large Data Analysis and Visualization
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 2
ISBN (Print): 9781467301541
State: Published - Oct 2011


