When using arivis and looking at the resource monitor, it can appear that the software is not using all of the available CPU, RAM, or GPU capacity. This article explains why this is so.
Introduction
From the start, arivis was built on the principle that file size should never be the limiting factor. The main bottlenecks when processing large datasets are CPU processing power, the amount of available RAM, and GPU processing capacity, and each of these affects the ability to process images in a different way. Looking at the usage of PC resources in the Task Manager can give the impression that arivis is not using all of the computer's resources. This article explains in more detail why the Task Manager doesn't tell the whole story and why arivis does use all the resources it needs.
RAM availability
Any data that the CPU processes must be held in the computer's memory (usually shortened to RAM). We also need enough RAM to hold the result of the operation, which means we need roughly twice as much RAM as the data we are trying to process. Since RAM is expensive, and most high-powered workstations can't be configured with more than 512GB without moving to much more expensive server machines, this limits our ability to process datasets that are larger than the available RAM.
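As a very rough illustration (the dimensions and pixel type below are made-up example values, not a specific arivis dataset), the memory needed for such an out-of-place operation can be estimated in a few lines of Python:

```python
import math

# Hypothetical dataset dimensions and pixel type, for illustration only.
shape = (2048, 2048, 2048)   # Z, Y, X voxels
bytes_per_voxel = 2          # 16-bit pixels

dataset_gb = math.prod(shape) * bytes_per_voxel / 1024**3
print(f"dataset size:          {dataset_gb:.0f} GB")     # 16 GB
print(f"RAM needed (in + out): {dataset_gb * 2:.0f} GB") # 32 GB
```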
To better explain why RAM is important, let's consider the simple case of a Mean denoising filter.
Noise is a common problem in imaging, and a simple way to minimise the effect of noise is to consider the value of individual pixels within the context of their immediate neighbourhood.
If we look at this group of pixels:
The pixel in the center is clearly a lot brighter than most of the pixels around it. To soften the effect of noise we can look at each pixel in turn, group it with the pixels immediately around it, and change its value to the mean of the group. If we do this for every pixel in the image, we have applied a Mean filter.
The way the computer processes this is to load this group of pixels into memory together; we call this group a kernel. We process the kernel and store the mean value in the central pixel. However, if we wrote that value back into the existing image we would affect how the next pixel is calculated, and the effect would compound throughout the image. Instead, we write the result into a new image and repeat the process for the whole image, loading each kernel and writing its output to the corresponding pixel in the destination image.
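To make the idea concrete, here is a minimal, deliberately unoptimised sketch of a Mean filter in Python with NumPy. It only illustrates the principle described above and is not how arivis implements its filters:

```python
import numpy as np

def mean_filter(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Naive 2D Mean filter: each output pixel becomes the average of the
    (2*radius+1) x (2*radius+1) neighbourhood (the "kernel") around it."""
    # Pad the borders so that edge pixels also have a full neighbourhood.
    padded = np.pad(image, radius, mode="edge")
    # Write results into a *new* image so that already-averaged pixels
    # never feed back into their neighbours' kernels.
    output = np.empty(image.shape, dtype=np.float32)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            kernel = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            output[y, x] = kernel.mean()
    return output

# Example: smooth a small noisy test image.
noisy = np.random.randint(0, 255, size=(64, 64)).astype(np.float32)
smoothed = mean_filter(noisy, radius=1)
```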
Normally, the image we're working on is stored on the hard disk, and reading from a hard disk is slow compared to reading from memory, so traditionally we would load the whole image into memory and then process the kernels one at a time, progressively building up the output image, which we also keep in memory. This means we would need at least twice as much memory as the data we're trying to process, not including the memory required just for the computer to function.
However, as mentioned above, RAM is expensive compared to hard disk storage. RAM is typically 10x more expensive per GB than even a very fast SSD, and 100x more expensive than spinning disk drives. If we wanted to process a 1TB dataset using traditional methods we would need 2TB of RAM, which would cost tens of thousands of euros, whereas we could store the same dataset on a fast SSD for a few hundred euros.
To this end, arivis is built to process images in small blocks, one at a time, stitching the results together into a temporary document that is stored on the hard disk. Block sizes and overlaps are handled carefully to avoid running out of memory and to avoid affecting the accuracy of the processing operation.
Typically this results in arivis loading around 2GB of data into the RAM at a time, processing the information in that block, and writing the results to the hard disk.
Some processes, like image filtering, use a pixel's neighbourhood to compute the result (e.g. denoising, morphological filters, filter-based segmentation). These are processed with a margin of overlap between blocks equal to the radius of the filter, so that the results are not affected by the blocking. Furthermore, most filtering kernels are limited to a maximum diameter of 256 pixels, so that blocks remain small enough for most computers without excessive double processing of the data at the edges of the blocks.
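The following sketch illustrates the general idea of block-wise filtering with an overlap margin. It is a simplified 2D example with made-up block and radius values, not the arivis implementation (which, as described above, works on much larger 3D blocks):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def process_in_blocks(image: np.ndarray, block_size: int = 512, radius: int = 8) -> np.ndarray:
    """Block-wise Mean filtering: each block is read with a margin of
    `radius` extra pixels, filtered, and the margin is trimmed away before
    the block is stitched into the output."""
    output = np.empty_like(image)
    for y0 in range(0, image.shape[0], block_size):
        for x0 in range(0, image.shape[1], block_size):
            y1 = min(y0 + block_size, image.shape[0])
            x1 = min(x0 + block_size, image.shape[1])
            # Read the block plus its overlap margin (clamped at the image edges).
            ry0, rx0 = max(y0 - radius, 0), max(x0 - radius, 0)
            ry1, rx1 = min(y1 + radius, image.shape[0]), min(x1 + radius, image.shape[1])
            filtered = uniform_filter(image[ry0:ry1, rx0:rx1], size=2 * radius + 1)
            # Keep only the block's interior so seams between blocks are invisible.
            output[y0:y1, x0:x1] = filtered[y0 - ry0:(y0 - ry0) + (y1 - y0),
                                            x0 - rx0:(x0 - rx0) + (x1 - x0)]
    return output

# Example: the blocked result matches filtering the whole image in one pass.
image = np.random.rand(2000, 3000)
assert np.allclose(process_in_blocks(image), uniform_filter(image, size=17))
```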
So when we look at the Task Manager while executing a pipeline, we might see the memory usage increase by about 2-4GB, and see disk usage spike every time the results of the current block are written and the next one is loaded.
Note that if we look at the CPU core usage, some individual operations are so fast that, given the refresh rate of the Task Manager window, it may only show 50% CPU usage. Typically this means we are using 100% of the CPU, but only 50% of the time, with the remainder spent loading and unloading data.
Note also that while arivis uses parallelisation wherever possible to speed up processing, not all computing tasks can be parallelised, and the Task Manager will reflect that by showing that on occasion only one core is active.
Visualising images
This topic is covered in more detail here, but in short, most computers are equipped with displays of 2-8 million pixels, while a 1TB dataset is likely to contain hundreds of billions of pixels. Clearly it is not possible to show every single pixel of the image on the display at once. Traditionally this means loading the image into memory and only displaying as many pixels as fit on the screen, either by subsampling (e.g. showing every 100th pixel) or by cropping (ignoring pixels outside the field of view).
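As a simple illustration of the subsampling approach, the sketch below keeps only every n-th pixel in each direction so that a 2D plane never has more pixels than the display (the image and screen sizes are just example values):

```python
import numpy as np

def fit_to_screen(plane: np.ndarray, screen_px: tuple = (1080, 1920)) -> np.ndarray:
    """Subsample a 2D image plane so it never has more pixels than the display:
    only every step-th pixel in Y and X is kept."""
    step_y = -(-plane.shape[0] // screen_px[0])  # ceiling division
    step_x = -(-plane.shape[1] // screen_px[1])
    return plane[::max(1, step_y), ::max(1, step_x)]

# A hypothetical 10,000 x 10,000 pixel plane reduced to roughly screen resolution.
huge_plane = np.zeros((10_000, 10_000), dtype=np.uint16)
preview = fit_to_screen(huge_plane)   # about 1000 x 1667 pixels
```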
In this screenshot you can see that we are only displaying in the viewer a small portion of the image, and even this portion only at a reduced resolution:
So why would we need to load 1TB of data in RAM if we only end up displaying a few megabytes' worth of data at a time? Well, arivis doesn't. It uses a very efficient file format that allows it to load into RAM only the pixels it can display, as and when they are needed. This means that if you look at the Task Manager with an arivis window open, you will see that arivis might use as little as 500MB of memory even with a very large dataset open, and most of that memory requirement comes simply from having the program open.
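arivis uses its own file format for this, but the same principle can be illustrated with any chunked, on-disk array format. In the sketch below the Zarr library stands in for such a format (the file name is hypothetical); slicing a small region reads only the chunks that intersect it, not the whole dataset:

```python
import zarr

# Hypothetical chunked volume on disk; opening it reads only metadata, not pixels.
volume = zarr.open("huge_volume.zarr", mode="r")

# Slicing loads just the chunks that intersect the requested region, so
# displaying a 2048 x 2048 field of view from one plane costs a few megabytes,
# not the whole (possibly terabyte-sized) dataset.
field_of_view = volume[500, 10_000:12_048, 10_000:12_048]
```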
Visualising 3D datasets
Again this topic is covered in more detail here, but in short, for 3D visualisation we typically use the graphics card (GPU), and the GPU can be thought of as a computer within the computer.
The GPU has its own computing cores, its own clock rate, its own memory, etc., and the amount of data a GPU can process is limited by these factors. Again, there are no consumer GPUs with anything close to 1TB of memory. Typically, GPUs come with 2-20GB of video RAM (VRAM), with some high-end server GPUs going up to 128GB. However, being able to hold that much data in memory doesn't mean the GPU can render those pixels on screen in 3D within the time needed for an interactive visualisation.
For an interactive visualisation we need to render an image at least every 100ms, though most users would consider 10fps laggy and prefer something closer to 30-60fps. At 60fps we only have around 16ms to calculate and display each image, and no GPU comes anywhere near rendering more than around 2 gigapixels of 3D data in that time. Therefore, arivis subsamples the dataset into the GPU's RAM and displays the subsampled dataset as fast as possible.
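As a back-of-the-envelope illustration (the memory budget, voxel size and volume shape below are made-up values, not arivis internals), a per-axis subsampling factor could be chosen like this:

```python
import math

def downsample_factor(volume_shape, bytes_per_voxel=2, budget_gb=4.0):
    """Choose a single per-axis subsampling factor so that the subsampled
    volume fits within an (illustrative) GPU memory budget."""
    budget_voxels = budget_gb * 1024**3 / bytes_per_voxel
    ratio = math.prod(volume_shape) / budget_voxels
    return max(1, math.ceil(ratio ** (1 / 3)))

# A hypothetical ~1TB 16-bit volume against a 4GB budget:
print(downsample_factor((8192, 8192, 8192)))  # -> 7, i.e. keep every 7th voxel per axis
```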
Again, if we look at the Task Manager, it will appear as if the software is only using 10-20% of the GPU resources, but using more would usually result in a noticeable drop in performance.
Conclusions
The Task Manager is a very useful feature of the Windows operating system, often allowing users to identify applications that may be stuck or unresponsive, and giving clues as to the cause of such issues. However, it is not necessarily a good way to measure how well a system is being used, partly because this tool was created before many modern computing methods were developed, and partly because if we wanted it to tell us the full story it would need most of the resources we are asking it to monitor.
That is not to say that arivis software is perfectly optimised; arivis software engineers are constantly working to improve its performance. If you are concerned, as a user, that your system may not be working optimally, you should contact your local ZEISS arivis support representative, who can help you identify any issues and suggest optimisations, or forward your feedback to our engineering team so that we can work towards an even better solution.