VisionVR (InViewR) Release Notes 2.0.0 - 2018-02-27
We are proud to announce our biggest update to arivis InViewR so far. Version 2.0.0 brings several new tools that enable users to proofread and edit segmentations created in arivis Vision4D, or to perform their own manual or semi-automatic segmentation in Virtual Reality. It is now also possible to visualize multiple channels at the same time. To meet the demands of these GPU-intensive new tasks, arivis InViewR now also works with Nvidia graphics cards.
Multi-Channel Visualization & Nvidia Graphics Card Support
arivis InViewR 2.0.0 introduces new visualization features such as multi-channel visualization. Up to two channels can be visualized simultaneously, and for images with more than two channels, switching between them is quick and easy in Virtual Reality. Naturally, color and opacity settings can be adjusted for each channel individually. arivis InViewR is now also compatible with Nvidia GPUs, opening the world of powerful VR-capable graphics cards to InViewR users.
Completely new user interface in Virtual Reality
We implemented a completely new user interface in Virtual Reality. The main menu has been tidied up, and many new parameters are now accessible to the user. All annotation parameters, for example, can now be displayed in a list menu while wearing the headset. Tool-specific parameters can be accessed via a quick tool menu on the left hand: pressing the “Y” button opens this context-sensitive menu at any time. Here, the user can adjust tool-specific parameters such as threshold levels or brush size, as well as basic visualization settings such as overall opacity and the visibility of segments, markers, measurements, or clipping planes. The right hand is now consistently the tool hand, where tool functions are executed.
Manually edit segmentation results in Virtual Reality
Often, 3D images are difficult to segment due to a poor signal-to-noise ratio, irregular staining, or signal intensity variations caused by imaging depth. Such situations not only complicate the segmentation process itself but also make proofreading segmentation results on a 2D screen difficult. To solve these problems, we created our biggest addition to arivis InViewR so far: the ability to proofread, edit, and create segments in Virtual Reality. Users can now display segments that were created in arivis Vision4D and proofread the accuracy of their segmentation results using the Virtual Reality headset and their hands.
Our new sculpting tool makes manual editing of segments possible in Virtual Reality. Simply select a segment and manually enlarge it, or delete portions of it, with the sculpting brush. Using the new magic wand tool, new segments can be created automatically at any time, based on opacity or intensity information. It is also possible to create new segments from scratch. With these new tools, users can perform 3D image segmentations where automatic algorithms fail or where plane-by-plane manual segmentation is too cumbersome.
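To give a feel for what an intensity-based magic wand does conceptually, the sketch below grows a 3D segment outward from a seed voxel, accepting neighbors whose intensity stays within a tolerance of the seed. This is a generic region-growing illustration under our own assumptions (function name, parameters, and data layout are hypothetical), not InViewR's actual implementation:

```python
from collections import deque

def magic_wand_3d(volume, seed, tolerance):
    """Conceptual intensity-based region grow (illustrative only).

    volume:    nested lists, volume[z][y][x] holding voxel intensities
    seed:      (z, y, x) voxel the user selects
    tolerance: maximum allowed deviation from the seed intensity
    Returns the set of voxel coordinates belonging to the grown segment.
    """
    zmax, ymax, xmax = len(volume), len(volume[0]), len(volume[0][0])
    seed_value = volume[seed[0]][seed[1]][seed[2]]
    segment, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        # Visit the six face-connected neighbors of the current voxel.
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < zmax and 0 <= ny < ymax and 0 <= nx < xmax:
                voxel = (nz, ny, nx)
                if voxel not in segment and abs(volume[nz][ny][nx] - seed_value) <= tolerance:
                    segment.add(voxel)
                    queue.append(voxel)
    return segment
```

Seeded in a bright structure, the grow stops at voxels whose intensity falls outside the tolerance, which is why tools of this kind work best when the object of interest stands out from its surroundings.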
Integration with Vision4D
arivis InViewR is now closely connected to arivis Vision4D. Images can easily be opened in both programs, and switching between the two applications is just one click away. Segments created in arivis Vision4D can easily be displayed in arivis InViewR. This lets users integrate Virtual Reality proofreading into their large-data 4D image analysis workflow and brings the full big-data image analysis power of arivis Vision4D into the world of Virtual Reality.