NIS.ai
Preprocessing
Clarify.ai
This function removes out-of-focus blur from the source images using neural networks. It is intended for widefield images and works best for thick samples. It is the preferred choice for under-sampled images, whereas deconvolution is preferred for well-sampled images.
See Deconvolution.
Clarify.ai requires valid image metadata (similar to deconvolution). It is a parameterless method which neither increases resolution nor denoises the image; however, it can be combined with Denoise.ai. Check the Denoise.ai check box next to a channel to perform denoising before clarifying. Check this box only for very noisy images with an SNR value smaller than 20.
- Modality To handle the out-of-focus planes correctly, it is important to know how exactly the image sequence has been acquired. Select the proper microscopic modality from the combo box.
- Pinhole size Depending on the Modality setting, set the pinhole/slit size value and choose the proper units.
- Magnification of the objective used to capture the image sequence.
- Numerical Aperture of the objective.
- Immersion Refractive Index of the medium used.
- Calibration in μm/px.
- Channels selects which channels will be clarified and which will be denoised. You can also adjust the emission wavelength.
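The choice between Clarify.ai (under-sampled images) and deconvolution (well-sampled images) comes down to whether the pixel calibration satisfies the Nyquist criterion. The sketch below illustrates one common rule of thumb for widefield imaging; the exact wavelength, NA, and pixel-size values are illustrative, not taken from this manual.

```python
def nyquist_pixel_size_nm(emission_wavelength_nm, numerical_aperture):
    """Approximate lateral Nyquist pixel size for widefield imaging.

    Rule of thumb: optical (Rayleigh) resolution is lambda / (2 * NA),
    and Nyquist sampling requires at least two pixels per resolvable
    distance, i.e. a pixel size of at most lambda / (4 * NA).
    """
    return emission_wavelength_nm / (4.0 * numerical_aperture)

# Example: GFP-like emission (~520 nm) with a 1.4 NA oil objective.
limit = nyquist_pixel_size_nm(520, 1.4)   # about 93 nm
pixel_size_nm = 160                        # e.g. a 0.16 um/px calibration
undersampled = pixel_size_nm > limit       # True -> Clarify.ai preferred
```

A pixel size larger than the Nyquist limit means the image is under-sampled, which is the regime where Clarify.ai is recommended over deconvolution.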
Restore.ai
Opens the Restore.ai dialog window. This function is designed for cases where denoising and deconvolution are combined. It can be applied to all types of fluorescence images (widefield, confocal, 2D/3D, etc.).
- Modality To handle the out-of-focus planes correctly, it is important to know how exactly the image sequence has been acquired. Select the proper microscopic modality from the combo box.
- Magnification of the objective used to capture the image sequence.
- Numerical Aperture of the objective.
- Refractive Index of the immersion medium used. Some predefined refractive indices of different media are available in the nearby pull-down menu.
- Calibration in μm/px.
- Channels produced by the camera are listed in this table. Decide which channel(s) will be processed by checking the check boxes next to the channel names. The emission wavelength value may be edited (except for the Live De-Blur method).
Denoise.ai
Denoise.ai is a deep learning-based denoising algorithm. It uses a convolutional neural network trained on thousands of confocal (resonant and galvano) and widefield images to remove shot noise, the dominant noise component in low-light microscopy, while preserving signal intensity and structure. The algorithm operates in real time on GPU(s), supporting both live and post-acquisition processing. Denoise.ai improves image quality without increasing exposure or averaging, enabling faster acquisition and lower illumination power. It requires spatially uncorrelated noise and is therefore not compatible with sensors such as the Nikon Qi2. Denoise.ai can be used on a time-lapse or on a single frame. It is best suited to static scenes, because moving objects may get blurred.
For more information please see Nikon NIS-Elements Denoise.ai Software: utilizing deep learning to denoise confocal data.
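The SNR guideline above (use denoising below an SNR of about 20) follows from the statistics of shot noise: for Poisson-distributed photon counts, the noise is the square root of the signal, so the shot-noise-limited SNR is sqrt(N). A minimal sketch:

```python
import math

def shot_noise_limited_snr(photons_per_pixel):
    """For Poisson (shot) noise, signal = N and noise = sqrt(N),
    so the best achievable SNR is N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons_per_pixel)

def photons_for_snr(target_snr):
    """Photon count needed to reach a given shot-noise-limited SNR."""
    return target_snr ** 2

# An SNR of 20 corresponds to about 400 detected photons per pixel;
# images acquired well below this level are shot-noise dominated.
```

This is why lowering exposure or illumination power pushes an image into the regime where Denoise.ai helps most.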
ND Denoise.ai
Cells Localization.ai
Detects cells and outputs a binary image with dots on the detected cell centers.
Works only on images with brightfield modality and in a narrow Z range around the focus (+/- 25 µm).
Transformations
Enhance.ai
For the function description please see Enhance.ai.
- Trained AI Selects the trained network from a file (click Browse to locate the *.eai file).
- Details… Opens metadata associated with training of the currently selected neural network.
Convert.ai
For the function description please see Convert.ai.
- Trained AI Selects the trained network from a file (click Browse to locate the *.cai file).
- Details… Opens metadata associated with training of the currently selected neural network.
Segmentation
Segment.ai
For the function description please see Segment.ai.
- Trained AI Selects the trained network from a file (click Browse to locate the *.sai file).
- Details… Opens metadata associated with training of the currently selected neural network.
- Advanced Reveals post-processing tools and restrictions used for enhancing the results of the neural network.
Segment Objects.ai
For the function description please see Segment Objects.ai.
- Trained AI Selects the trained network from a file (click Browse to locate the *.oai file).
- Details… Opens metadata associated with training of the currently selected neural network.
- Advanced Reveals post-processing tools and restrictions used for enhancing the results of the neural network.
Trained files
Select Trained File ai
Selects an appropriate trained AI file according to the sample objective magnification. Output is expected to be used as a dynamic input parameter for another AI node. Paths to the trained AI files, either relative or absolute, can be defined using a standard regular expression.
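To illustrate how a trained file can be selected by objective magnification, the sketch below matches file names against a regular expression. The file-naming scheme, paths, and pattern are hypothetical; the actual convention depends on how your trained AI files are organized.

```python
import re

# Hypothetical naming scheme for trained files; adapt the paths and
# the pattern to your own *.sai/*.eai file layout.
trained_files = [
    r"C:\AI\nuclei_10x.sai",
    r"C:\AI\nuclei_20x.sai",
    r"C:\AI\nuclei_40x.sai",
]

def select_trained_file(magnification, files):
    """Return the first trained file whose name encodes the given
    objective magnification (e.g. '_20x.sai')."""
    pattern = re.compile(r"_%dx\.\w+ai$" % magnification)
    for path in files:
        if pattern.search(path):
            return path
    return None
```

For example, `select_trained_file(20, trained_files)` picks the 20x file, and a magnification with no matching file yields `None`.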
Measurement
Cells Presence.ai
Detects whether cells are present in the brightfield image. Works even for out-of-focus images.
- Network Selects the trained network used for the cells presence detection.
- The Recommended option should be used in most cases, as it is trained on a larger dataset and generally delivers better results. However, in rare situations, you can switch to the Legacy network if necessary.
Node outputs:
- Verdict is 1 if cells are present, 0 if they are not present.
- Confidence of the detection, ranging from 0 (not confident) to 1 (very confident).
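In an automated workflow the two outputs are typically combined into a gating decision. The helper below is a hypothetical sketch, not part of the node itself; the confidence cutoff of 0.8 is an illustrative choice.

```python
def should_acquire(verdict, confidence, min_confidence=0.8):
    """Hypothetical gating rule for an automated acquisition loop:
    proceed only when cells are detected (verdict == 1) with
    sufficient confidence; otherwise skip the field of view."""
    return verdict == 1 and confidence >= min_confidence

# should_acquire(1, 0.95) -> True; should_acquire(1, 0.5) -> False
```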
Quality Estimate.ai
Estimates the Signal to Noise Ratio (“SNR”) value.
Evaluation
Segmentation Accuracy
Evaluates segmentation accuracy at the pixel level. This node has two inputs - GT (Ground Truth) and Pred (Prediction). It compares the ground truth binary layer with the predicted binary layer generated by segmentation using AI.
This node calculates the metrics directly on pixel data without considering any objects, so TP, FP and FN are expressed as numbers of pixels, not numbers of objects, and no IoU threshold is used:
- true positives (TP) - pixels segmented correctly,
- false positives (FP) - pixels segmented incorrectly and
- false negatives (FN) - pixels missed by the segmentation.
Based on these numbers it calculates:
- precision = TP / (TP + FP),
- recall = TP / (TP + FN) and
- F1 = 2 x precision x recall / (precision + recall)
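The pixel-level metrics above can be sketched as follows, with the two binary layers given as flat sequences of 0/1 values:

```python
def pixel_segmentation_accuracy(gt, pred):
    """Pixel-level TP/FP/FN and the derived precision, recall and F1
    for two binary masks of equal length."""
    tp = sum(1 for g, p in zip(gt, pred) if g and p)
    fp = sum(1 for g, p in zip(gt, pred) if not g and p)
    fn = sum(1 for g, p in zip(gt, pred) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"TP": tp, "FP": fp, "FN": fn,
            "precision": precision, "recall": recall, "F1": f1}

# gt = [1, 1, 0, 0] vs pred = [1, 0, 1, 0] gives TP=1, FP=1, FN=1,
# so precision = recall = F1 = 0.5.
```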
Object Segmentation Accuracy
Calculates the average precision to evaluate AI performance on objects. This node has two inputs - GT (Ground Truth) and Pred (Prediction). It compares objects in the ground truth binary layer with objects in the predicted binary layer generated by segmentation using AI.
Unlike Segmentation Accuracy, this node works on whole objects, so TP, FP and FN are expressed as numbers of objects, and the IoU threshold decides which objects count as matched:
- true positives (TP) - objects matched correctly,
- false positives (FP) - incorrectly segmented objects and
- false negatives (FN) - incorrectly missed objects.
Based on these numbers it calculates:
- precision = TP / (TP + FP),
- recall = TP / (TP + FN) and
- F1 = 2 x precision x recall / (precision + recall)
- IoU Threshold Defines the overlap (intersection over union) above which a ground truth object and a predicted object are considered correctly matched.
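The object-level matching can be sketched as below, with each object represented as a set of pixel indices. The greedy matching strategy is a simplification for illustration; the node's exact matching procedure is not specified here.

```python
def iou(a, b):
    """Intersection over union of two objects given as pixel sets."""
    return len(a & b) / len(a | b)

def object_segmentation_accuracy(gt_objects, pred_objects, iou_threshold=0.5):
    """Greedy object matching: a predicted object counts as a true
    positive if it overlaps an unmatched ground-truth object with
    IoU at or above the threshold."""
    unmatched_gt = list(gt_objects)
    tp = 0
    for pred in pred_objects:
        best = max(unmatched_gt, key=lambda g: iou(g, pred), default=None)
        if best is not None and iou(best, pred) >= iou_threshold:
            unmatched_gt.remove(best)
            tp += 1
    fp = len(pred_objects) - tp
    fn = len(unmatched_gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"TP": tp, "FP": fp, "FN": fn,
            "precision": precision, "recall": recall}
```

For example, a predicted object covering 3 of 4 pixels of a ground-truth object has IoU 0.75 and is matched at the default threshold, while a prediction with no overlap counts as a false positive.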