
#The bigger picture#
A major challenge for biologists studying cell dynamics in 3D is the labor-intensive manual segmentation and measurement of cells. Plant biologists seek to study the dynamic deformations of stomatal guard cells, which can allow plants to use water more efficiently, a major limiting factor in global agricultural production and an area of increasing concern due to climate change. With this aim, we present a one-stage automated segmentation network for 3D images of stomatal guard cells (3D CellNet) that enables rapid and accurate morphological measurements. When applied to 3D confocal data, it allowed us to quantitatively test how neighboring pavement cells in the epidermis of Arabidopsis thaliana plants impose physical constraints on stomatal complexes, a refinement of the "polar stiffening" model of stomatal biomechanics. We anticipate that our model will allow biologists to efficiently test new hypotheses on cell dynamics and biomechanics with a handful of labeled 3D images of their own.

#Summary#
Automating the three-dimensional (3D) segmentation of stomatal guard cells and other confocal microscopy data is extremely challenging due to hardware limitations, hard-to-localize regions, and limited optical resolution. We present a memory-efficient, attention-based, one-stage segmentation neural network for 3D images of stomatal guard cells. Our model is trained end to end and achieved expert-level accuracy while leveraging only eight human-labeled volume images. As a proof of concept, we applied our model to 3D confocal data from a cell ablation experiment that tests the "polar stiffening" model of stomatal biomechanics.

This work presents a comprehensive, automated, computer-based volumetric analysis of fluorescent guard cell images. Here, we developed a patch-wise, attention-gated, fully convolutional 3D framework for guard cell segmentation (Figure 1), which requires only a few hand-annotated volumetric images to train. There are three key parts to our model: (1) two subnetworks trained on different image resolutions are fused to improve the model's robustness to different scales; (2) input images are divided into small patches, and the model is trained only on those patches, which makes it more memory efficient when analyzing large volumetric images; and (3) volumetric attention gates improve the model's sensitivity and accuracy by making it automatically target relevant regions while suppressing irrelevant ones, which eliminates the need for an external volume-of-interest (VOI) localization network, a component used frequently in previous approaches. We also employ a combined Dice and focal loss to mitigate the inherent class imbalance. In addition, we provide an extended pipeline that automatically measures the volume of each guard cell and the width and length of each stomatal complex, so these biologically relevant parameters can be analyzed in a fully automated fashion.

We applied our automated segmentation pipeline to 3D confocal imaging data from an experimental study in which different subsets of neighboring pavement cells were ablated to determine the effects of these ablations on guard cell morphology. The resulting data allow us to refine the polar stiffening model. We anticipate that our model will allow biologists to rapidly test cell mechanics and dynamics and help them identify plants that use water more efficiently, a major limiting factor in global agricultural production and an area of critical concern during climate change.
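The patch-wise training scheme described above can be illustrated with a minimal numpy sketch that slides a 3D window over a volume and collects overlapping patches. The patch size and stride below are illustrative placeholders, not values reported in this work.

```python
import numpy as np

def extract_patches(volume, patch_size=(64, 64, 64), stride=(32, 32, 32)):
    """Collect overlapping 3D patches by sliding a window over a volume.

    Patch size and stride are illustrative; training on small patches
    bounds memory use regardless of the full volume's size.
    """
    pz, py, px = patch_size
    sz, sy, sx = stride
    Z, Y, X = volume.shape
    patches, origins = [], []
    for z in range(0, Z - pz + 1, sz):
        for y in range(0, Y - py + 1, sy):
            for x in range(0, X - px + 1, sx):
                patches.append(volume[z:z + pz, y:y + py, x:x + px])
                origins.append((z, y, x))
    return np.stack(patches), origins

# A 128^3 volume with 64^3 patches and stride 32 gives 3 window
# positions per axis, i.e. 27 patches in total.
volume = np.zeros((128, 128, 128), dtype=np.float32)
patches, origins = extract_patches(volume)
```

At inference time, per-patch predictions would be stitched back at their recorded origins (averaging in the overlap regions), which is why the origins are returned alongside the patches.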
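The volumetric attention gates can be sketched with the standard additive attention-gate formulation. Since 1x1x1 convolutions act independently on each voxel, they are modeled here as channel-mixing matrix products; the weight matrices and array shapes are random stand-ins for learned parameters, not the model's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, gating, w_f, w_g, psi):
    """Additive attention gate on a (Z, Y, X, C) feature map.

    Projects the feature map and the gating signal to a shared space,
    then produces a per-voxel coefficient in (0, 1) that rescales the
    features: relevant voxels are kept, irrelevant ones are suppressed.
    """
    joint = np.maximum(features @ w_f + gating @ w_g, 0.0)  # ReLU(W_f f + W_g g)
    mask = sigmoid(joint @ psi)   # per-voxel attention coefficient, shape (Z, Y, X, 1)
    return features * mask        # broadcast over channels

C, H = 8, 4  # feature channels, intermediate channels (illustrative)
features = rng.standard_normal((16, 16, 16, C))
gating = rng.standard_normal((16, 16, 16, C))
w_f = rng.standard_normal((C, H))
w_g = rng.standard_normal((C, H))
psi = rng.standard_normal((H, 1))
gated = attention_gate(features, gating, w_f, w_g, psi)
```

Because the mask lies strictly in (0, 1), the gate can only attenuate voxels, never amplify them, which is what lets it stand in for a separate VOI-localization stage.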
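The combined Dice and focal loss used against class imbalance can be sketched as follows. The equal weighting and the focusing parameter gamma = 2 are common defaults, not values reported in this work, and the sketch is for the binary (voxel-wise) case.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P n T| / (|P| + |T|), robust to imbalance
    because it normalizes by the foreground size."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: cross-entropy with easy voxels down-weighted
    by (1 - p_t)^gamma, so abundant background dominates less."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def combined_loss(pred, target, w_dice=0.5, w_focal=0.5):
    """Weighted sum of the two terms; the weights here are assumptions."""
    return w_dice * dice_loss(pred, target) + w_focal * focal_loss(pred, target)

target = np.array([0.0, 1.0, 1.0, 0.0])
perfect = combined_loss(target, target)       # near zero
wrong = combined_loss(1.0 - target, target)   # large
```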
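The outputs of the extended measurement pipeline (per-cell volume, stomatal-complex width and length) can be approximated directly from a binary segmentation mask, as in the sketch below. The voxel dimensions are illustrative values, not the acquisition settings used in the study, and the width/length computation assumes the complex is roughly axis-aligned.

```python
import numpy as np

# Voxel dimensions (z, y, x) in microns -- illustrative, not the
# actual confocal acquisition settings.
VOXEL_SIZE = (0.5, 0.2, 0.2)

def guard_cell_volume(mask, voxel_size=VOXEL_SIZE):
    """Volume of one segmented guard cell: voxel count times voxel volume."""
    return float(mask.sum()) * float(np.prod(voxel_size))

def complex_width_length(mask, voxel_size=VOXEL_SIZE):
    """Width and length of a stomatal complex from the tight bounding
    box of its binary mask, taking x as the long axis."""
    zs, ys, xs = np.nonzero(mask)
    width = float((ys.max() - ys.min() + 1) * voxel_size[1])
    length = float((xs.max() - xs.min() + 1) * voxel_size[2])
    return width, length

# A 2 x 5 x 8 voxel block: volume = 80 voxels * 0.02 um^3 = 1.6 um^3,
# width = 5 * 0.2 um = 1.0 um, length = 8 * 0.2 um = 1.6 um.
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:4, 1:6, 0:8] = 1
volume_um3 = guard_cell_volume(mask)
width_um, length_um = complex_width_length(mask)
```

For tilted complexes, a principal-axis fit (e.g. PCA on the foreground coordinates) would replace the axis-aligned bounding box, but the bounding-box version is enough to convey the idea.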
