PLOS ONE: Multivariate Gaussian Bayes classifier with limited data for segmentation of clean and contaminated regions in the small bowel capsule endoscopy images
Vahid Sadeghi1, Alireza Mehridehnavi 1*, Maryam Behdad2, Alireza Vard1, Mina Omrani3, Mohsen Sharifi4, Yasaman Sanahmadi1, Niloufar Teyfouri
Abstract
A considerable number of undesirable factors in the wireless capsule endoscopy (WCE) procedure hinder proper visualization of the small bowel and lengthen gastroenterologists' reviewing time. The need for objective quantitative assessment of different bowel preparation paradigms, and for saving physician reviewing time, motivated us to present an automatic, low-cost statistical model for segmenting clean and contaminated regions in WCE images. In the model construction phase, only 20 manually pixel-labeled images were used from the normal and reduced mucosal view classes of the Kvasir capsule endoscopy dataset. In addition to calculating the prior probabilities, two probabilistic tri-variate Gaussian distribution models (GDMs) with distinct mean vectors and covariance matrices were fitted to the concatenated RGB color pixel intensity values of the clean and contaminated regions, separately. Applying the Bayes rule, the membership probability of every pixel of the input test image to each of the two classes is evaluated. Robustness was evaluated over 5 trials; in each round, out of 2000 randomly selected images, 20 and 1980 images were used for model construction and evaluation, respectively. Our experimental results indicate that accuracy, precision, specificity, sensitivity, area under the receiver operating characteristic curve (AUROC), dice similarity coefficient (DSC), and intersection over union (IOU) are 0.89 ± 0.07, 0.91 ± 0.07, 0.73 ± 0.20, 0.90 ± 0.12, 0.92 ± 0.06, 0.92 ± 0.05, and 0.86 ± 0.09, respectively. The presented scheme is easy to deploy for objectively assessing the small bowel cleansing score, comparing different bowel preparation paradigms, and decreasing the inspection time. Results on the SEE-AI project dataset and the CECleanliness database showed that the proposed scheme has good adaptability.
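For illustration, a minimal sketch of the per-pixel Gaussian Bayes rule described above is given below, assuming the RGB training pixels have already been extracted from the 20 labeled images; the function names and the use of SciPy are ours, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the per-pixel Gaussian Bayes rule:
# one tri-variate GDM per class plus class priors, applied pixel-wise.
import numpy as np
from scipy.stats import multivariate_normal

def fit_gdm(pixels):
    """Fit a tri-variate Gaussian to an (N, 3) array of RGB training pixels."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    return multivariate_normal(mean=mean, cov=cov)

def segment(image, gdm_clean, gdm_contam, prior_clean, prior_contam):
    """Label each pixel as clean (1) or contaminated (0) with the Bayes rule."""
    rgb = image.reshape(-1, 3).astype(float)
    post_clean = gdm_clean.pdf(rgb) * prior_clean
    post_contam = gdm_contam.pdf(rgb) * prior_contam
    return (post_clean >= post_contam).reshape(image.shape[:2]).astype(np.uint8)
```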
Abstract
Wireless capsule endoscopy (WCE) captures a huge number of images, but only a fraction are medically relevant. We propose automated real-time small bowel visualization quality (SBVQ) assessment to eliminate transmission of irrelevant frames. Our aim is to design lightweight color-based models for segmenting clean and contaminated regions with minimal parameters, short training, and fast inference, suitable for WCE hardware integration. Using the Kvasir Capsule endoscopy dataset, we constructed models based on the distinctive color patterns of clean and contaminated regions. Among the different classifiers trained and evaluated, the k-nearest neighbors (KNN), multilayer perceptron (MLP), and gradient-boosted machine (GBM) obtained superior performance (accuracy: 0.87±0.12, Dice similarity score (DSC): 0.87±0.15, intersection over union (IOU): 0.80±0.19). Logistic regression (LR) had the shortest training and inference times. Our models offer simplicity, compactness, and robustness, delivering satisfactory real-time performance. Evaluation on the SEE-AI project dataset confirms good generalization capabilities, demonstrating a practical solution for WCE image analysis.
KEYWORDS
clean and contaminated, embedded image segmentation, low power consumption, small bowel visualization quality (SBVQ), wireless capsule endoscopy (WCE)
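As a rough illustration of the lightweight per-pixel colour classifiers compared above, the sketch below trains scikit-learn LR and KNN models on RGB values and applies them pixel-wise; the placeholder training arrays and function names are assumptions, not the paper's setup.

```python
# Illustrative sketch only: per-pixel RGB features fed to two of the
# lightweight classifiers named above, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Placeholder training data: X_train holds (N, 3) RGB values in [0, 1],
# y_train holds labels (0 = contaminated, 1 = clean). Real pixels would
# come from the annotated Kvasir Capsule frames.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)

lr = LogisticRegression().fit(X_train, y_train)            # shortest train/infer time
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def segment(image, clf):
    """Classify every pixel of an H x W x 3 image and return a binary mask."""
    rgb = image.reshape(-1, 3).astype(float)
    return clf.predict(rgb).reshape(image.shape[:2])
```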
The stable decoding of movement parameters from neural activity is crucial for the success of brain-machine interfaces (BMIs). However, neural activity can be unstable over time, leading to changes in the parameters used for decoding movement, which can hinder accurate movement decoding. To tackle this issue, one approach is to project neural activity onto a stable, low-dimensional manifold using dimensionality reduction techniques and to align the manifolds across sessions by maximizing their correlations. However, the practical use of manifold stabilization techniques requires knowledge of the true subject intentions, such as target direction or behavioral state. To overcome this limitation, an automatic unsupervised algorithm is proposed that determines movement target intention before manifold alignment in the presence of manifold rotation and scaling across sessions. This unsupervised algorithm is combined with a dimensionality reduction and alignment method to overcome decoder instabilities. The effectiveness of the BMI stabilizer is demonstrated by decoding the two-dimensional (2D) hand velocity of two rhesus macaque monkeys during a center-out reaching movement task. The performance of the proposed method is evaluated using correlation coefficient and R-squared measures, demonstrating higher decoding performance compared to a state-of-the-art unsupervised BMI stabilizer. The results offer benefits for the automatic determination of movement intent in long-term BMI decoding. Overall, the proposed method offers a promising automatic solution for achieving stable and accurate movement decoding in BMI applications.
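A hedged sketch of the general manifold-stabilization idea (not the proposed unsupervised algorithm itself) is shown below: each session's neural activity is reduced with PCA and the later session's manifold is rotated onto the reference one with an orthogonal Procrustes fit; the dimensionality, variable names, and alignment criterion are assumptions.

```python
# General idea only: reduce each session to a low-dimensional manifold and
# align the later session to the reference one (rotation/reflection only).
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes

def align_sessions(day0_activity, dayk_activity, n_dims=10):
    """Return day-k latent activity rotated into the day-0 manifold frame."""
    z0 = PCA(n_components=n_dims).fit_transform(day0_activity)
    zk = PCA(n_components=n_dims).fit_transform(dayk_activity)
    n = min(len(z0), len(zk))                      # match sample counts
    R, _ = orthogonal_procrustes(zk[:n], z0[:n])   # best-fit rotation
    return zk @ R                                  # stabilized latent states
```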
Abstract
Wireless capsule endoscopy (WCE) is capable of noninvasively visualizing the small intestine, the most complicated segment of the gastrointestinal tract, to detect different types of abnormalities. However, its main drawback is reviewing the vast number of captured images (more than 50,000 frames). The recorded images are not always clear, and different contaminating agents, such as turbid materials and air bubbles, degrade the visualization quality of the WCE images. This condition can cause serious problems, such as reduced mucosal visualization, prolonged video reviewing time, and an increased risk of missed pathology. On the other hand, accurately quantifying the amount of turbid fluids and bubbles can indicate potential motility malfunction. To assist in developing computer vision-based techniques, we have constructed the first multicentre, publicly available, clear and contaminated annotated dataset by precisely segmenting 17,593 capsule endoscopy images from three different databases. In contrast to the existing datasets, our dataset has been annotated at the pixel level, discriminating the clear and contaminated regions and subsequently differentiating bubbles and turbid fluids from normal tissue. To create the dataset, we first selected all of the images (2906 frames) in the reduced mucosal view class, covering different levels of contamination, and randomly selected 12,237 images from the normal class of the copyright-free, CC BY 4.0 licensed small bowel capsule endoscopy (SBCE) images of the Kvasir capsule endoscopy database. To mitigate possible bias in the mentioned dataset and to increase the sample size, 2077 and 373 images were stochastically chosen from the SEE-AI project and CECleanliness datasets, respectively, for subsequent annotation. The selected images were annotated with the aid of ImageJ and ITK-SNAP software under the supervision of an expert SBCE reader with extensive experience in gastroenterology and endoscopy. For each image, binary and tricolour ground truth (GT) masks have been created, in which each pixel is indexed into two classes (clear and contaminated) and three classes (bubble, turbid fluid, and normal), respectively.
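As a small illustration of how the two GT masks relate, the sketch below collapses a three-class mask into the binary clear/contaminated mask; the integer coding (0 = normal, 1 = bubble, 2 = turbid fluid) is an assumption for illustration, not the dataset's documented encoding.

```python
# Illustration only: derive the binary GT mask from the tricolour GT mask,
# assuming the (hypothetical) coding 0 = normal, 1 = bubble, 2 = turbid fluid.
import numpy as np

def tricolour_to_binary(tricolour_mask):
    """Collapse the three-class mask into clear (0) vs. contaminated (1)."""
    return (tricolour_mask > 0).astype(np.uint8)  # bubble or turbid fluid -> contaminated
```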
To the best of the authors' knowledge, no clear and contaminated region segmentation has been implemented in capsule endoscopy reading software. The curated multicentre dataset can be utilized to implement applicable segmentation algorithms for identifying clear and contaminated regions and for discriminating bubbles, as well as turbid fluids, from normal tissue in the small intestine.
Since the annotated images come from three different sources, they provide a diverse representation of the clear and contaminated patterns in WCE images. This diversity is valuable for training models that are more robust to variations in data characteristics and can generalize well across different subjects and settings. The inclusion of images from three different centres also allows for robust cross-validation, where computer vision-based models can be trained on one centre's annotated images and evaluated on the others.
Journal of Computational Science: Deep attention network for identifying ligand-protein binding sites
Fatemeh Nazem a,b, Reza Rasti c, Afshin Fassihi d, Alireza Mehri Dehnavi a,*, Fahimeh Ghasemi b,e,**
ABSTRACT
One of the critical aspects of structure-based drug design is choosing important druggable binding sites in the protein's crystallographic structure. As experimental processes are costly and time-consuming, computational drug design using machine learning algorithms is recommended. Over recent years, deep learning methods have been utilized in a wide variety of research applications, such as binding site prediction. In this study, a new combination of attention blocks in the 3D U-Net model, based on semantic segmentation methods, is used to improve the localization of pocket prediction. The attention blocks are tuned to find which points and channels of the features should be emphasized along the spatial and channel axes. Our model's performance is evaluated through extensive experiments on several datasets from different sources, and the results are compared to the most recent deep learning-based models. The results indicate that the proposed attention model (Att-UNet) can predict binding sites accurately; i.e., the overlap of the predicted pocket with the true binding site shows a statistically significant improvement when compared to other state-of-the-art models. The attention blocks may help the model focus on the target structure by suppressing features in irrelevant regions.
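To make the attention mechanism concrete, a CBAM-style channel-plus-spatial attention module for 3D feature maps is sketched below in PyTorch; it illustrates emphasizing points and channels along the spatial and channel axes, but it is not necessarily the exact block used in Att-UNet.

```python
# Not the authors' exact block: a common channel-plus-spatial attention module
# for 3D feature maps, illustrating the weighting along channel and spatial axes.
import torch
import torch.nn as nn

class ChannelSpatialAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):                                   # x: (B, C, D, H, W)
        c = self.channel_mlp(x.mean(dim=(2, 3, 4)))         # per-channel weights
        x = x * c[:, :, None, None, None]
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(s)                     # per-voxel weights
```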
JMSS: Identification of Circular Patterns in Capsule Endoscopy Bubble Frames
Hossein Mir1, Vahid Sadeghi1, Alireza Vard1,2, Alireza Mehri Dehnavi1,2
Abstract
Background: A significant number of frames captured by wireless capsule endoscopy contain varying amounts of bubbles. Whereas different studies have treated bubbles as non-useful agents because they reduce the visualization quality of the small intestine mucosa, this research aims to develop a practical way of assessing the rheological capability of the circular bubbles as a suggestion for future clinical diagnostic purposes.
Methods: From the Kvasir capsule endoscopy dataset, frames with varying levels of bubble engagement were chosen in two categories based on bubble size. Border reflections are present at the edges of round-shaped bubbles, and these edges in the spatial domain correspond to high-frequency bands in the frequency domain. The first step is high-pass filtering of the border reflections using the wavelet transform (WT) and Difference of Gaussian, and the second step applies the Fast Circlet Transform (FCT) and the Hough transform as circle detection tools to the extracted borders, evaluating the distribution and abundance of all bubbles across a variety of radii (an illustrative sketch follows this abstract).
Results: Border extraction using the WT as a preprocessing step allows the circle detection tools to concentrate on high-frequency circular patterns. Consequently, applying the FCT with predefined parameters can specify the range of radii and the abundance of all bubbles in an image. The overall discrimination factors (ODFs) of 15.01 and 7.1 show distinct bubble distributions in the gastrointestinal (GI) tract. The difference in ODF between datasets 1 and 2 suggests a relationship between the rheological properties of bubbles and their coverage area plus their abundance, highlighting the performance of the WT and FCT in determining bubble distributions for diagnostic objectives.
Conclusion: The implementation of an object-oriented attitude in gastrointestinal analysis enables gastroenterologists to approximate the constituent features of intra-intestinal fluids; this cannot be evaluated as long as bubbles are regarded merely as non-useful agents. The results obtained from the datasets proved that the difference between the calculated ODFs can be used as an indicator for estimating intra-intestinal fluids' rheological features, such as viscosity, which helps gastroenterologists evaluate the quality of patient digestion.
Keywords: bubble, small bowel, fast circlet transform, wireless capsule endoscopy, foam analysis, foam metrics, rheological features analysis
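The sketch below illustrates only the circle-detection step described in the Methods above, substituting OpenCV's Hough transform for the FCT (which has no off-the-shelf implementation) and a Difference of Gaussian for border extraction; all parameter values are placeholders rather than the paper's settings.

```python
# Hedged sketch: DoG border extraction followed by Hough circle detection,
# returning the abundance of detected bubbles per radius bin.
import cv2
import numpy as np

def bubble_radius_histogram(gray_frame, min_r=3, max_r=60):
    """Detect circular bubbles on DoG-filtered borders and count them per radius."""
    gray = gray_frame.astype(np.float32)
    # High-pass the border reflections with a Difference of Gaussian.
    dog = cv2.GaussianBlur(gray, (3, 3), 1) - cv2.GaussianBlur(gray, (9, 9), 3)
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Circle detection on the extracted borders (placeholder parameters).
    circles = cv2.HoughCircles(dog, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                               param1=100, param2=18,
                               minRadius=min_r, maxRadius=max_r)
    radii = [] if circles is None else circles[0][:, 2]
    return np.histogram(radii, bins=np.arange(min_r, max_r + 2))[0]
```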
Informatics in Medicine Unlocked: Segmentation and region quantification of bubbles in small bowel capsule endoscopy images using wavelet transform
ABSTRACT
Objective: A large number of frames captured by wireless capsule endoscopy are contaminated with different amounts of bubbles. Bubbles can degrade the visualization quality of the small intestine mucosa. The aim of this study is to develop an objective method for evaluating the amount of bubbles in WCE images.
Methods: Frames with varying levels of bubble occlusion were selected from the Kvasir capsule endoscopy dataset. Round-shaped bubbles have an edge at their boundaries, and edges in the spatial domain correspond to high-frequency bands in the frequency domain. Two automated edge detection approaches have been developed in a rule-based manner and evaluated to assess the amount of bubbles. The first approach involves high-pass filtering using the fast Fourier transform (FFT), while the second is based on wavelet image decomposition and reconstruction with the approximation coefficient subband omitted.
Results: Both the Fourier and wavelet transforms obtained approximately the same dice similarity score (DSC) and precision, equal to 0.87 and 0.91, respectively. Based on the specificity measure, the FFT outperformed the Hough and wavelet transforms. However, the wavelet transform obtained a higher DSC (0.93), accuracy (0.95), and sensitivity (0.97) and was the fastest, with an execution time of 0.01 s per frame, making it suitable for real-time applications.
Conclusion: The proposed technique provides an easy-to-implement method for quality reporting or objective comparison of different bowel preparation paradigms in real-time applications due to its fast execution time. The results obtained from two different datasets proved that the presented method has good generalization.
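A minimal sketch of the wavelet-based approach described in the Methods is given below, assuming a grayscale input frame and placeholder wavelet, level, and threshold choices; it is an illustration, not the authors' implementation.

```python
# Sketch of the second approach: decompose, zero the approximation subband,
# reconstruct an edge-only image, and threshold it into a bubble mask.
import numpy as np
import pywt

def bubble_mask_wavelet(gray_frame, wavelet="db2", level=1, thresh=30):
    coeffs = pywt.wavedec2(gray_frame.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])            # omit approximation subband
    detail_img = pywt.waverec2(coeffs, wavelet)     # high-frequency reconstruction
    detail_img = detail_img[:gray_frame.shape[0], :gray_frame.shape[1]]
    return (np.abs(detail_img) > thresh).astype(np.uint8)

def bubble_area_ratio(mask):
    """Fraction of the frame occupied by the detected bubble regions."""
    return float(mask.mean())
```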
Fatemeh Nazem1,2, Fahimeh Ghasemi2 , Afshin Fassihi3 , Reza Rasti4 , Alireza Mehri Dehnavi1
Abstract
Background: The first step in developing new drugs is to find binding sites in a protein structure that can be used as a starting point to design new antagonists and inhibitors. Methods relying on convolutional neural networks for the prediction of binding sites have attracted much attention. This study focuses on the use of an optimized neural network for three-dimensional (3D) non-Euclidean data.
Methods: A graph constructed from the 3D protein structure is fed to the proposed GU-Net model, which is based on graph convolutional operations. The features of each atom are considered as attributes of the corresponding node. The results of the proposed GU-Net are compared with a classifier based on random forest (RF), for which a new data representation is used as input.
Results: The performance of our model is also examined through extensive experiments on various datasets from other sources. GU-Net could predict a larger number of pockets with accurate shapes than RF.
Conclusions: This study will enable future work on better modeling of protein structures, which will enhance knowledge of proteomics and offer deeper insight into the drug design process.
Keywords: Graph convolutional neural network, point cloud semantic segmentation, protein–ligand‑binding sites, three‑dimensional U‑Net model
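To illustrate the graph convolutional operation on atom nodes mentioned above, a single GCN layer is sketched below in PyTorch; the GU-Net architecture itself is not reproduced, and the adjacency construction (e.g., a distance cutoff between atoms) is an assumption.

```python
# Illustrative single graph-convolution layer over atom nodes; not GU-Net itself.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, node_feats, adj):
        # node_feats: (N, F) atom attributes; adj: (N, N) atom adjacency
        # (assumed built from a distance cutoff in the 3D structure).
        a_hat = adj + torch.eye(adj.size(0))                # add self-loops
        deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees
        return torch.relu(self.linear(a_hat @ node_feats / deg))
```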