Image Analysis and Stereology
https://ias-iss.org/ojs/IAS
<p>Image Analysis and Stereology is the official journal of the <a href="http://www.issia.net/">International Society for Stereology & Image Analysis</a>. It promotes the exchange of scientific, technical, organizational, and other information on the quantitative analysis of data having a geometrical structure, including stereology, differential geometry, image analysis, image processing, mathematical morphology, stochastic geometry, statistics, pattern recognition, and related topics. The fields of application are not restricted and range from biomedicine, materials sciences, and physics to geology and geography. Image Analysis & Stereology is a continuation of <a href="https://popups.ulg.ac.be/0351-580x/index.php?page=presentation">Acta Stereologica</a>.</p> <p>This journal is regularly indexed or abstracted in: <a title="Cabell Publishing" href="http://www.cabells.com" target="_blank" rel="noopener">Cabell Publishing</a>, <a href="http://www.cas.org/" target="_blank" rel="noopener">Chemical Abstracts</a>, <a href="https://jcr.clarivate.com/" target="_blank" rel="noopener">Clarivate Journal Citation Reports<sup>TM</sup> (JCR<sup>TM</sup>)</a>, <a href="https://www.webofscience.com/" target="_blank" rel="noopener">Clarivate Web of Science<sup>TM</sup></a>, <a href="https://mathscinet.ams.org/cis" target="_blank" rel="noopener">Current Index to Statistics (CIS)</a>, <a href="http://www.doaj.org/" target="_blank" rel="noopener">DOAJ</a>, <a href="http://www.ebscohost.com/" target="_blank" rel="noopener">EBSCO</a>, <a href="http://www.theiet.org/resources/inspec/" target="_blank" rel="noopener">INSPEC</a>, <a href="http://www.ams.org/mr-database" target="_blank" rel="noopener">Mathematical Reviews</a>, <a href="https://mathscinet.ams.org/mathscinet/publications-search">MathSciNet</a>, <a href="http://search.proquest.com/metadex">METADEX</a>, <a href="http://en.wikipedia.org/wiki/Referativny_Zhurnal">Referativnyj Zhurnal</a>, <a 
href="https://clarivate.com/products/scientific-and-academic-research/research-discovery-and-workflow-solutions/webofscience-platform/web-of-science-core-collection/science-citation-index-expanded/" target="_blank" rel="noopener">Science Citation Index Expanded (SciSearch®)</a>, <a href="https://www.scopus.com/sourceid/11700154323" target="_blank" rel="noopener">SCOPUS</a>, and <a href="http://www.emis.de/ZMATH/" target="_blank" rel="noopener">Zentralblatt MATH</a>.</p> <p>The journal is financed by the <a href="https://www.aris-rs.si/en/" target="_blank" rel="noopener">Slovenian Research and Innovation Agency</a>.</p> <p>Image Analysis & Stereology publishes <a title="Abstracts in Slovenian" href="https://www.ias-iss.org/ojs/IAS/slovenian" target="_blank" rel="noopener">abstracts in Slovenian</a>.</p> <p>The JCR Impact Factor (SCI) for 2022 is <strong>0.900</strong>.</p> <p><a href="https://www.scopus.com/sourceid/11700154323" target="_blank" rel="noopener">Scopus journal metrics</a>:</p> <ul> <li>CiteScore 2022: <strong>2.0</strong></li> <li>SJR 2022: <strong>0.196</strong></li> <li>SNIP 2022: <strong>0.404</strong></li> </ul>
Publisher: Slovenian Society for Stereology and Quantitative Image Analysis · Language: en-US · ISSN: 1580-3139

A Completed Multiple Threshold Encoding Pattern for Texture Classification
https://ias-iss.org/ojs/IAS/article/view/2824
<p>The binary pattern family has drawn wide attention for texture representation due to its promising performance and simple operation. However, most binary pattern methods focus on local neighborhoods and ignore center pixels. Although some studies introduce center-based sub-patterns to provide complementary information, existing center-based sub-patterns are much weaker than the local neighborhood-based sub-patterns. This severe imbalance significantly limits the classification performance of the fused features. To alleviate this problem, this paper designs a multiple threshold center pattern (MTCP) that provides a more discriminative and complementary local texture representation in a compact form. First, a multiple threshold encoding strategy is designed to encode the center pixel, generating three 1-bit binary patterns. Second, a compact multi-pattern encoding strategy combines them into a 3-bit MTCP. Furthermore, this paper proposes a completed multiple threshold encoding pattern by fusing the MTCP, the local sign pattern, and the local magnitude pattern. Comprehensive experimental evaluations on three popular texture classification benchmarks confirm that the completed multiple threshold encoding pattern achieves superior texture classification performance.</p>
Bin Li, Yibing Li, Q. M. Jonathan Wu
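The center-pixel encoding described in the abstract can be sketched as follows: each center pixel is compared against three thresholds, and the three resulting 1-bit patterns are packed into one 3-bit code. This is a minimal NumPy illustration; the specific threshold choice (the global mean shifted by ±delta standard deviations) and the histogram feature are assumptions, since the abstract does not state the paper's exact definitions.

```python
import numpy as np

def mtcp_codes(img, delta=0.1):
    """Sketch of a multiple-threshold center pattern (MTCP).

    Each center pixel is compared against three thresholds, giving three
    1-bit patterns that are packed into one 3-bit code (values 0..7).
    The threshold choice (global mean shifted by +/- delta * std) is an
    illustrative assumption, not the paper's exact definition.
    """
    img = np.asarray(img, dtype=np.float64)
    mu, sigma = img.mean(), img.std()
    thresholds = (mu - delta * sigma, mu, mu + delta * sigma)
    bits = [(img >= t).astype(np.uint8) for t in thresholds]
    # Pack the three 1-bit patterns into one compact 3-bit code.
    return (bits[0] << 2) | (bits[1] << 1) | bits[2]

def mtcp_histogram(img, delta=0.1):
    """Normalized 8-bin histogram of MTCP codes, usable as a texture feature."""
    codes = mtcp_codes(img, delta)
    hist = np.bincount(codes.ravel(), minlength=8).astype(np.float64)
    return hist / hist.sum()
```

In the paper this code would then be fused with neighborhood-based sign and magnitude patterns; the histogram above only illustrates the center branch.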
Copyright (c) 2023 Bin Li, Yibing Li, Q. M. Jonathan Wu
https://creativecommons.org/licenses/by/4.0
2023-10-22 · Vol. 42, No. 3, pp. 145–159 · doi:10.5566/ias.2824

Sample-Balanced and IoU-Guided Anchor-Free Visual Tracking
https://ias-iss.org/ojs/IAS/article/view/2929
<p>Siamese network-based visual tracking algorithms have achieved excellent performance in recent years, but challenges such as fast target motion and shape and scale variations make tracking extremely difficult. Anchor-free regression has low computational complexity and strong real-time performance, and is well suited to visual tracking. Building on the anchor-free Siamese tracking framework, this paper first introduces balance factors and modulation coefficients into the cross-entropy loss function to address the classification inaccuracy caused by the imbalance between positive and negative samples, as well as between hard and easy samples, during training, so that the model focuses on the positive and hard samples that contribute most to learning. Second, the intersection over union (IoU) loss function of the regression branch is improved: it considers not only the IoU between the predicted box and the ground-truth box, but also the aspect ratios of the two boxes and the area of the minimum bounding box that encloses both, which guides the generation of more accurate regression offsets. The overall classification and regression loss is iteratively minimized, improving the accuracy and robustness of visual tracking. Experiments on four public datasets, OTB2015, VOT2016, UAV123 and GOT-10k, show that the proposed algorithm achieves state-of-the-art performance.</p>
Jueyu Zhu, Yu Qin, Kai Wang, Zhigao Zeng
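The two loss modifications described above can be sketched as follows: a cross-entropy with a balance factor and a modulation coefficient (in the style of the focal loss), and an IoU loss augmented with a center-distance term over the minimum enclosing box and an aspect-ratio penalty (in the style of CIoU). The default values and the exact penalty forms are illustrative assumptions, not the paper's tuned formulation.

```python
import numpy as np

def balanced_focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Cross-entropy with a balance factor (alpha) for positive/negative
    samples and a modulation coefficient (gamma) that down-weights easy
    samples, in the style of the focal loss. alpha/gamma are illustrative
    defaults, not the paper's settings."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # positive/negative balance
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

def ciou_loss(box_p, box_g):
    """IoU loss extended with a center-distance penalty over the minimum
    enclosing box and an aspect-ratio consistency term (CIoU-style sketch).
    Boxes are (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # Intersection and union areas.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # Squared center distance normalized by the enclosing-box diagonal.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    dist = rho2 / (cw ** 2 + ch ** 2)
    # Aspect-ratio consistency term.
    v = (4 / np.pi ** 2) * (np.arctan((gx2 - gx1) / (gy2 - gy1))
                            - np.arctan((px2 - px1) / (py2 - py1))) ** 2
    a = v / (1 - iou + v + 1e-7)
    return 1 - iou + dist + a * v
```

A perfectly predicted box yields a loss of zero, while boxes with no overlap are still penalized through the center-distance term, which is the practical advantage over a plain IoU loss.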
Copyright (c) 2023 Jueyu Zhu, Yu Qin, Kai Wang, Zhigao Zeng
https://creativecommons.org/licenses/by/4.0
2023-11-01 · Vol. 42, No. 3, pp. 161–170 · doi:10.5566/ias.2929

Existence and Approximation of Densities of Chord Length- and Cross Section Area Distributions
https://ias-iss.org/ojs/IAS/article/view/2923
<p>In various stereological problems, an <em>n</em>-dimensional convex body is intersected with an (<em>n</em>−1)-dimensional Isotropic Uniformly Random (IUR) hyperplane. In this paper the cumulative distribution function associated with the (<em>n</em>−1)-dimensional volume of such a random section is studied. This distribution is also known as the chord length distribution and the cross section area distribution in the planar and spatial cases, respectively. For various classes of convex bodies it is shown that these distribution functions are absolutely continuous with respect to Lebesgue measure. A Monte Carlo simulation scheme is proposed for approximating the corresponding probability density functions.</p>
Thomas van der Jagt, Geurt Jongbloed, Martina Vittorietti
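The Monte Carlo approach can be illustrated in the simplest planar case: sampling chord lengths of a disk hit by IUR lines. The disk is purely a stand-in convex body chosen because its chord length distribution is known in closed form; by symmetry, an IUR line hitting a disk of radius R is characterized by its distance from the center, uniform on [0, R].

```python
import numpy as np

def disk_chord_lengths(n, radius=1.0, rng=None):
    """Monte Carlo sample of chord lengths of a disk hit by IUR lines.

    An isotropic uniformly random line hitting a disk of radius R can be
    parameterized by its distance d from the center, uniform on [0, R]
    (the direction is irrelevant by symmetry). The chord length is then
    2 * sqrt(R^2 - d^2). The disk is used purely as an example body.
    """
    rng = np.random.default_rng(rng)
    d = rng.uniform(0.0, radius, size=n)
    return 2.0 * np.sqrt(radius ** 2 - d ** 2)

def empirical_cdf(samples, x):
    """Empirical CDF of the sampled chord length distribution at points x."""
    samples = np.sort(samples)
    return np.searchsorted(samples, x, side="right") / len(samples)
```

A density estimate can then be obtained from the samples, for instance by a histogram or kernel smoothing, which is the spirit of the approximation scheme the paper proposes for general convex bodies.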
Copyright (c) 2023 Thomas van der Jagt, Geurt Jongbloed, Martina Vittorietti
https://creativecommons.org/licenses/by/4.0
2023-10-27 · Vol. 42, No. 3, pp. 171–184 · doi:10.5566/ias.2923

Improvement Procedure for Image Segmentation of Fruits and Vegetables Based on the Otsu Method
https://ias-iss.org/ojs/IAS/article/view/2939
<p>Currently, there are significant challenges in the classification, recognition, and detection of fruits and vegetables. An important step in solving this problem is to obtain an accurate segmentation of the object of interest. However, separating background and object in a grayscale image yields high errors for some thresholding techniques under uneven or poorly conditioned lighting. An accepted strategy to reduce segmentation errors is to select the channel of an RGB image with the highest contrast. This paper presents the results of an experimental procedure for enhancing binary segmentation based on the Otsu method. The procedure was carried out on images of real agricultural products, both with and without additional noise, to corroborate the robustness of the proposed strategy. The experimental tests were performed on our database of RGB images of agricultural products under uncontrolled illumination. The results show that the best segmentation is achieved by selecting the Blue channel of the RGB test images, due to its higher contrast. The quantitative results are measured by applying the Jaccard and Dice metrics, with ground-truth images as the reference. Most of the results under both metrics show an improvement greater than 45.5% in the two experimental tests.</p>
Osbaldo Vite-Chávez, Jorge Flores-Troncoso, Reynel Olivera-Reyna, Jorge Ulises Munoz-Minjares
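The pipeline in the abstract (an Otsu threshold on the Blue channel, then Jaccard and Dice scores against ground truth) can be sketched in pure NumPy. The function names and the assumption of an 8-bit RGB array of shape (H, W, 3) are illustrative, not the paper's implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit grayscale image: pick the threshold that
    maximizes the between-class variance of the fore/background split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # cumulative class probability
    mu = np.cumsum(p * np.arange(256))     # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def segment_blue_channel(rgb):
    """Threshold the Blue channel (index 2), following the paper's finding
    that it gave the highest contrast on their images."""
    blue = rgb[..., 2]
    return blue > otsu_threshold(blue)

def jaccard(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())
```

Both metrics compare a binary segmentation against the ground-truth mask and equal 1 only for a perfect match, which is why the paper uses them to quantify the channel-selection improvement.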
Copyright (c) 2023 Osbaldo Vite-Chávez, Jorge Flores-Troncoso, Reynel Olivera-Reyna, Jorge Ulises Munoz-Minjares
https://creativecommons.org/licenses/by/4.0
2023-10-26 · Vol. 42, No. 3, pp. 185–196 · doi:10.5566/ias.2939

PU-NET Deep Learning Architecture for Gliomas Brain Tumor Segmentation in Magnetic Resonance Images
https://ias-iss.org/ojs/IAS/article/view/2879
<p>Automatic medical image segmentation is one of the main tasks in delineating many organs and pathological structures. It is also a crucial technique in the subsequent clinical management of brain tumors, such as radiotherapy planning or tumor resection. Various image segmentation techniques have been proposed and applied to different image types. Recently, it has been shown that deep learning approaches segment images accurately, and their implementation is usually straightforward. In this paper, we propose a novel approach, called PU-NET, for automatic brain tumor segmentation in multi-modal magnetic resonance images (MRI). We introduce an input processing block into a customized fully convolutional network derived from the U-Net architecture to handle the multi-modal inputs. We performed experiments on the Brain Tumor Segmentation (BRATS) dataset collected in 2018 and achieved Dice scores of 90.5%, 82.7%, and 80.3% for the whole tumor, tumor core, and enhancing tumor classes, respectively. This study provides promising results compared to the deep learning methods used in this context.</p>
Yamina Azzi, Abdelouahab Moussaoui, Mohand-Tahar Kechadi
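The per-class Dice scores reported above can be computed with a short helper over predicted and ground-truth label maps. The convention of defining Dice as 1 when a class is absent from both maps is a common choice, not necessarily the paper's; the integer class labels are likewise illustrative.

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for one class label between a predicted and a
    ground-truth label map: 2|P ∩ T| / (|P| + |T|). This is the metric
    behind per-class scores such as whole tumor, tumor core, and
    enhancing tumor."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        # Class absent in both maps: define Dice as a perfect 1.0.
        return 1.0
    return 2.0 * (p & t).sum() / denom
```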
Copyright (c) 2023 Yamina Azzi
https://creativecommons.org/licenses/by/4.0
2024-01-07 · Vol. 42, No. 3, pp. 197–206 · doi:10.5566/ias.2879