

Image processing articles within Scientific Reports

Article 29 August 2024 | Open Access

Adaptive condition-aware high-dimensional decoupling remote sensing image object detection algorithm

  • Chenshuai Bai
  • , Xiaofeng Bai
  •  &  Yuanjie Ye

Article 28 August 2024 | Open Access

Deep learning-assisted segmentation of X-ray images for rapid and accurate assessment of foot arch morphology and plantar soft tissue thickness

  • , Tianhong Ru
  •  &  Ran Huang

A mixed Mamba U-net for prostate segmentation in MR images

  • , Luowu Wang
  •  &  Hao Chen

Article 26 August 2024 | Open Access

Multiwell-based G0-PCC assay for radiation biodosimetry

  • Ekaterina Royba
  • , Igor Shuryak
  •  &  David J. Brenner

Article 24 August 2024 | Open Access

Performance enhancement of deep learning based solutions for pharyngeal airway space segmentation on MRI scans

  • Chattapatr Leeraha
  • , Worapan Kusakunniran
  •  &  Thanongchai Siriapisith

Article 23 August 2024 | Open Access

Machine learning approaches to detect hepatocyte chromatin alterations from iron oxide nanoparticle exposure

  • Jovana Paunovic Pantic
  • , Danijela Vucevic
  •  &  Igor Pantic

Article 21 August 2024 | Open Access

An efficient segment anything model for the segmentation of medical images

  • Guanliang Dong
  • , Zhangquan Wang
  •  &  Haidong Cui

Article 20 August 2024 | Open Access

A novel approach for automatic classification of macular degeneration OCT images

  • Shilong Pang
  • , Beiji Zou
  •  &  Kejuan Yue

Article 18 August 2024 | Open Access

Subject-specific atlas for automatic brain tissue segmentation of neonatal magnetic resonance images

  • Negar Noorizadeh
  • , Kamran Kazemi
  •  &  Ardalan Aarabi

Article 17 August 2024 | Open Access

Three layered sparse dictionary learning algorithm for enhancing the subject wise segregation of brain networks

  • Muhammad Usman Khalid
  • , Malik Muhammad Nauman
  •  &  Kamran Ali

Article 14 August 2024 | Open Access

Development and performance evaluation of fully automated deep learning-based models for myocardial segmentation on T1 mapping MRI data

  • Mathias Manzke
  • , Simon Iseke
  •  &  Felix G. Meinel

Article 13 August 2024 | Open Access

Haemodynamic study of left nonthrombotic iliac vein lesions: a preliminary report

  • , Qijia Liu
  •  &  Xuan Li

Cross-modality sub-image retrieval using contrastive multimodal image representations

  • Eva Breznik
  • , Elisabeth Wetzer
  •  &  Nataša Sladoje

Article 11 August 2024 | Open Access

Effective descriptor extraction strategies for correspondence matching in coronary angiography images

  • Hyun-Woo Kim
  • , Soon-Cheol Noh
  •  &  Si-Hyuck Kang

Article 10 August 2024 | Open Access

Lightweight safflower cluster detection based on YOLOv5

  • , Tianlun Wu
  •  &  Haiyang Chen

Article 08 August 2024 | Open Access

Primiparous sow behaviour on the day of farrowing as one of the primary contributors to the growth of piglets in early lactation

  • Océane Girardie
  • , Denis Laloë
  •  &  Laurianne Canario

High-throughput image processing software for the study of nuclear architecture and gene expression

  • Adib Keikhosravi
  • , Faisal Almansour
  •  &  Gianluca Pegoraro

Article 07 August 2024 | Open Access

Puzzle: taking livestock tracking to the next level

  • Jehan-Antoine Vayssade
  •  &  Mathieu Bonneau

Article 02 August 2024 | Open Access

The impact of fine-tuning paradigms on unknown plant diseases recognition

  • Jiuqing Dong
  • , Alvaro Fuentes
  •  &  Dong Sun Park

Article 01 August 2024 | Open Access

AI-enhanced real-time cattle identification system through tracking across various environments

  • Su Larb Mon
  • , Tsubasa Onizuka
  •  &  Thi Thi Zin

Article 31 July 2024 | Open Access

Study on lung CT image segmentation algorithm based on threshold-gradient combination and improved convex hull method

  • Junbao Zheng
  • , Lixian Wang
  •  &  Abdulla Hamad Yussuf

Article 30 July 2024 | Open Access

A multibranch and multiscale neural network based on semantic perception for multimodal medical image fusion

  • , Yinjie Chen
  •  &  Mengxing Huang

Article 26 July 2024 | Open Access

Detection of diffusely abnormal white matter in multiple sclerosis on multiparametric brain MRI using semi-supervised deep learning

  • Benjamin C. Musall
  • , Refaat E. Gabr
  •  &  Khader M. Hasan

The integrity of the corticospinal tract and corpus callosum, and the risk of ALS: univariable and multivariable Mendelian randomization

  • , Gan Zhang
  •  &  Dongsheng Fan

Article 23 July 2024 | Open Access

Accelerating photoacoustic microscopy by reconstructing undersampled images using diffusion models

  •  &  M. Burcin Unlu

Article 20 July 2024 | Open Access

Automated segmentation of the median nerve in patients with carpal tunnel syndrome

  • Florentin Moser
  • , Sébastien Muller
  •  &  Mari Hoff

Article 18 July 2024 | Open Access

Estimating infant age from skull X-ray images using deep learning

  • Heui Seung Lee
  • , Jaewoong Kang
  •  &  Bum-Joo Cho

Article 17 July 2024 | Open Access

Finite element models with automatic computed tomography bone segmentation for failure load computation

  • Emile Saillard
  • , Marc Gardegaront
  •  &  Hélène Follet

Article 16 July 2024 | Open Access

Deep learning pose detection model for sow locomotion

  • Tauana Maria Carlos Guimarães de Paula
  • , Rafael Vieira de Sousa
  •  &  Adroaldo José Zanella

Article 15 July 2024 | Open Access

Deep learning application of vertebral compression fracture detection using mask R-CNN

  • Seungyoon Paik
  • , Jiwon Park
  •  &  Sung Won Han

Article 11 July 2024 | Open Access

Morphological classification of neurons based on Sugeno fuzzy integration and multi-classifier fusion

  • , Guanglian Li
  •  &  Haixing Song

Preoperative prediction of MGMT promoter methylation in glioblastoma based on multiregional and multi-sequence MRI radiomics analysis

  • , Feng Xiao
  •  &  Haibo Xu

Article 09 July 2024 | Open Access

Noninvasive, label-free image approaches to predict multimodal molecular markers in pluripotency assessment

  • Ryutaro Akiyoshi
  • , Takeshi Hase
  •  &  Ayako Yachie

Article 08 July 2024 | Open Access

A prospective multi-center study quantifying visual inattention in delirium using generative models of the visual processing stream

  • Ahmed Al-Hindawi
  • , Marcela Vizcaychipi
  •  &  Yiannis Demiris

Article 06 July 2024 | Open Access

Advancing common bean ( Phaseolus vulgaris L.) disease detection with YOLO driven deep learning to enhance agricultural AI

  • Daniela Gomez
  • , Michael Gomez Selvaraj
  •  &  Ernesto Espitia

Article 05 July 2024 | Open Access

Image processing based modeling for Rosa roxburghii fruits mass and volume estimation

  • Zhiping Xie
  • , Junhao Wang
  •  &  Manyu Sun

On leveraging self-supervised learning for accurate HCV genotyping

  • Ahmed M. Fahmy
  • , Muhammed S. Hammad
  •  &  Walid I. Al-atabany

A semantic feature enhanced YOLOv5-based network for polyp detection from colonoscopy images

  • Jing-Jing Wan
  • , Peng-Cheng Zhu
  •  &  Yong-Tao Yu

Article 03 July 2024 | Open Access

DSnet: a new dual-branch network for hippocampus subfield segmentation

  • , Wangang Cheng
  •  &  Guanghua He

Quantification of cardiac capillarization in basement-membrane-immunostained myocardial slices using Segment Anything Model

  • , Xiwen Chen
  •  &  Tong Ye

Article 02 July 2024 | Open Access

Matrix metalloproteinase 9 expression and glioblastoma survival prediction using machine learning on digital pathological images

  • , Yuan Yang
  •  &  Yunfei Zha

Article 01 July 2024 | Open Access

Generalized div-curl based regularization for physically constrained deformable image registration

  • Paris Tzitzimpasis
  • , Mario Ries
  •  &  Cornel Zachiu

Multi-branch CNN and grouping cascade attention for medical image classification

  • , Wenwen Yue
  •  &  Liejun Wang

Article 25 June 2024 | Open Access

Spatial control of perilacunar canalicular remodeling during lactation

  • Michael Sieverts
  • , Cristal Yee
  •  &  Claire Acevedo

Deep learning-based localization algorithms on fluorescence human brain 3D reconstruction: a comparative study using stereology as a reference

  • Curzio Checcucci
  • , Bridget Wicinski
  •  &  Paolo Frasconi

Article 24 June 2024 | Open Access

Tongue image fusion and analysis of thermal and visible images in diabetes mellitus using machine learning techniques

  • Usharani Thirunavukkarasu
  • , Snekhalatha Umapathy
  •  &  Tahani Jaser Alahmadi

Article 22 June 2024 | Open Access

YOLOv8-CML: a lightweight target detection method for color-changing melon ripening in intelligent agriculture

  • Guojun Chen
  • , Yongjie Hou
  •  &  Lei Cao

Article 21 June 2024 | Open Access

Performance evaluation of the digital morphology analyser Sysmex DI-60 for white blood cell differentials in abnormal samples

  • , Yingying Diao
  •  &  Hong Luan

Article 20 June 2024 | Open Access

Machine-learning-guided recognition of α and β cells from label-free infrared micrographs of living human islets of Langerhans

  • Fabio Azzarello
  • , Francesco Carli
  •  &  Francesco Cardarelli

Article 10 June 2024 | Open Access

Fast and robust feature-based stitching algorithm for microscopic images

  • Fatemeh Sadat Mohammadi
  • , Hasti Shabani
  •  &  Mojtaba Zarei


Research Papers on Image Processing Topics


Showing papers on "Image processing published in 2023"

IEEE Transactions on Image Processing

123  citations

Image Processing On Line

13  citations

SNIS: A Signal Noise Separation-Based Network for Post-Processed Image Forgery Detection

4  citations

Deep learning-based real-world object detection and improved anomaly detection for surveillance videos

3  citations

Noncontact Sensing Techniques for AI-Aided Structural Health Monitoring: A Systematic Review

Automatic Seat Identification System in Smart Transport Using IoT and Image Processing

Practical Application of Digital Image Processing in Measuring Concrete Crack Widths in Field Studies

2  citations

Saliency map in image visual quality assessment and processing

Integrated Diffusion Image Operator (IDIO): A Pipeline for Automated Configuration and Processing of Diffusion MRI Data

Development of Complete Image Processing System Including Image Filtering, Image Compression & Image Security

Android-Based Herpes Disease Detection Application Using Image Processing

Automated Invoice Data Extraction Using Image Processing

Efficient Object Detection and Classification Approach Using HTYOLOv4 and M2RFO-CNN

Deep and Low-Rank Quaternion Priors for Color Image Processing

Implementation of Automated Pipeline for Resting-State fMRI Analysis with PACS Integration

Stress Detection Using Machine Learning and Image Processing

Automated Extraction of Seed Morphological Traits from Images

Identification of Counterfeit Indian Currency Note Using Image Processing and Machine Learning Classifiers

Computer Vision on X-Ray Data in Industrial Production and Security Applications: A Comprehensive Survey

IoT Based Image Processing Filters

Comprehensive Automatic Processing and Analysis of Adaptive Optics Flood Illumination Retinal Images on Healthy Subjects

Research on Super-Resolution Image Based on Deep Learning

OCR-MRD: Performance Analysis of Different Optical Character Recognition Engines for Medical Report Digitization

Joint Graph Attention and Asymmetric Convolutional Neural Network for Deep Image Compression

Brain Tumor Diagnosis Using Image Fusion and Deep Learning

Brain Tumor Diagnosis Using Machine Learning: A Review

A Study of Air-Water Flow in a Narrow Rectangular Duct Using an Image Processing Technique

Deep Learning Using a Residual Deconvolutional Network Enables Real-Time High-Density Single-Molecule Localization Microscopy

Improved FRQI on Superconducting Processors and Its Restrictions in the NISQ Era

DarSIA: An Open-Source Python Toolbox for Two-Scale Image Processing of Dynamics in Porous Media

Digital Image Processing: Recently Published Documents


Developing Digital Photomicroscopy

(1) The need for efficient ways of recording and presenting multicolour immunohistochemistry images in a pioneering laboratory developing new techniques motivated a move away from photography to electronic and ultimately digital photomicroscopy. (2) Initially broadcast quality analogue cameras were used in the absence of practical digital cameras. This allowed the development of digital image processing, storage and presentation. (3) As early adopters of digital cameras, their advantages and limitations were recognised in implementation. (4) The adoption of immunofluorescence for multiprobe detection prompted further developments, particularly a critical approach to probe colocalization. (5) Subsequently, whole-slide scanning was implemented, greatly enhancing histology for diagnosis, research and teaching.

Parallel Algorithm of Digital Image Processing Based on GPU

Quantitative Identification of Cracks in Heritage Rock Based on Digital Image Technology

Abstract Digital image processing technologies are used in this paper to extract and evaluate the cracks of heritage rock. First, the image goes through a series of preprocessing operations such as graying, enhancement, filtering and binarization to remove a large part of the noise. Then, in order to accurately extract the crack area, the image is segmented into crack and non-crack regions and cleaned up with morphological filtering. After evaluation, the obtained crack area can provide data support for the restoration and protection of heritage rock. In this paper, the cracks of heritage rock are extracted at three different locations. The results show that the three groups of rock cracks affect the rock differently, but all of them need to be repaired to maintain the appearance of the heritage rock.
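
A minimal sketch of the kind of pipeline this abstract describes (graying, filtering, binarization, morphological filtering), assuming OpenCV and NumPy are available; the file name, kernel size and threshold choices are illustrative, not the authors' actual parameters.

```python
import cv2
import numpy as np

# Illustrative input path; replace with an actual rock-surface photograph.
image = cv2.imread("heritage_rock.jpg")

# 1. Graying: collapse the colour image to a single luminance channel.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 2. Filtering: smooth out high-frequency noise before thresholding.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Binarization: Otsu's method separates dark cracks from the lighter background.
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 4. Morphological filtering: remove small speckles, keep elongated crack regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
cracks = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Crack area in pixels, usable for quantitative evaluation.
print("Crack area (pixels):", int(np.count_nonzero(cracks)))
```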

Determination of Optical Rotation Based on Liquid Crystal Polymer Vortex Retarder and Digital Image Processing

Discussion on Curriculum Reform of Digital Image Processing under the Certification of Engineering Education

Influence and Application of Digital Image Processing Technology on Oil Painting Creation in the Era of Big Data

Geometric Correction Analysis of Highly Distorted Near-Equatorial Satellite Images Using Remote Sensing and Digital Image Processing Techniques

Color Enhancement of Low Illumination Garden Landscape Images

The unfavorable shooting environment severely hinders the acquisition of actual landscape information in garden landscape design. Low quality, low illumination garden landscape images (GLIs) can be enhanced through advanced digital image processing. However, current color enhancement models have poor applicability: when the environment changes, they tend to lose image details and show low robustness. Therefore, this paper aims to enhance the color of low illumination GLIs. Specifically, the color restoration of GLIs was realized based on a modified dynamic threshold. After color correction, the low illumination GLIs were restored and enhanced by a self-designed convolutional neural network (CNN). In this way, the authors achieved the desired effects of color restoration and clarity enhancement, while avoiding the difficulty of manual feature design in landscape design renderings. Finally, experiments were carried out to verify the feasibility and effectiveness of the proposed image color enhancement approach.

Discovery of EDA-Complex Photocatalyzed Reactions Using Multidimensional Image Processing: Iminophosphorane Synthesis as a Case Study

Abstract Herein, we report a multidimensional screening strategy for the discovery of EDA-complex photocatalyzed reactions using only photographic devices (webcam, cellphone) and TLC analysis. An algorithm was designed to automatically identify EDA-complex reactive mixtures in solution from digital image processing of a 96-well microplate and of the TLC analysis. The code highlights the region of absorption of the mixture in the visible spectrum and quantifies the color change through grayscale values. Furthermore, the code automatically identifies the blurs on the TLC plate and classifies the mixtures as colorimetric reactions, non-reactive mixtures, or potentially reactive EDA mixtures. This strategy allowed us to discover and then optimize a new EDA-mediated approach for obtaining iminophosphoranes in up to 90% yield.

Mangosteen Quality Grading for Export Markets Using Digital Image Processing Techniques


SPECIALTY GRAND CHALLENGE article

Grand Challenges in Image Processing

Frédéric Dufaux

  • Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et Systèmes, Gif-sur-Yvette, France

Introduction

The field of image processing has been the subject of intensive research and development activities for several decades. This broad area encompasses topics such as image/video processing, image/video analysis, image/video communications, image/video sensing, modeling and representation, computational imaging, electronic imaging, information forensics and security, 3D imaging, medical imaging, and machine learning applied to these respective topics. Hereafter, we will consider both image and video content (i.e. sequence of images), and more generally all forms of visual information.

Rapid technological advances, especially in terms of computing power and network transmission bandwidth, have resulted in many remarkable and successful applications. Nowadays, images are ubiquitous in our daily life. Entertainment is one class of applications that has greatly benefited, including digital TV (e.g., broadcast, cable, and satellite TV), Internet video streaming, digital cinema, and video games. Beyond entertainment, imaging technologies are central in many other applications, including digital photography, video conferencing, video monitoring and surveillance, satellite imaging, but also in more distant domains such as healthcare and medicine, distance learning, digital archiving, cultural heritage or the automotive industry.

In this paper, we highlight a few research grand challenges for future imaging and video systems, in order to achieve breakthroughs to meet the growing expectations of end users. Given the vastness of the field, this list is by no means exhaustive.

A Brief Historical Perspective

We first briefly discuss a few key milestones in the field of image processing. Key inventions in the development of photography and motion pictures can be traced to the 19th century. The earliest surviving photograph of a real-world scene was made by Nicéphore Niépce in 1827 ( Hirsch, 1999 ). The Lumière brothers made the first cinematographic film in 1895, with a public screening the same year ( Lumiere, 1996 ). After decades of remarkable developments, the second half of the 20th century saw the emergence of new technologies launching the digital revolution. While the first prototype digital camera using a Charge-Coupled Device (CCD) was demonstrated in 1975, the first commercial consumer digital cameras started appearing in the early 1990s. These digital cameras quickly surpassed cameras using films and the digital revolution in the field of imaging was underway. As a key consequence, the digital process enabled computational imaging, in other words the use of sophisticated processing algorithms in order to produce high quality images.

In 1992, the Joint Photographic Experts Group (JPEG) released the JPEG standard for still image coding ( Wallace, 1992 ). In parallel, in 1993, the Moving Picture Experts Group (MPEG) published its first standard for coding of moving pictures and associated audio, MPEG-1 ( Le Gall, 1991 ), and a few years later MPEG-2 ( Haskell et al., 1996 ). By guaranteeing interoperability, these standards have been essential in many successful applications and services, for both the consumer and business markets. In particular, it is remarkable that, almost 30 years later, JPEG remains the dominant format for still images and photographs.

In the late 2000s and early 2010s, we could observe a paradigm shift with the appearance of smartphones integrating a camera. Thanks to advances in computational photography, these new smartphones soon became capable of rivaling the quality of consumer digital cameras at the time. Moreover, these smartphones were also capable of acquiring video sequences. Almost concurrently, another key evolution was the development of high bandwidth networks. In particular, the launch of 4G wireless services circa 2010 enabled users to quickly and efficiently exchange multimedia content. From this point, most of us are carrying a camera, anywhere and anytime, allowing to capture images and videos at will and to seamlessly exchange them with our contacts.

As a direct consequence of the above developments, we are currently observing a boom in the usage of multimedia content. It is estimated that today 3.2 billion images are shared each day on social media platforms, and 300 h of video are uploaded every minute on YouTube 1 . In a 2019 report, Cisco estimated that video content represented 75% of all Internet traffic in 2017, and this share is forecasted to grow to 82% in 2022 ( Cisco, 2019 ). While Internet video streaming and Over-The-Top (OTT) media services account for a significant bulk of this traffic, other applications are also expected to see significant increases, including video surveillance and Virtual Reality (VR)/Augmented Reality (AR).

Hyper-Realistic and Immersive Imaging

A major direction and key driver to research and development activities over the years has been the objective to deliver an ever-improving image quality and user experience.

For instance, in the realm of video, we have observed constantly increasing spatial and temporal resolutions, with the emergence nowadays of Ultra High Definition (UHD). Another aim has been to provide a sense of the depth in the scene. For this purpose, various 3D video representations have been explored, including stereoscopic 3D and multi-view ( Dufaux et al., 2013 ).

In this context, the ultimate goal is to be able to faithfully represent the physical world and to deliver an immersive and perceptually hyperrealist experience. For this purpose, we discuss hereafter some emerging innovations. These developments are also very relevant in VR and AR applications ( Slater, 2014 ). Finally, while this paper focuses only on the visual information processing aspects, it is obvious that emerging display technologies ( Masia et al., 2013 ) and audio also play key roles in many application scenarios.

Light Fields, Point Clouds, Volumetric Imaging

In order to wholly represent a scene, the light information coming from all the directions has to be represented. For this purpose, the 7D plenoptic function is a key concept ( Adelson and Bergen, 1991 ), although it is unmanageable in practice.

By introducing additional constraints, the light field representation collects radiance from rays in all directions. Therefore, it contains a much richer information, when compared to traditional 2D imaging that captures a 2D projection of the light in the scene integrating the angular domain. For instance, this allows post-capture processing such as refocusing and changing the viewpoint. However, it also entails several technical challenges, in terms of acquisition and calibration, as well as computational image processing steps including depth estimation, super-resolution, compression and image synthesis ( Ihrke et al., 2016 ; Wu et al., 2017 ). The resolution trade-off between spatial and angular resolutions is a fundamental issue. With a significant fraction of the earlier work focusing on static light fields, it is also expected that dynamic light field videos will stimulate more interest in the future. In particular, dense multi-camera arrays are becoming more tractable. Finally, the development of efficient light field compression and streaming techniques is a key enabler in many applications ( Conti et al., 2020 ).
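
As a concrete example of the post-capture refocusing mentioned above, here is a minimal shift-and-add refocusing sketch over a 4D light field stored as a NumPy array; the (U, V, H, W) layout, integer-pixel shifts and random test data are simplifying assumptions, not a full light field pipeline.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) holding the sub-aperture views.
    alpha: relative focal plane; its sign and magnitude move the focus in depth.
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its angular offset from the centre view.
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy example: a random 5x5 grid of 64x64 grayscale sub-aperture images.
lf = np.random.rand(5, 5, 64, 64)
refocused = refocus(lf, alpha=1.5)
```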

Another promising direction is to consider a point cloud representation. A point cloud is a set of points in the 3D space represented by their spatial coordinates and additional attributes, including color pixel values, normals, or reflectance. They are often very large, easily ranging in the millions of points, and are typically sparse. One major distinguishing feature of point clouds is that, unlike images, they do not have a regular structure, calling for new algorithms. To remove the noise often present in acquired data, while preserving the intrinsic characteristics, effective 3D point cloud filtering approaches are needed ( Han et al., 2017 ). It is also important to develop efficient techniques for Point Cloud Compression (PCC). For this purpose, MPEG is developing two standards: Geometry-based PCC (G-PCC) and Video-based PCC (V-PCC) ( Graziosi et al., 2020 ). G-PCC considers the point cloud in its native form and compresses it using 3D data structures such as octrees. Conversely, V-PCC projects the point cloud onto 2D planes and then applies existing video coding schemes. More recently, deep learning-based approaches for PCC have been shown to be effective ( Guarda et al., 2020 ). Another challenge is to develop generic and robust solutions able to handle potentially widely varying characteristics of point clouds, e.g. in terms of size and non-uniform density. Efficient solutions for dynamic point clouds are also needed. Finally, while many techniques focus on the geometric information or the attributes independently, it is paramount to process them jointly.
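
To make the voxel/octree-style handling of point clouds more tangible, here is a minimal voxel-grid downsampling sketch in NumPy that averages the points falling into each voxel (one simple form of filtering and data reduction); the voxel size and synthetic cloud are arbitrary, and real codecs such as G-PCC are far more elaborate.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by averaging all points that fall in the same voxel.

    points: (N, 3) array of x, y, z coordinates (attributes could be pooled the same way).
    voxel_size: edge length of the cubic voxels (the leaves of an implicit octree).
    """
    # Quantize coordinates onto a regular 3D grid.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel key and average them.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.random.rand(100_000, 3)            # synthetic, non-uniformly sampled cloud
sparse = voxel_downsample(cloud, voxel_size=0.05)
print(cloud.shape, "->", sparse.shape)
```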

High Dynamic Range and Wide Color Gamut

The human visual system is able to perceive, using various adaptation mechanisms, a broad range of luminous intensities, from very bright to very dark, as experienced every day in the real world. Nonetheless, current imaging technologies are still limited in terms of capturing or rendering such a wide range of conditions. High Dynamic Range (HDR) imaging aims at addressing this issue. Wide Color Gamut (WCG) is also often associated with HDR in order to provide a wider colorimetry.

HDR has reached some level of maturity in the context of photography. However, extending HDR to video sequences raises scientific challenges in order to provide high quality and cost-effective solutions, impacting the whole imaging processing pipeline, including content acquisition, tone reproduction, color management, coding, and display ( Dufaux et al., 2016 ; Chalmers and Debattista, 2017 ). Backward compatibility with legacy content and traditional systems is another issue. Despite recent progress, the potential of HDR has not been fully exploited yet.
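
To illustrate the tone reproduction step of the HDR pipeline mentioned above, here is a minimal global tone-mapping sketch in the spirit of the Reinhard operator, assuming NumPy; the key value, luminance weights and synthetic radiance data are illustrative choices, not a production tone mapper.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping of a linear HDR image to [0, 1].

    hdr: (H, W, 3) array of linear radiance values.
    key: target mid-grey of the mapped image.
    """
    # Luminance channel (Rec. 709 weights) and its log-average over the image.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    mapped = scaled / (1.0 + scaled)              # compress highlights, preserve shadows
    ratio = mapped / (lum + eps)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

hdr_image = np.random.rand(480, 640, 3) * 100.0   # synthetic high-dynamic-range content
ldr_image = reinhard_tonemap(hdr_image)
```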

Coding and Transmission

Three decades of standardization activities have continuously improved the hybrid video coding scheme based on the principles of transform coding and predictive coding. The Versatile Video Coding (VVC) standard was finalized in 2020 ( Bross et al., 2021 ), achieving approximately 50% bit rate reduction for the same subjective quality when compared to its predecessor, High Efficiency Video Coding (HEVC). While substantially outperforming VVC in the short term may be difficult, one encouraging direction is to rely on improved perceptual models to further optimize compression in terms of visual quality. Another direction, which has already shown promising results, is to apply deep learning-based approaches ( Ding et al., 2021 ). Here, one key issue is the ability to generalize these deep models to a wide diversity of video content. The second key issue is the implementation complexity, both in terms of computation and memory requirements, which is a significant obstacle to a widespread deployment. Besides, the emergence of new video formats targeting immersive communications is also calling for new coding schemes ( Wien et al., 2019 ).
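
To make the transform-coding half of this hybrid scheme concrete, here is a minimal sketch of an 8x8 block DCT followed by uniform quantization (no prediction, motion compensation or entropy coding), assuming NumPy and SciPy's `scipy.fft.dctn`/`idctn`; the quantization step and the random test block are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn   # assumes a recent SciPy is available

def code_block(block, q_step=16):
    """Transform-code one 8x8 image block: forward DCT, uniform quantization, inverse DCT."""
    coeffs = dctn(block.astype(np.float64) - 128.0, norm="ortho")
    quantized = np.round(coeffs / q_step)          # where the bit-rate saving would come from
    return idctn(quantized * q_step, norm="ortho") + 128.0

block = np.random.randint(0, 256, (8, 8))
reconstructed = code_block(block, q_step=24)
print("mean absolute error:", np.abs(block - reconstructed).mean())
```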

Considering that in many application scenarios, videos are processed by intelligent analytic algorithms rather than viewed by users, another interesting track is the development of video coding for machines ( Duan et al., 2020 ). In this context, the compression is optimized taking into account the performance of video analysis tasks.

The push toward hyper-realistic and immersive visual communications entails most often an increasing raw data rate. Despite improved compression schemes, more transmission bandwidth is needed. Moreover, some emerging applications, such as VR/AR, autonomous driving, and Industry 4.0, bring a strong requirement for low latency transmission, with implications on both the imaging processing pipeline and the transmission channel. In this context, the emergence of 5G wireless networks will positively contribute to the deployment of new multimedia applications, and the development of future wireless communication technologies points toward promising advances ( Da Costa and Yang, 2020 ).

Human Perception and Visual Quality Assessment

It is important to develop effective models of human perception. On the one hand, it can contribute to the development of perceptually inspired algorithms. On the other hand, perceptual quality assessment methods are needed in order to optimize and validate new imaging solutions.

The notion of Quality of Experience (QoE) relates to the degree of delight or annoyance of the user of an application or service ( Le Callet et al., 2012 ). QoE is strongly linked to subjective and objective quality assessment methods. Many years of research have resulted in the successful development of perceptual visual quality metrics based on models of human perception ( Lin and Kuo, 2011 ; Bovik, 2013 ). More recently, deep learning-based approaches have also been successfully applied to this problem ( Bosse et al., 2017 ). While these perceptual quality metrics have achieved good performances, several significant challenges remain. First, when applied to video sequences, most current perceptual metrics are applied on individual images, neglecting temporal modeling. Second, whereas color is a key attribute, there are currently no widely accepted perceptual quality metrics explicitly considering color. Finally, new modalities, such as 360° videos, light fields, point clouds, and HDR, require new approaches.
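
For reference, the classical signal-fidelity baseline that perceptual metrics aim to improve upon is the Peak Signal-to-Noise Ratio (PSNR); a minimal NumPy sketch follows, with synthetic test images.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```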

Another closely related topic is image esthetic assessment ( Deng et al., 2017 ). The esthetic quality of an image is affected by numerous factors, such as lighting, color, contrast, and composition. It is useful in different application scenarios such as image retrieval and ranking, recommendation, and photo enhancement. While earlier attempts have used handcrafted features, most recent techniques to predict esthetic quality are data driven and based on deep learning approaches, leveraging the availability of large annotated datasets for training ( Murray et al., 2012 ). One key challenge is the inherently subjective nature of esthetics assessment, resulting in ambiguity in the ground-truth labels. Another important issue is to explain the behavior of deep esthetic prediction models.

Analysis, Interpretation and Understanding

Another major research direction has been the objective to efficiently analyze, interpret and understand visual data. This goal is challenging, due to the high diversity and complexity of visual data. This has led to many research activities, involving both low-level and high-level analysis, addressing topics such as image classification and segmentation, optical flow, image indexing and retrieval, object detection and tracking, and scene interpretation and understanding. Hereafter, we discuss some trends and challenges.

Keypoints Detection and Local Descriptors

Local imaging matching has been the cornerstone of many analysis tasks. It involves the detection of keypoints, i.e. salient visual points that can be robustly and repeatedly detected, and descriptors, i.e. a compact signature locally describing the visual features at each keypoint. It allows to subsequently compute pairwise matching between the features to reveal local correspondences. In this context, several frameworks have been proposed, including Scale Invariant Feature Transform (SIFT) ( Lowe, 2004 ) and Speeded Up Robust Features (SURF) ( Bay et al., 2008 ), and later binary variants including Binary Robust Independent Elementary Feature (BRIEF) ( Calonder et al., 2010 ), Oriented FAST and Rotated BRIEF (ORB) ( Rublee et al., 2011 ) and Binary Robust Invariant Scalable Keypoints (BRISK) ( Leutenegger et al., 2011 ). Although these approaches exhibit scale and rotation invariance, they are less suited to deal with large 3D distortions such as perspective deformations, out-of-plane rotations, and significant viewpoint changes. Besides, they tend to fail under significantly varying and challenging illumination conditions.
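
A minimal example of this keypoint-plus-descriptor matching pipeline using ORB, assuming OpenCV's Python bindings (cv2) are installed; the image file names are placeholders.

```python
import cv2

# Placeholder file names; any pair of overlapping grayscale views will do.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# ORB combines a FAST keypoint detector with rotated BRIEF binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance; cross-checking
# keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

if matches:
    print(f"{len(matches)} putative correspondences; best distance = {matches[0].distance}")
```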

These traditional approaches based on handcrafted features have been successfully applied to problems such as image and video retrieval, object detection, visual Simultaneous Localization And Mapping (SLAM), and visual odometry. Besides, the emergence of new imaging modalities as introduced above can also be beneficial for image analysis tasks, including light fields ( Galdi et al., 2019 ), point clouds ( Guo et al., 2020 ), and HDR ( Rana et al., 2018 ). However, when applied to high-dimensional visual data for semantic analysis and understanding, these approaches based on handcrafted features have been supplanted in recent years by approaches based on deep learning.

Deep Learning-Based Methods

Data-driven deep learning-based approaches ( LeCun et al., 2015 ), and in particular the Convolutional Neural Network (CNN) architecture, represent nowadays the state-of-the-art in terms of performances for complex pattern recognition tasks in scene analysis and understanding. By combining multiple processing layers, deep models are able to learn data representations with different levels of abstraction.

Supervised learning is the most common form of deep learning. It requires a large and fully labeled training dataset; building such a dataset is typically a time-consuming and expensive process that must be repeated whenever a new application scenario is tackled. Moreover, in some specialized domains, e.g. medical data, it can be very difficult to obtain annotations. To alleviate this major burden, methods such as transfer learning and weakly supervised learning have been proposed.

In another direction, deep models have been shown to be vulnerable to adversarial attacks ( Akhtar and Mian, 2018 ). These attacks consist of introducing subtle perturbations to the input, such that the model predicts an incorrect output. For instance, in the case of images, imperceptible pixel differences are able to fool deep learning models. Such adversarial attacks are definitely an important obstacle to the successful deployment of deep learning, especially in applications where safety and security are critical. While some early solutions have been proposed, a significant challenge is to develop effective defense mechanisms against those attacks.
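
A minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM), assuming PyTorch is available; the untrained toy model and random input stand in for a real classifier and image.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """FGSM: perturb the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A perturbation that is imperceptible for small epsilon but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy demonstration with an untrained linear classifier and a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
```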

Finally, another challenge is to enable low complexity and efficient implementations. This is especially important for mobile or embedded applications. For this purpose, further interactions between signal processing and machine learning can potentially bring additional benefits. For instance, one direction is to compress deep neural networks in order to enable their more efficient handling. Moreover, by combining traditional processing techniques with deep learning models, it is possible to develop low complexity solutions while preserving high performance.

Explainability in Deep Learning

While data-driven deep learning models often achieve impressive performances on many visual analysis tasks, their black-box nature often makes it inherently very difficult to understand how they reach a predicted output and how it relates to particular characteristics of the input data. However, this is a major impediment in many decision-critical application scenarios. Moreover, it is important not only to have confidence in the proposed solution, but also to gain further insights from it. Based on these considerations, some deep learning systems aim at promoting explainability ( Adadi and Berrada, 2018 ; Xie et al., 2020 ). This can be achieved by exhibiting traits related to confidence, trust, safety, and ethics.

However, explainable deep learning is still in its early phase. More developments are needed, in particular to develop a systematic theory of model explanation. Important aspects include the need to understand and quantify risk, to comprehend how the model makes predictions for transparency and trustworthiness, and to quantify the uncertainty in the model prediction. This challenge is key in order to deploy and use deep learning-based solutions in an accountable way, for instance in application domains such as healthcare or autonomous driving.

Self-Supervised Learning

Self-supervised learning refers to methods that learn general visual features from large-scale unlabeled data, without the need for manual annotations. Self-supervised learning is therefore very appealing, as it allows exploiting the vast amount of unlabeled images and videos available. Moreover, it is widely believed that it is closer to how humans actually learn. One common approach is to use the data to provide the supervision, leveraging its structure. More generally, a pretext task can be defined, e.g. image inpainting, colorizing grayscale images, predicting future frames in videos, by withholding some parts of the data and by training the neural network to predict it ( Jing and Tian, 2020 ). By learning an objective function corresponding to the pretext task, the network is forced to learn relevant visual features in order to solve the problem. Self-supervised learning has also been successfully applied to autonomous vehicles perception. More specifically, the complementarity between analytical and learning methods can be exploited to address various autonomous driving perception tasks, without the prerequisite of an annotated data set ( Chiaroni et al., 2021 ).
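
As one concrete instance of the pretext-task idea (alongside the inpainting, colorization and frame-prediction examples above), the following sketch builds a rotation-prediction batch from unlabeled images, assuming PyTorch; the tensor sizes are arbitrary and no particular network is implied.

```python
import torch

def rotation_pretext_batch(images):
    """Build a self-supervised batch: rotate each image by 0/90/180/270 degrees
    and use the rotation index as a free label (no manual annotation needed)."""
    rotated, labels = [], []
    for k in range(4):                                     # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

images = torch.rand(8, 3, 64, 64)                          # unlabeled images
x_pretext, y_pretext = rotation_pretext_batch(images)
# A network trained to predict y_pretext from x_pretext is forced to learn useful visual features.
```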

While good performances have already been obtained using self-supervised learning, further work is still needed. A few promising directions are outlined hereafter. Combining self-supervised learning with other learning methods is a first interesting path. For instance, semi-supervised learning ( Van Engelen and Hoos, 2020 ) and few-shot learning ( Fei-Fei et al., 2006 ) methods have been proposed for scenarios where limited labeled data is available. The performance of these methods can potentially be boosted by incorporating a self-supervised pre-training. The pretext task can also serve to add regularization. Another interesting trend in self-supervised learning is to train neural networks with synthetic data. The challenge here is to bridge the domain gap between the synthetic and real data. Finally, another compelling direction is to exploit data from different modalities. A simple example is to consider both the video and audio signals in a video sequence. In another example in the context of autonomous driving, vehicles are typically equipped with multiple sensors, including cameras, LIght Detection And Ranging (LIDAR), Global Positioning System (GPS), and Inertial Measurement Units (IMU). In such cases, it is easy to acquire large unlabeled multimodal datasets, where the different modalities can be effectively exploited in self-supervised learning methods.

Reproducible Research and Large Public Datasets

The reproducible research initiative is another way to further ensure high-quality research for the benefit of our community ( Vandewalle et al., 2009 ). Reproducibility, referring to the ability by someone else working independently to accurately reproduce the results of an experiment, is a key principle of the scientific method. In the context of image and video processing, it is usually not sufficient to provide a detailed description of the proposed algorithm. Most often, it is essential to also provide access to the code and data. This is even more imperative in the case of deep learning-based models.

In parallel, the availability of large public datasets is also highly desirable in order to support research activities. This is especially critical for new emerging modalities or specific application scenarios, where it is difficult to get access to relevant data. Moreover, with the emergence of deep learning, large datasets, along with labels, are often needed for training, which can be another burden.

Conclusion and Perspectives

The field of image processing is very broad and rich, with many successful applications in both the consumer and business markets. However, many technical challenges remain in order to further push the limits in imaging technologies. Two main trends are on the one hand to always improve the quality and realism of image and video content, and on the other hand to be able to effectively interpret and understand this vast and complex amount of visual data. However, the list is certainly not exhaustive and there are many other interesting problems, e.g. related to computational imaging, information security and forensics, or medical imaging. Key innovations will be found at the crossroad of image processing, optics, psychophysics, communication, computer vision, artificial intelligence, and computer graphics. Multi-disciplinary collaborations are therefore critical moving forward, involving actors from both academia and the industry, in order to drive these breakthroughs.

The “Image Processing” section of Frontiers in Signal Processing aims at giving to the research community a forum to exchange, discuss and improve new ideas, with the goal to contribute to the further advancement of the field of image processing and to bring exciting innovations in the foreseeable future.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1 https://www.brandwatch.com/blog/amazing-social-media-statistics-and-facts/ (accessed on Feb. 23, 2021).

Adadi, A., and Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE access 6, 52138–52160. doi:10.1109/access.2018.2870052


Adelson, E. H., and Bergen, J. R. (1991). “The plenoptic function and the elements of early vision” Computational models of visual processing . Cambridge, MA: MIT Press , 3-20.


Akhtar, N., and Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430. doi:10.1109/access.2018.2807385

Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). Speeded-up robust features (SURF). Computer Vis. image understanding 110 (3), 346–359. doi:10.1016/j.cviu.2007.09.014

Bosse, S., Maniry, D., Müller, K. R., Wiegand, T., and Samek, W. (2017). Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 27 (1), 206–219. doi:10.1109/TIP.2017.2760518


Bovik, A. C. (2013). Automatic prediction of perceptual image and video quality. Proc. IEEE 101 (9), 2008–2024. doi:10.1109/JPROC.2013.2257632

Bross, B., Chen, J., Ohm, J. R., Sullivan, G. J., and Wang, Y. K. (2021). Developments in international video coding standardization after AVC, with an overview of Versatile Video Coding (VVC). Proc. IEEE . doi:10.1109/JPROC.2020.3043399

Calonder, M., Lepetit, V., Strecha, C., and Fua, P. (2010). Brief: binary robust independent elementary features. In K. Daniilidis, P. Maragos, and N. Paragios (eds) European conference on computer vision . Berlin, Heidelberg: Springer , 778–792. doi:10.1007/978-3-642-15561-1_56

Chalmers, A., and Debattista, K. (2017). HDR video past, present and future: a perspective. Signal. Processing: Image Commun. 54, 49–55. doi:10.1016/j.image.2017.02.003

Chiaroni, F., Rahal, M.-C., Hueber, N., and Dufaux, F. (2021). Self-supervised learning for autonomous vehicles perception: a conciliation between analytical and learning methods. IEEE Signal. Process. Mag. 38 (1), 31–41. doi:10.1109/msp.2020.2977269

Cisco (2019). Cisco visual networking index: forecast and trends, 2017-2022 (white paper) . Indianapolis, Indiana: Cisco Press .

Conti, C., Soares, L. D., and Nunes, P. (2020). Dense light field coding: a survey. IEEE Access 8, 49244–49284. doi:10.1109/ACCESS.2020.2977767

Da Costa, D. B., and Yang, H.-C. (2020). Grand challenges in wireless communications. Front. Commun. Networks 1 (1), 1–5. doi:10.3389/frcmn.2020.00001

Deng, Y., Loy, C. C., and Tang, X. (2017). Image aesthetic assessment: an experimental survey. IEEE Signal. Process. Mag. 34 (4), 80–106. doi:10.1109/msp.2017.2696576

Ding, D., Ma, Z., Chen, D., Chen, Q., Liu, Z., and Zhu, F. (2021). Advances in video compression system using deep neural network: a review and case studies . Ithaca, NY: Cornell university .

Duan, L., Liu, J., Yang, W., Huang, T., and Gao, W. (2020). Video coding for machines: a paradigm of collaborative compression and intelligent analytics. IEEE Trans. Image Process. 29, 8680–8695. doi:10.1109/tip.2020.3016485

Dufaux, F., Le Callet, P., Mantiuk, R., and Mrak, M. (2016). High dynamic range video - from acquisition, to display and applications . Cambridge, Massachusetts: Academic Press .

Dufaux, F., Pesquet-Popescu, B., and Cagnazzo, M. (2013). Emerging technologies for 3D video: creation, coding, transmission and rendering . Hoboken, NJ: Wiley .

Fei-Fei, L., Fergus, R., and Perona, P. (2006). One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach Intell. 28 (4), 594–611. doi:10.1109/TPAMI.2006.79

Galdi, C., Chiesa, V., Busch, C., Lobato Correia, P., Dugelay, J.-L., and Guillemot, C. (2019). Light fields for face analysis. Sensors 19 (12), 2687. doi:10.3390/s19122687

Graziosi, D., Nakagami, O., Kuma, S., Zaghetto, A., Suzuki, T., and Tabatabai, A. (2020). An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC). APSIPA Trans. Signal Inf. Process. 9, 2020. doi:10.1017/ATSIP.2020.12

Guarda, A., Rodrigues, N., and Pereira, F. (2020). Adaptive deep learning-based point cloud geometry coding. IEEE J. Selected Top. Signal Process. 15, 415-430. doi:10.1109/mmsp48831.2020.9287060

Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., and Bennamoun, M. (2020). Deep learning for 3D point clouds: a survey. IEEE transactions on pattern analysis and machine intelligence . doi:10.1109/TPAMI.2020.3005434

Han, X.-F., Jin, J. S., Wang, M.-J., Jiang, W., Gao, L., and Xiao, L. (2017). A review of algorithms for filtering the 3D point cloud. Signal. Processing: Image Commun. 57, 103–112. doi:10.1016/j.image.2017.05.009

Haskell, B. G., Puri, A., and Netravali, A. N. (1996). Digital video: an introduction to MPEG-2 . Berlin, Germany: Springer Science and Business Media .

Hirsch, R. (1999). Seizing the light: a history of photography . New York, NY: McGraw-Hill .

Ihrke, I., Restrepo, J., and Mignard-Debise, L. (2016). Principles of light field imaging: briefly revisiting 25 years of research. IEEE Signal. Process. Mag. 33 (5), 59–69. doi:10.1109/MSP.2016.2582220

Jing, L., and Tian, Y. (2020). “Self-supervised visual feature learning with deep neural networks: a survey,” IEEE transactions on pattern analysis and machine intelligence , Ithaca, NY: Cornell University .

Le Callet, P., Möller, S., and Perkis, A. (2012). Qualinet white paper on definitions of quality of experience. European network on quality of experience in multimedia systems and services (COST Action IC 1003), 3(2012) .

Le Gall, D. (1991). Mpeg: A Video Compression Standard for Multimedia Applications. Commun. ACM 34, 46–58. doi:10.1145/103085.103090

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. nature 521 (7553), 436–444. doi:10.1038/nature14539

Leutenegger, S., Chli, M., and Siegwart, R. Y. (2011). “BRISK: binary robust invariant scalable keypoints,” IEEE International conference on computer vision , Barcelona, Spain , 6-13 Nov, 2011 ( IEEE ), 2548–2555.

Lin, W., and Jay Kuo, C.-C. (2011). Perceptual visual quality metrics: a survey. J. Vis. Commun. image representation 22 (4), 297–312. doi:10.1016/j.jvcir.2011.01.005

Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60 (2), 91–110. doi:10.1023/b:visi.0000029664.99615.94

Lumiere, L. (1996). 1936 the lumière cinematograph. J. Smpte 105 (10), 608–611. doi:10.5594/j17187

Masia, B., Wetzstein, G., Didyk, P., and Gutierrez, D. (2013). A survey on computational displays: pushing the boundaries of optics, computation, and perception. Comput. & Graphics 37 (8), 1012–1038. doi:10.1016/j.cag.2013.10.003

Murray, N., Marchesotti, L., and Perronnin, F. (2012). “AVA: a large-scale database for aesthetic visual analysis,” IEEE conference on computer vision and pattern recognition , Providence, RI , June, 2012 . ( IEEE ), 2408–2415. doi:10.1109/CVPR.2012.6247954

Rana, A., Valenzise, G., and Dufaux, F. (2018). Learning-based tone mapping operator for efficient image matching. IEEE Trans. Multimedia 21 (1), 256–268. doi:10.1109/TMM.2018.2839885

Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). “ORB: an efficient alternative to SIFT or SURF,” IEEE International conference on computer vision , Barcelona, Spain , November, 2011 ( IEEE ), 2564–2571. doi:10.1109/ICCV.2011.6126544

Slater, M. (2014). Grand challenges in virtual environments. Front. Robotics AI 1, 3. doi:10.3389/frobt.2014.00003

Van Engelen, J. E., and Hoos, H. H. (2020). A survey on semi-supervised learning. Mach Learn. 109 (2), 373–440. doi:10.1007/s10994-019-05855-6

Vandewalle, P., Kovacevic, J., and Vetterli, M. (2009). Reproducible research in signal processing. IEEE Signal. Process. Mag. 26 (3), 37–47. doi:10.1109/msp.2009.932122

Wallace, G. K. (1992). The JPEG still picture compression standard. IEEE Trans. Consumer Electron. 38 (1), xviii-xxxiv. doi:10.1109/30.125072

Wien, M., Boyce, J. M., Stockhammer, T., and Peng, W.-H. (2019). Standardization status of immersive video coding. IEEE J. Emerg. Sel. Top. Circuits Syst. 9 (1), 5–17. doi:10.1109/JETCAS.2019.2898948

Wu, G., Masia, B., Jarabo, A., Zhang, Y., Wang, L., Dai, Q., et al. (2017). Light field image processing: an overview. IEEE J. Sel. Top. Signal. Process. 11 (7), 926–954. doi:10.1109/JSTSP.2017.2747126

Xie, N., Ras, G., van Gerven, M., and Doran, D. (2020). Explainable deep learning: a field guide for the uninitiated. Ithaca, NY: Cornell University.

Keywords: image processing, immersive, image analysis, image understanding, deep learning, video processing

Citation: Dufaux F (2021) Grand Challenges in Image Processing. Front. Sig. Proc. 1:675547. doi: 10.3389/frsip.2021.675547

Received: 03 March 2021; Accepted: 10 March 2021; Published: 12 April 2021.


Copyright © 2021 Dufaux. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Frédéric Dufaux, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Top 10 Digital Image Processing Project Topics

We guide research scholars in choosing novel digital image processing project topics. What is meant by digital image processing? Digital image processing is a method of handling images to gain different insights into a digital image. It comprises a set of technologies for analyzing an image in multiple aspects for better human / machine image interpretation. To be clearer, digital image processing projects aim either to improve the actual quality of an image or to extract its essential features from the entire picture.

This page is about the new upcoming Digital Image Processing Project Topics for scholars who wish to create a masterpiece in their research career!!!

When you get into the DIP research field, you need to know the following key terminologies. Generally, a digital image is represented as pixels arranged in an array. The dimensions of this rectangular array give the size of the image (MxN), where M denotes the columns and N denotes the rows. Further, x and y coordinates are used to signify the position of a single pixel in the image: the x value increases from left to right, and the y value increases from top to bottom.
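
A minimal NumPy illustration of this pixel-array and coordinate convention (the image size and example coordinates are arbitrary):

```python
import numpy as np

# A small grayscale image stored as a 2D array of pixel intensities.
image = np.zeros((240, 320), dtype=np.uint8)   # 240 rows (y direction), 320 columns (x direction)

x, y = 50, 10                                  # x grows left to right, y grows top to bottom
image[y, x] = 255                              # arrays are indexed row-first: image[y, x]

print(image.shape)        # (240, 320)
print(image[10, 50])      # 255
```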

Top 10 Digital Image Processing Project Topics Guidance

Important Digital Image Processing Terminologies  

  • Stereo Vision and Super Resolution
  • Multi-Spectral Remote Sensing and Imaging
  • Digital Photography and Imaging
  • Acoustic Imaging and Holographic Imaging
  • Computer Vision and Graphics
  • Image Manipulation and Retrieval
  • Quality Enrichment in Volumetric Imaging
  • Color Imaging and Bio-Medical Imaging
  • Pattern Recognition and Analysis
  • Imaging Software Tools, Technologies and Languages
  • Image Acquisition and Compression Techniques
  • Mathematical Morphological Image Segmentation

Image Processing Algorithms

In general, image processing techniques/methods are used to perform certain actions on input images and to extract the desired information from them. The input is an image, and the result is an improved image or the information associated with the task. Image processing algorithms therefore play a crucial role in current real-time applications. Various algorithms are used for various purposes, as follows:

  • Digital Image Detection
  • Image Reconstruction
  • Image Restoration
  • Image Enhancement
  • Image Quality Estimation
  • Spectral Image Estimation
  • Image Data Compression

For the above image processing tasks, algorithms are customized to the number of training and testing samples and can also be used for real-time/online processing. To date, filtering techniques have been widely used for image processing and enhancement; their main functions are listed below (a minimal sketch of a few of them follows this list):

  • Brightness Correction
  • Contrast Enhancement
  • Resolution and Noise Level of Image
  • Contouring and Image Sharpening
  • Blurring, Edge Detection and Embossing
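
As mentioned above, here is a minimal sketch of a few of these filtering functions (brightness correction, contrast enhancement, edge detection and blurring) applied to a synthetic grayscale image, assuming NumPy and SciPy; the gain, offset and sigma values are arbitrary.

```python
import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (256, 256)).astype(np.float64)   # stand-in grayscale image

# Brightness correction: add a constant offset, then clip back to the valid range.
brighter = np.clip(image + 40, 0, 255)

# Contrast enhancement: stretch intensities around the mean by a gain factor.
contrast = np.clip((image - image.mean()) * 1.5 + image.mean(), 0, 255)

# Edge detection: gradient magnitude from horizontal and vertical Sobel filters.
gx, gy = ndimage.sobel(image, axis=1), ndimage.sobel(image, axis=0)
edges = np.hypot(gx, gy)

# Blurring / smoothing with a Gaussian kernel.
blurred = ndimage.gaussian_filter(image, sigma=2)
```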

Some of the commonly used techniques for image processing can be classified into the following, 

  • Medium Level Image Processing Techniques – Binarization and Compression
  • Higher Level Image Processing Techniques – Image Segmentation
  • Low-Level Image Processing Techniques – Noise Elimination and Color Contrast Enhancement
  • Recognition and Detection Image Processing Algorithms – Semantic Analysis

Next, let's look at some of the traditional image processing algorithms for your information. Our research team will guide you in handpicking apt solutions for your research problems. If there is a need, we are also ready to design our own hybrid algorithms and techniques for sorting out complicated models.

Types of Digital Image Processing Algorithms

  • Hough Transform Algorithm
  • Canny Edge Detector Algorithm
  • Scale-Invariant Feature Transform (SIFT) Algorithm
  • Generalized Hough Transform Algorithm
  • Speeded Up Robust Features (SURF) Algorithm
  • Marr–Hildreth Algorithm
  • Connected-component labeling algorithm: identifies and labels the distinct connected regions of an image
  • Histogram equalization algorithm: enhances image contrast by redistributing the intensity histogram
  • Adaptive histogram equalization algorithm: applies histogram equalization locally, so that contrast is adapted to each region of the image (see the sketch after this list)
  • Error Diffusion Algorithm
  • Ordered Dithering Algorithm
  • Floyd–Steinberg Dithering Algorithm
  • Riemersma Dithering Algorithm
  • Richardson–Lucy deconvolution algorithm: also known as a deblurring algorithm; it removes known blur from an image to recover an estimate of the original image
  • Seam carving algorithm: also known as content-aware image resizing; it removes or inserts low-energy seams, distinguished from salient content using image information, so that important regions are preserved
  • Region Growing Algorithm
  • GrowCut Algorithm
  • Watershed Transformation Algorithm
  • Random Walker Algorithm
  • Elser difference-map algorithm: a search-based algorithm for general constraint satisfaction problems, used primarily in X-ray diffraction microscopy
  • Blind deconvolution algorithm: similar to Richardson–Lucy deconvolution in that it reconstructs a sharp image from a blurred one, but without requiring the blur kernel to be known in advance
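As a concrete, hedged illustration of a few algorithms from this list (histogram equalization, its adaptive variant, the Canny edge detector, and a probabilistic Hough transform), the following OpenCV sketch processes a hypothetical input image.

```python
import cv2
import numpy as np

gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

equalized = cv2.equalizeHist(gray)                     # histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
adaptive = clahe.apply(gray)                           # adaptive histogram equalization

edges = cv2.Canny(equalized, 50, 150)                  # Canny edge detector

# Probabilistic Hough transform: detect line segments in the edge map.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=30, maxLineGap=10)
print(0 if lines is None else len(lines), "line segments detected")
```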

Nowadays, various industries also utilize digital image processing, developing customized procedures to satisfy their own requirements, either from scratch or by hybridizing existing algorithmic functions. As a result, it is clear that image processing is driving revolutionary developments in many information technology sectors and applications.

Research Digital Image Processing Project Topics

Digital Image Processing Techniques

  • Smooth the image by substituting the median or a common neighborhood value for the actual pixel value; this is applied when edge sharpness is weak or the image is blurred
  • Eliminate geometric distortion in an image through scaling, warping, translation, and rotation
  • Analyze the underlying image content to uncover hidden data, or convert a color image into a gray-scale image
  • Partition the image into multiple regions based on certain constraints, for instance foreground and background
  • Enhance the image display through pixel-based threshold operations
  • Reduce image noise by averaging multiple images of differing quality
  • Sharpen the image by boosting pixel values along edges
  • Extract specific features to support noise removal in an image
  • Perform arithmetic operations (addition, subtraction, division, and multiplication) to identify the variation between images (see the sketch below)
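The sketch below illustrates several of these techniques (median smoothing, thresholding, frame averaging, and pixel-wise differencing) with OpenCV; the two input file names are placeholders.

```python
import cv2
import numpy as np

img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder inputs
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

smoothed = cv2.medianBlur(img_a, 5)                      # replace pixels by the neighborhood median

_, binary = cv2.threshold(img_a, 128, 255, cv2.THRESH_BINARY)   # pixel-based thresholding

frames = np.stack([img_a, img_b]).astype(np.float32)
averaged = frames.mean(axis=0).astype(np.uint8)          # noise reduction by averaging images

difference = cv2.absdiff(img_a, img_b)                   # arithmetic: per-pixel difference
```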

Beyond this, the field offers numerous Digital Image Processing Project Topics for current and upcoming scholars. Below, we list some research ideas that help you classify, analyze, represent, and display images or particular characteristics of an image.

Latest 11 Interesting Digital Image Processing Project Topics

  • Acoustic and Color Image Processing
  • Digital Video and Signal Processing
  • Multi-spectral and Laser Polarimetric Imaging
  • Image Processing and Sensing Techniques
  • Super-resolution Imaging and Applications
  • Passive and Active Remote Sensing
  • Time-Frequency Signal Processing and Analysis
  • 3-D Surface Reconstruction using Remote Sensed Image
  • Digital Image based Steganalysis and Steganography
  • Radar Image Processing for Remote Sensing Applications
  • Adaptive Clustering Algorithms for Image processing

Moreover, if you want to know more about Digital Image Processing Project Topics for your research, then communicate with our team. We will give detailed information on current trends, future developments, and real-time challenges in the research areas of Digital Image Processing.



Research Topics

Biomedical Imaging


The current plethora of imaging technologies such as magnetic resonance imaging (MR), computed tomography (CT), positron emission tomography (PET), optical coherence tomography (OCT), and ultrasound provides great insight into the different anatomical and functional processes of the human body.

Computer Vision


Computer vision is the science and technology of teaching a computer to interpret images and video as well as a typical human. Technically, computer vision encompasses the fields of image/video processing, pattern recognition, biological vision, artificial intelligence, augmented reality, mathematical modeling, statistics, probability, optimization, 2D sensors, and photography.

Image Segmentation/Classification


Extracting information from a digital image often depends on first identifying desired objects or breaking down the image into homogeneous regions (a process called 'segmentation') and then assigning these objects to particular classes (a process called 'classification'). This is a fundamental part of computer vision, combining image processing and pattern recognition techniques.

Multiresolution Techniques


The VIP lab has a particularly extensive history with multiresolution methods, and a significant number of research students have explored this theme. Multiresolution methods are very broad, essentially meaning that an image or video is modeled, represented, or has features extracted on more than one scale, allowing both local and non-local phenomena to be captured.

Remote Sensing


Remote sensing, or the science of capturing data of the earth from airplanes or satellites, enables regular monitoring of land, ocean, and atmosphere expanses, representing data that cannot be captured using any other means. A vast amount of information is generated by remote sensing platforms and there is an obvious need to analyze the data accurately and efficiently.

Scientific Imaging


Scientific Imaging refers to working on two- or three-dimensional imagery taken for a scientific purpose, in most cases acquired either through a microscope or remotely-sensed images taken at a distance.

Stochastic Models


In many image processing, computer vision, and pattern recognition applications, there is often a large degree of uncertainty associated with factors such as the appearance of the underlying scene within the acquired data, the location and trajectory of the object of interest, and the physical appearance (e.g., size, shape, color) of the objects being detected.

Video Analysis


Video analysis is a field within computer vision that involves the automatic interpretation of digital video using computer algorithms. Although humans are readily able to interpret digital video, developing algorithms for the computer to perform the same task has proven highly elusive and is now an active research field.


Evolutionary Deep Intelligence

Deep learning has shown considerable promise in recent years, producing tremendous results and significantly improving the accuracy of a variety of challenging problems when compared to other machine learning methods.


Discovery Radiomics

Radiomics, which involves the high-throughput extraction and analysis of a large amount of quantitative features from medical imaging data to characterize tumor phenotype in a quantitative manner, is ushering in a new era of imaging-driven quantitative personalized cancer decision support and management. 


Sports Analytics

Sports Analytics is a growing field in computer vision that analyzes visual cues from images to provide statistical data on players, teams, and games. Want to know how a player's technique improves the quality of the team? Can a team, based on its defensive position, increase its chances of reaching the finals? These are a few of the plethora of questions that sports analytics answers.



Viewpoints on Medical Image Processing: From Science to Application

Thomas M. Deserno (né Lehmann)

1 Department of Medical Informatics, Uniklinik RWTH Aachen, Germany;

Heinz Handels

2 Institute of Medical Informatics, University of Lübeck, Germany;

Klaus H. Maier-Hein (né Fritzsche)

3 Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany;

Sven Mersmann

4 Medical and Biological Informatics, Junior Group Computer-assisted Interventions, German Cancer Research Center, Heidelberg, Germany;

Christoph Palm

5 Regensburg – Medical Image Computing (Re-MIC), Faculty of Computer Science and Mathematics, Regensburg University of Applied Sciences, Regensburg, Germany;

Thomas Tolxdorff

6 Institute of Medical Informatics, Charité - Universitätsmedizin Berlin, Germany;

Gudrun Wagenknecht

7 Electronic Systems (ZEA-2), Central Institute of Engineering, Electronics and Analytics, Forschungszentrum Jülich GmbH, Germany;

Thomas Wittenberg

8 Image Processing & Biomedical Engineering Department, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as fields of rapid development, with clear trends toward integrated applications in diagnostics, treatment planning, and treatment.

1.  INTRODUCTION

Current advances in medical imaging are made in fields such as instrumentation, diagnostics, and therapeutic applications and most of them are based on imaging technology and image processing. In fact, medical image processing has been established as a core field of innovation in modern health care [ 1 ] combining medical informatics, neuro-informatics and bioinformatics [ 2 ].

In 1984, the Society of Photo-Optical Instrumentation Engineers (SPIE) launched a multi-track conference on medical imaging, which is still considered the core event for innovation in the field. Analogously, in Germany, the workshop “Bildverarbeitung für die Medizin” (BVM, Image Processing for Medicine) recently celebrated its 20th annual meeting. Over the years, the meeting has evolved into a multi-track conference of international standard [ 3 , 4 , 5 , 6 , 7 , 8 , 9 ].

Nonetheless, it is hard to name the most important and innovative trends within this broad field, which ranges from image acquisition using novel imaging modalities to information extraction in diagnostics and treatment. Ritter et al. recently emphasized the following aspects: (i) enhancement, (ii) segmentation, (iii) registration, (iv) quantification, (v) visualization, and (vi) computer-aided detection (CAD) [ 10 ].

Another concept of structuring is here referred to as the “from-to” approach. For instance,

  • From nano to macro : In 2002, the Institute of Electrical and Electronics Engineers (IEEE) launched the International Symposium on Biomedical Imaging (ISBI), co-founded by Michael Unser of EPFL, Switzerland. The conference is focused on the motto “from nano to macro”, covering all aspects of medical imaging from the sub-cellular to the organ level.
  • From production to sharing : Another “from-to” migration is seen in the shift from acquisition to communication [ 11 ]. Clark et al. expected advances in the medical imaging fields along the following four axes: (i) image production and new modalities; (ii) image processing, visualization, and system simulation; (iii) image management and retrieval; and (iv) image communication and telemedicine.
  • From kilobyte to terabyte : Deserno et al. identified another “from-to” migration in the amount of data produced by medical imagery [ 12 ]. Today, high-resolution CT reconstructs images with 8000 x 8000 pixels per slice at 0.7 μm isotropic detail detectability, and whole-body scans at this resolution reach several gigabytes (GB) of data. Also, microscopic whole-slide scanning systems can easily provide so-called virtual slides in the range of 30,000 x 50,000 pixels, which equals 16.8 GB at 10-bit gray scale.
  • From science to application : Finally, in this paper, we aim to analyze recent advances in medical imaging on another level. The focus is on identifying core fields that foster the transfer of algorithms into clinical use and on addressing gaps that still remain to be bridged in future research.

The remainder of this review is organized as follows. In Section 2, we briefly analyze the history of the German workshop BVM; more than 15 years of proceedings are currently available, and statistical analysis is applied to identify trends in the content of the conference papers. Section 3 then provides personal viewpoints on challenging and pioneering fields. The results are discussed in Section 4.

2.  THE GERMAN HISTORY FROM SCIENCE TO APPLICATION

Since 1994, annual proceedings of the contributions presented at the BVM workshops have been published; from 1996 onward, they are available electronically in PostScript (PS) or Portable Document Format (PDF). Regardless of the type of presentation (oral, poster, or software demonstration), authors may submit papers of up to five pages in length; in 2012, the limit was increased to six pages. Both English and German papers are allowed. The number of English contributions has increased steadily over the years, reaching about 50% in 2008 [ 8 ].

In order to analyze the content of the proceedings (on average 124k words long) with regard to the most relevant topics discussed at the BVM workshops, the incidence of the most frequent words was assessed for each proceedings volume from 1996 until 2012. About 300 common words of the German and English languages (e.g., and/und) were excluded from this investigation. Fig. 1 presents a word cloud computed from the 100 most frequent terms used in the proceedings of the 2012 BVM workshop; the font size of each word reflects its counted frequency in the text.

Fig. 1. Word cloud representing the 100 most frequent terms counted from the 469-page BVM proceedings 2012 [13].
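A minimal sketch of this kind of word-frequency analysis is given below; it is not the authors' original code, and the input file and the short stop-word set are placeholders (the study excluded about 300 German and English stop words).

```python
import re
from collections import Counter

with open("bvm_2012_proceedings.txt", encoding="utf-8") as f:   # placeholder file
    text = f.read().lower()

# The study excluded ~300 common German/English words; only a few are shown here.
stop_words = {"and", "und", "the", "der", "die", "das", "of", "in", "for", "mit"}

words = re.findall(r"[a-zäöüß]+", text)
counts = Counter(w for w in words if w not in stop_words)

# The 100 most frequent terms would feed a word cloud such as Fig. 1.
for term, freq in counts.most_common(100):
    print(term, freq)
```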

It can be seen that in 2012, “image” was the most frequent word in the BVM proceedings (920 incidences), as was also the case in all other years (1996-2012: 10,123 incidences). Together with terms like “reconstruction”, “analysis”, and “processing”, medical imaging is clearly recognizable as the major subject of the BVM workshops.

Concerning the scientific direction of the BVM meeting over time, terms such as “segmentation”, “registration”, and “navigation”, which indicate image processing procedures relevant for clinical applications, have been used with increasing frequency (Fig. 2, left). The same holds for terms like “evaluation” and “experiment”, which relate to the validation of the contributions (Fig. 2, middle) and constitute a first step towards translating scientific results into clinical application. Fig. 2 (right) shows the occurrence of the words “patient” and “application” in the contributed papers of the BVM workshops between 1996 and 2012; here, rather constant numbers of occurrences indicate a consistent focus on clinical applications.

Fig. 2. Trends from the BVM workshop proceedings for important terms of processing procedures (left), experimental verification (middle), and application to humans (right).

3.  VIEWPOINTS FROM SCIENCE TO APPLICATION

3.1. Multi-Modal Image Processing for Imaging and Diagnosis

Multi-modal imaging refers to (i) different measurements at a single tomographic system (e.g., MRI and functional MRI), (ii) measurements at different tomographic systems (e.g., computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT)), and (iii) measurements at integrated tomographic systems (PET/CT, PET/MR). Hence, multi-modal tomography has become increasingly popular in clinical and preclinical applications (Fig. 3), providing images of morphology and function (Fig. 4).

Fig. 3. PubMed-cited papers for the search “multimodal AND (imaging OR tomography OR image)”.

Fig. 4. Morphological and functional imaging in clinical and pre-clinical applications.

Multi-modal image processing for enhancing multi-modal imaging procedures primarily deals with image reconstruction and artifact reduction. Examples are the integration of additional information about tissue types from MRI as an anatomical prior for the iterative reconstruction of PET images [ 14 ] and the CT- or MR-based correction of attenuation artifacts in PET, which is an essential prerequisite for quantitative PET analysis [ 15 , 16 ]. Since these algorithms are part of the imaging workflow, only highly automated, fast, and robust algorithms providing adequate accuracy are appropriate solutions. Accordingly, the whole image in the different modalities must be considered.

This requirement differs for multi-modal diagnostic approaches. In most applications, a single organ or parts of an organ are of interest. Anatomical and particularly pathological regions often show a high variability due to structure, deformation, or movement, which is difficult to predict and thus poses a great challenge for image processing. In multi-modality applications, images represent complementary information, often obtained at different time-scales, which introduces additional complexity for algorithms. Further disparities are introduced by the different resolutions and fields of view, which show the organ of interest in different degrees of completeness. From a scientific and thus algorithmic point of view, image processing methods for multi-modal images must meet higher requirements than those applied to single-modality images.

Looking exemplarily at segmentation, one of the most complex and demanding problems in medical image processing, the modality showing anatomical and pathological structures in high resolution and contrast (e.g., MRI, CT) is typically used to segment the structure or volume of interest (VOI), in order to subsequently analyze other properties, such as function, within these target structures. Here, the different resolutions have to be taken into account to correct for partial volume effects in the functional modality (e.g., PET, SPECT). Since the structures to be analyzed depend on the disease of the actual patient examined, automatic segmentation approaches are appropriate solutions if the anatomical structures of interest are known beforehand [ 17 ], while semi-automatic approaches are advantageous if flexibility is needed [ 18 , 19 ].
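The resolution mismatch described here can be made concrete with a small, hedged SimpleITK sketch (our own illustration, not the authors' pipeline): a low-resolution functional image is resampled onto the grid of a high-resolution anatomical image so that a VOI segmented on the anatomical image can be used to read out functional values. No partial volume correction is attempted, and the file names are placeholders.

```python
import SimpleITK as sitk

mri = sitk.ReadImage("mri.nii.gz")        # high-resolution anatomical reference (placeholder)
pet = sitk.ReadImage("pet.nii.gz")        # low-resolution functional image (placeholder)
voi = sitk.ReadImage("voi_mask.nii.gz")   # binary VOI segmented on the MRI grid (placeholder)

# Resample PET onto the MRI grid; the identity transform assumes the images are already co-registered.
pet_on_mri = sitk.Resample(pet, mri, sitk.Transform(), sitk.sitkLinear, 0.0)

pet_values = sitk.GetArrayFromImage(pet_on_mri)
voi_mask = sitk.GetArrayFromImage(voi) > 0
print("Mean functional value inside the VOI:", float(pet_values[voi_mask].mean()))
```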

Transferring research into diagnostic application software requires a graphical user interface (GUI) to parameterize the algorithms, 2D and 3D visualization of multi-modal images and segmentation results, and tools to interact with the visualized images during the segmentation procedure. The Medical Interaction Toolkit [ 20 ] or MeVisLab [ 21 ] provide the developer with frameworks for multi-modal visualization and interaction and with tools to build appropriate GUIs, yielding an interface for integrating new algorithms from science to application.

Another important aspect transferring algorithms from pure academics to clinical practice is evaluation. Phantoms can be used for evaluating specific properties of an algorithm, but not for evaluating the real situation with all its uncertainties and variability. Thus, the most important step of migrating is extensive testing of algorithms on large amounts of real clinical data, which is a great challenge particularly for multi-modal approaches, and should in future be more supported by publicly available databases.

3.2. Analysis of Diffusion Weighted Images

Due to its sensitivity to micro-structural changes in white matter, diffusion weighted imaging (DWI) is of particular interest to brain research. Stroke is the most common and well known clinical application of DWI, where the images allow the non-invasive detection of ischemia within minutes of onset and are sensitive and relatively specific in detecting changes triggered by strokes [ 22 ]. The technique has also allowed deeper insights into the pathogenesis of Alzheimer’s disease, Parkinson disease, autism spectrum disorder, schizophrenia, and many other psychiatric and non-psychiatric brain diseases. DWI is also applied in the imaging of (mild) traumatic brain injury, where conventional techniques lack sensitivity to detect the subtle changes occurring in the brain. Here, studies on sports-related traumata in the younger population have raised considerable debates in the recent past [ 23 ].

Methodologically, recent advances in the generation and analysis of large-scale networks on the basis of DWI are particularly exciting and promise new dimensions in quantitative neuro-imaging via the application of the profound set of tools available in graph theory to brain image analysis [ 24 ]. DWI sheds light on the living brain's network architecture, revealing the organization of fiber connections together with their development and change in disease.
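To illustrate the graph-theoretical angle, the following hedged sketch builds a brain network from a purely synthetic connectivity matrix (standing in for streamline counts from tractography) and computes a few simple measures with NetworkX; the parcellation size and threshold are arbitrary assumptions.

```python
import numpy as np
import networkx as nx

n_regions = 90                                   # e.g., an AAL-style parcellation (assumption)
rng = np.random.default_rng(0)
connectivity = rng.random((n_regions, n_regions))
connectivity = (connectivity + connectivity.T) / 2       # symmetric "streamline counts"
np.fill_diagonal(connectivity, 0)

# Keep only the strongest connections to obtain a sparse binary network.
adjacency = (connectivity > 0.8).astype(int)
graph = nx.from_numpy_array(adjacency)

print("Network density:", nx.density(graph))
print("Average clustering coefficient:", nx.average_clustering(graph))
print("Degree of region 0:", graph.degree[0])
```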

Big challenges remain to be solved, though. Despite many years of methodological development in DWI post-processing, the field still seems to be in its infancy. The reliable tractography-based reconstruction of known or pathological anatomy is still not solved. Reconstruction challenges at the 2011 and 2012 annual meetings of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have demonstrated the lack of methods that can reliably reconstruct large and well-known structures like the cortico-spinal tract in datasets of clinical quality [ 25 ]. Missing reference-based evaluation techniques hinder a well-founded demonstration of the real advantages of novel tractography algorithms over previous methods [ 26 ]. These limitations have obscured a broader application of DWI tractography, e.g. in surgical guidance. Even though the application of DWI, e.g. in surgical resection, has been shown to facilitate the identification of risk structures [ 27 ], the widespread use of these techniques in surgical practice remains limited, mainly by the lack of robust and standardized methods that can be applied multi-centrically across institutions and by the lack of comprehensive evaluation of these algorithms.

However, there are numerous applications of DWI in cancer imaging, which bridge imaging science and clinical application. The imaging modality has shown potential in the detection, staging and characterization of tumors (Fig. 5), the evaluation of therapy response, or even in the prediction of therapy outcome [ 28 ]. DWI was also applied in the detection and characterization of lesions in the abdomen and the pelvis, where increased cellularity of malignant tissue leads to restricted diffusion when compared to the surrounding tissue [ 29 ]. The challenge here again will be the establishment of reliable sequences and post-processing methods for the wide-spread and multi-centric application of the techniques in the future.

Fig. 5. Depiction of fiber tracts in the vicinity of a grade IV glioblastoma. The volumetric tracking result (yellow) was overlaid on an axial T2-FLAIR image. Red and green arrows indicate the necrotic tumor core and peritumoral hyperintensity, respectively. In the frontal parts, fiber tracts are still depicted, whereas in the dorsal part, tracts seem to be either displaced or destroyed by the tumor.

3.3. Model-Based Image Analysis

As already emphasized in the previous viewpoints, there is a big gap between the state of the art in current research and methods available in clinical application, especially in the field of medical image analysis [ 30 ]. Segmentation of relevant image structures (tissues, tumors, vessels etc.) is still one of the key problems in medical image computing lacking robust and automatic methods. The application of pure data-driven approaches like thresholding, region growing, edge detection, or enhanced data-driven methods like watershed algorithms, Markov random field (MRF)-based approaches, or graph cuts often leads to weak segmentations due to low contrasts between neighboring image objects, image artifacts, noise, partial volume effects etc.

Model-based segmentation integrates a-priori knowledge of the shape and appearance of relevant structures into the segmentation process. For example, the local shape of a vessel can be characterized by the vesselness operator [ 31 ], which generates images with an enhanced representation of vessels. Using the vesselness information in combination with the original grey-value image, the segmentation of vessels can be improved significantly, and in particular the segmentation of small vessels becomes possible (e.g. [ 32 ]).
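A hedged sketch of this idea is shown below using scikit-image's frangi filter, one widely used implementation of a vesselness operator in the spirit of [31]; the input slice, scales, and threshold are assumptions for illustration only.

```python
import numpy as np
from skimage import io, filters

image = io.imread("angio_slice.png", as_gray=True)   # placeholder 2D slice

# Multi-scale vesselness response; bright tubular structures are enhanced.
vesselness = filters.frangi(image, sigmas=range(1, 6), black_ridges=False)

# Crude vessel mask: combine the vesselness response with the original grey values.
mask = (vesselness > 0.05) & (image > image.mean())
print("Vessel pixels:", int(mask.sum()))
```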

In statistical or active shape and appearance models [ 33 , 34 ], the shape variability of organs among individuals and characteristic gray value distributions in the neighborhood of the organ can be represented. In these approaches, a set of segmented image data is used to train active shape and active appearance models, which include information about the mean shape and its variations as well as characteristic gray value distributions and their variation across the population represented in the training data set. Instead of the direct point-to-point correspondences used in the generation of classical statistical shape models, Hufnagel et al. have suggested probabilistic point-to-point correspondences [ 35 ]. This approach takes into account that inaccuracies are often unavoidable when direct point correspondences are defined between the organs of different persons. In probabilistic statistical shape models, these correspondence uncertainties are respected explicitly to improve the robustness and accuracy of shape modeling and model-based segmentation. Integrated into an energy-minimizing level set framework, probabilistic statistical shape models can be used for enhanced organ segmentation [ 36 ].
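The core of a classical statistical shape model can be sketched in a few lines: aligned landmark sets from several training subjects are decomposed by PCA, and new shapes are synthesized as the mean shape plus weighted modes of variation. The sketch below uses synthetic data and is only a minimal illustration of that principle, not of the probabilistic correspondence approach of [35].

```python
import numpy as np

n_subjects, n_landmarks = 20, 30
rng = np.random.default_rng(1)
shapes = rng.normal(size=(n_subjects, n_landmarks * 2))   # flattened (x, y) landmarks per subject

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centered data matrix: rows of `components` are modes of variation.
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)
eigenvalues = singular_values ** 2 / (n_subjects - 1)

# Synthesize a plausible shape: mean plus two standard deviations of the first mode.
weight = 2.0 * np.sqrt(eigenvalues[0])
new_shape = mean_shape + weight * components[0]
print("Generated shape vector of length", new_shape.shape[0])
```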

In contrast thereto, atlas-based segmentation methods (e.g., [ 37 ]) realize a case-based approach and make use of the segmentation information contained in a single segmented data set, which is transferred to an unseen patient image data set. The transfer of the atlas segmentation to the patient segmentation is done by inter-individual non-linear registration methods. Multi-atlas segmentation methods using several atlases have been proposed (e.g. [ 38 ]) and show improved accuracy and robustness in comparison to single-atlas segmentation methods. Hence, multi-atlas approaches are currently a focus of further research [ 39 , 40 ].
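The fusion step of the multi-atlas idea can be illustrated with a hedged sketch: assuming each atlas label map has already been non-linearly warped to the patient image (the registration itself is omitted), a per-voxel majority vote produces the fused segmentation. The label volumes below are synthetic, and SciPy (>= 1.9 for the keepdims argument) is assumed to be available.

```python
import numpy as np
from scipy import stats

n_atlases, volume_shape = 5, (16, 16, 16)
rng = np.random.default_rng(2)

# Stand-ins for atlas label maps already warped onto the patient grid (labels 0..2).
warped_labels = rng.integers(0, 3, size=(n_atlases, *volume_shape))

# Majority vote across atlases at every voxel.
fused, _ = stats.mode(warped_labels, axis=0, keepdims=False)
print("Fused segmentation shape:", fused.shape)
```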

In the future, more task-oriented systems integrated into diagnostic processes, intervention planning, therapy, and follow-up are needed. In the field of image analysis, due to the limited time of physicians, automatic procedures are of special interest for segmenting structures and extracting quantitative object parameters in an accurate, reproducible, and robust way. Furthermore, intelligent and easy-to-use methods for the fast correction of unavoidable segmentation errors are needed.

3.4. Registration of Section Images

Imaging techniques such as histology [ 41 ] or auto-radiography [ 42 ] are based on thin post-mortem sections. In comparison to in-vivo imaging, e.g. positron emission tomography (PET), magnetic resonance imaging (MRI), or DWI (as addressed in the previous viewpoint, cf. Section 3.2), several properties are considered advantageous. For instance, tissue can be processed after sectioning to enhance contrast (e.g. staining) [ 43 ], to mark specific properties like receptors [ 44 ], or to apply laser ablation for studying the spatial element distribution [ 45 ]; tissue can be scanned at high resolution [ 43 ]; and tissue is thin enough to allow optical light transmission imaging, e.g. polarized light imaging (PLI) [ 46 ]. Therefore, section imaging yields highly space-resolved and high-contrast data, which supports findings such as cytoarchitectonic boundaries [ 47 ], neuronal fiber directions [ 48 ], and receptor or element distributions [ 45 ].

Restacking 2D sections into a 3D volume, followed by the fusion of this stack with an in-vivo volume, is the challenging task for medical image processing on the track from science to application. The 3D section stacks then serve as an atlas for a large variety of applications. Sections are non-linearly deformed during cutting and post-processing. Additionally, discontinuous artifacts like tears or rolled-up tissue hamper the correspondence between the true structure and the imaged tissue.

The so-called “problem of the digitized banana” [ 41 ] prohibits section-by-section registration without a 3D reference. Smoothness of registered stacks is not equivalent to consistency and correctness. Whereas the deformations are section-specific, the orientation of the sections relative to the 3D structure depends on the cutting direction and is thus the same for all sections. In this tangled situation, the question arises whether it is better to (i) restack the sections first, register the whole stack afterwards, and correct for deformations last (volume-first approach), or (ii) register each section individually to the 3D reference volume while correcting deformations at the same time (section-first approach). Both approaches combine:

  • Multi-modal registration : The need for a 3D reference and the goal of correlating high-resolution section imaging findings with in-vivo imaging are sometimes addressed at the same time. If possible, the 3D in-vivo modality itself is used as the reference.

Fig. 6. Characteristic flow chart of the volume-first approach and volume generation with (gray boxes) or without blockface images as an intermediate reference modality (Column I). Either the in-vivo volume is post-processed to generate a pseudo-high-resolution volume with propagated section gaps (Column II), or the section volume is post-processed to obtain a low-resolution stack with filled gaps (Column III) [42].

Due to the variety of difficulties, the missing evaluation possibilities, and section specifics such as post-processing, embedding, cutting procedure, and tissue type, there is no single best approach for moving from 2D to 3D. Careful work in this field, however, pays off in cutting-edge applications. Not least within the European flagship, the Human Brain Project (HBP), further research in this area of medical image processing is in demand. The state-of-the-art review of the HBP states in the context of human brain mapping: “What is missing to date is an integrated open source tool providing a standard application programming interface (API) for data registration and coordinate transformations and guaranteeing multi-scale and multi-modal data accuracy” [ 49 ]. Such a tool will narrow the gap from science to application.

3.5. From Images to Information in Digital Endoscopy

Basic endoscopic technologies and their routine applications (Fig. 7, bottom layers) are still purely data-oriented, as the complete image analysis and interpretation is performed solely by the physician. If the content of endoscopic imagery is analyzed automatically, several new application scenarios for diagnostics and intervention with increasing complexity can be identified (Fig. 7, upper layers). As these new possibilities of endoscopy are inherently coupled with the use of computers, these new endoscopic methods and applications can be referred to as computer-integrated endoscopy [ 50 ]. Information, however, is referred to on the highest of the five levels of semantics (Fig. 7):

Fig. 7. Modules to build computer-integrated endoscopy, which enables information gain from image data.

  • 1. Acquisition : Advancements in diagnostic endoscopy were obtained by glass fibers for the transmission of electric light into, and image information out of, the body. Besides purely wire-bound transmission of endoscopic imagery, in the past 10 years wireless transmission has become available for gastroscopic video data captured by capsule endoscopes [ 51 ].
  • 2. Transportation : Based on digital technologies, essential basic processes of endoscopic still image and image sequence capturing, storage, archiving, documentation, annotation and transmission have been simplified. These developments have initially led to the possibilities for tele-diagnosis and tele-consultations in diagnostic endoscopy, where the image data is shared using local networks or the internet [ 52 ].
  • 3. Enhancement : Methods and applications for image enhancement include the intelligent removal of honey-comb patterns in fiberscopic recordings [ 53 ], temporal filtering for the reduction of ablation smoke and moving particles [ 54 ], and image rectification for gastroscopes. Additionally, despite their increased complexity, these methods have to work in real time with a maximum delay of 60 milliseconds to be acceptable to surgeons and physicians.
  • 4. Augmentation : Image processing enhances endoscopic views with additional types of information. Examples are an artificial working horizon, key-hole views into endoscopic panorama images [ 55 ], and 3D surfaces computed from point clouds obtained by special endoscopic imaging devices such as stereo endoscopes [ 56 ], time-of-flight endoscopes [ 57 ], or shape-from-polarization approaches [ 58 ]. This level also includes the possibilities of visualization and image fusion of endoscopic views with preoperatively acquired radiological imagery such as angiography or CT data [ 59 ] for better intra-operative orientation and navigation, as well as image-based tracking and navigation through tubular structures [ 60 ].
  • 5. Content : Methods of content-based image analysis consider the automated segmentation, characterization, and classification of diagnostic image content. Such methods cover computer-assisted detection (CADe) [ 61 ] of lesions (such as polyps) and computer-assisted diagnostics (CADx) [ 62 ], where already detected and delineated regions are characterized and classified into, for instance, benign or malignant tissue areas. Furthermore, such methods can automatically identify and track surgical instruments, e.g. supporting robotic surgery approaches.

On the technical side, the semantics of the extracted image content increases from pure image recording up to the image content analysis level. This complexity also relates to the expected time needed to bring these methods from science to clinical application.

From the clinical side, the most complex methods, such as automated polyp detection (CADe), are considered the most important. However, it is expected that computer-integrated endoscopy systems will increasingly enter clinical applications and as such will contribute to the quality of the patient's healthcare.

3.6. Virtual Reality and Robotics

Virtual reality (VR) and robotics are two rapidly expanding fields with growing application in surgery. VR creates three-dimensional environments that increase the capability for sensory immersion, which provides the sensation of being present in the virtual space. Applications of VR include surgical planning, case rehearsal, and case playback, which could change the paradigm of surgical training; this is especially necessary as the regulations surrounding residencies continue to change [ 63 ]. Surgeons are enabled to practice in controlled situations with preset variables to gain experience in a wide variety of surgical scenarios [ 64 ].

With the availability of inexpensive computational power and the need for cost-effective solutions in healthcare, medical technology products are being commercialized at an increasingly rapid pace. VR is already incorporated into several emerging products for medical education, radiology, surgical planning and procedures, physical rehabilitation, disability solutions, and mental health [ 65 ]. For example, VR is helping surgeons learn invasive techniques before operating, and allowing physicians to conduct real-time remote diagnosis and treatment. Other applications of VR include the modeling of molecular structures in three dimensions as well as aiding in genetic mapping and drug synthesis.

In addition, the contribution of robotics has accelerated the replacement of many open surgical treatments with more efficient minimally invasive surgical techniques using 3D visualization techniques. Robotics provides mechanical assistance with surgical tasks, contributing greater precision and accuracy and allowing automation. Robots contain features that can augment surgical performance, for instance, by steadying a surgeon’s hand or scaling the surgeon’s hand motions [ 66 ]. Current robots work in tandem with human operators to combine the advantages of human thinking with the capabilities of robots to provide data, to optimize localization on a moving subject, to operate in difficult positions, or to perform without muscle fatigue. Surgical robots require spatial orientation between the robotic manipulators and the human operator, which can be provided by VR environments that re-create the surgical space. This enables surgeons to perform with the advantage of mechanical assistance but without being alienated from the sights, sounds, and touch of surgery [ 67 ].

After many years of research and development, Japanese scientists recently presented an autonomous robot that is able to perform surgery within the human body [ 68 ]. A miniature robot is sent inside the patient's body, and the surgeon perceives what the robot sees and touches before conducting surgery with the robot's minute arms as though they were the surgeon's own.

While the possibilities – and the need – for medical VR and robotics are immense, approaches and solutions using new applications require diligent, cooperative efforts among technology developers, medical practitioners and medical consumers to establish where future requirements and demand will lie. Augmented and virtual reality substituting or enhancing the reality can be considered as multi-reality approaches [ 69 ], which are already available in commercial products for clinical applications.

4.  DISCUSSION

In this paper, we have analyzed the written proceedings of the German annual meeting on Medical Imaging (BVM) and presented personal viewpoints on medical image processing, focusing on the transfer from science to application. Reflecting on successful clinical applications and promising technologies that have recently been developed, it turns out that medical image computing has moved from single-image to multi-image processing, and there are several ways to combine these images:

  • Multi-modality : Figs. 2 and 3 have emphasized that medical image processing has moved away from the simple 2D radiograph, via 3D imaging modalities, to multi-modal processing and analysis. Successful applications that are transferable into the clinic jointly process imagery from different modalities.
  • Multi-resolution : Here, images with different properties from the same subject and body area need alignment and comparison. Usually, this implies a multi-resolution approach, since different modalities work on different scales of resolutions.
  • Multi-scale : If data becomes large, as pointed out for digital pathology, algorithms must operate on different scales, iteratively refining the alignment from coarse to fine. Such an algorithmic design is usually referred to as a multi-scale approach (a minimal coarse-to-fine sketch follows this list).
  • Multi-subject : Models have been identified as key issue for implementing applicable image computing. Such models are used for segmentation, content understanding, and intervention planning. They are generated from a reliable set of references, usually based on several subjects.
  • Multi-atlas : Even more complex, the personal viewpoints have identified multi-atlas approaches that are nowadays addressed in research. For instance, in segmentation, the accuracy and robustness of algorithms are improved if they are based on multiple atlases rather than a single atlas. Both accuracy and robustness are essential requirements for transferring algorithms into clinical use.
  • Multi-semantics : Based on the example of digital endoscopy, another “multi” term is introduced. Image understanding and interpretation has been defined on several levels of semantics, and successful applications in computer-integrated endoscopy are operating on several of such levels.
  • Multi-reality : Finally, our last viewpoint has addressed the augmentation of the physician’s view by means of virtual reality. Medical image computing is applied to generate and superimpose such views, which results in a multi-reality world.
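Referring back to the multi-scale item above, the following hedged OpenCV sketch shows the coarse-to-fine principle for a simple translational alignment: Gaussian pyramids of two (placeholder) images are built, and the shift estimated at the coarsest level is propagated and refined at each finer level.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels=3):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr[::-1]                                  # coarsest level first

fixed  = cv2.imread("fixed.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)   # placeholders
moving = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

shift = np.zeros(2, dtype=np.float32)
for f_lvl, m_lvl in zip(gaussian_pyramid(fixed), gaussian_pyramid(moving)):
    shift *= 2                                        # propagate the estimate to the finer scale
    warp = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
    warped = cv2.warpAffine(m_lvl, warp, (m_lvl.shape[1], m_lvl.shape[0]))
    (dx, dy), _ = cv2.phaseCorrelate(f_lvl, warped)   # residual translation at this level
    shift += np.float32([dx, dy])

print("Estimated translation (x, y):", shift)
```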

Andriole, Barish, and Khorasani have also discussed issues to consider for advanced image processing in the clinical arena [ 70 ]. Completing the collection of “multi” issues, they emphasized that radiology practices are experiencing a tremendous increase in the number of images associated with each imaging study, due to multi-slice, multi-plane, and/or multi-detector 3D imaging equipment. Computer-aided detection, used as a second reader or as a first-pass screener, will help maintain or perhaps improve readers' performance on such big data in terms of sensitivity and specificity.

Last but not least, with all these “multi” concepts, the computational load of algorithms again becomes an issue. Modern computers provide enormous computational power and invite the revisiting and application of several “old” approaches that have not yet found their way into clinical use simply because of their processing times. However, when many images of large size are combined, processing time becomes crucial again. Scholl et al. have recently addressed this issue, reviewing applications based on parallel processing and the use of graphics processors for image analysis [ 12 ]. These can be seen as multi-processing methods.

In summary, medical image processing is a progressive field of research, and more and more applications are becoming part of clinical practice. These applications are based on one or more of the “multi” concepts that we have addressed in this review. However, the effects of current trends in the Medical Device Directives, which increase the effort needed for clinical trials of new medical imaging procedures, cannot yet be observed. It will hence be interesting to follow how the scientific results of future BVM workshops translate into clinical applications.

ACKNOWLEDGEMENTS

We would like to thank Hans-Peter Meinzer, Co-Chair of the German BVM, for his helpful suggestions and for encouraging his research fellows to contribute and hence, giving this paper a “ multi-generation ” view.

CONFLICT OF INTEREST

The author(s) confirm that this article content has no conflict of interest.


Artificial Intelligence Image Processing Based on Wireless Sensor Networks Application in Lake Environmental Landscape

  • Published: 28 August 2024


  • Junnan Lv 1 &
  • Sun Yao 1  

With the rapid development of Internet of Things (IoT) technology, wireless sensor networks are increasingly used in environmental monitoring and management. In the protection and restoration of lake ecosystems, real-time monitoring of water quality, water temperature, and other environmental factors becomes particularly important. The purpose of this study is to explore the application of artificial intelligence image processing technology based on wireless sensor networks in lake environmental landscape monitoring, in order to improve monitoring efficiency and strengthen environmental protection measures. A network of wireless sensor nodes was constructed to collect data on lake water quality and environment in real time. At the same time, image processing algorithms and deep learning models are combined to analyze lake images in order to identify and evaluate the ecological state, and mobile devices are used for remote access and analysis of the data. In comparative experiments, the wireless-sensor-network-based data collection method significantly improved the accuracy and timeliness of the data compared with traditional water quality monitoring methods. The image processing results show that trends in the lake's ecological environment can be quickly identified and that changes in multiple environmental indicators can be successfully predicted. Therefore, artificial intelligence image processing technology based on wireless sensor networks has broad application prospects in lake environmental landscape monitoring.
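Purely as an illustration of the kind of sensor-image fusion the abstract describes (and not the authors' actual system), the sketch below combines a hypothetical turbidity reading from a sensor node with the score of a tiny, untrained PyTorch CNN applied to a stand-in camera frame; every model, threshold, and value here is an assumption.

```python
import torch
import torch.nn as nn

class TinyLakeNet(nn.Module):
    """Minimal placeholder CNN producing an 'algae likelihood' score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = TinyLakeNet().eval()
frame = torch.rand(1, 3, 224, 224)     # stand-in for a camera image of the lake surface
turbidity_ntu = 18.5                   # stand-in reading from a wireless sensor node

with torch.no_grad():
    algae_score = model(frame).item()  # untrained network, value is illustrative only

if algae_score > 0.5 and turbidity_ntu > 15.0:
    print("Combined anomaly flagged: schedule on-site sampling")
else:
    print("No combined anomaly detected")
```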


Data Availability

No datasets were generated or analysed during the current study.


Funding: The authors have not disclosed any funding.

Author information

Authors and Affiliations

College of Arts, Sichuan University, Chengdu, 610000, Sichuan, China

Junnan Lv & Sun Yao


Contributions

The first version was written by Junnan Lv. Sun Yao has done the simulation. All authors have contributed to the paper’s analysis, discussion, writing, and revision.

Corresponding author

Correspondence to Sun Yao .

Ethics declarations

Ethical Approval

Not applicable.

Conflict of Interest

The authors declare that they have no competing interests.


About this article

Lv, J., Yao, S. Artificial Intelligence Image Processing Based on Wireless Sensor Networks Application in Lake Environmental Landscape. Mobile Netw Appl (2024). https://doi.org/10.1007/s11036-024-02413-w


Accepted: 22 August 2024

Published: 28 August 2024

DOI: https://doi.org/10.1007/s11036-024-02413-w


Keywords

  • Wireless Sensor Network
  • Image Processing
  • Lake Environment
  • Landscape Design




Soil Organic Carbon Estimation via Remote Sensing and Machine Learning Techniques: Global Topic Modeling and Research Trend Exploration


1. Introduction
2. Materials and Methods
2.1. Literature Retrieval Methods
2.2. BERTopic Modeling Methods
2.2.1. Data Preprocessing
2.2.2. Topic Selection, Keyword Analysis, and Model Accuracy
2.2.3. Visualization and Additional Analysis Methods
2.3. Other Visualization Methods
3.1. Basic Publication Statistics
3.2. Research Trends and Landscape of SOC Estimation Using RS Techniques
3.2.1. The Utilization Frequency of Different RS Platforms
3.2.2. BERTopic Modeling Analysis for Studies of SOC Estimation Using RS
3.3. Research Landscape of SOC Estimation via RS and ML Techniques
3.3.1. Author Keyword Co-Occurrence Analysis
3.3.2. Dynamics and Characterization of Author Keywords and Keywords Plus over Time
3.3.3. BERTopic Modeling
4. Discussion
4.1. BERTopic Modeling Clusters for SOC with RS Techniques
4.2. Relationship between SOC Estimation and ML and RS Techniques
4.2.1. Keyword Clustering and Dynamics in SOC Research
4.2.2. In-Depth Analysis of BERTopic Modeling Clusters
4.3. Challenges in the Integration of RS and ML Techniques for SOC Estimation
4.4. Limitations
5. Conclusions
Author Contributions
Data Availability Statement
Conflicts of Interest



The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, T.; Cui, L.; Wu, Y.; McLaren, T.I.; Xia, A.; Pandey, R.; Liu, H.; Wang, W.; Xu, Z.; Song, X.; et al. Soil Organic Carbon Estimation via Remote Sensing and Machine Learning Techniques: Global Topic Modeling and Research Trend Exploration. Remote Sens. 2024, 16, 3168. https://doi.org/10.3390/rs16173168



COMMENTS

  1. Image processing

    Image processing is the manipulation of an image that has been digitised and uploaded into a computer. Software programs modify the image to make it more useful, and can for example be used to enable ...

  2. 471383 PDFs

    All kinds of image processing approaches. | Explore the latest full-text research PDFs, articles, conference papers, preprints and more on IMAGE PROCESSING. Find methods information, sources ...

  3. Image Processing: Research Opportunities and Challenges

    Image Processing: Research Opportunities and Challenges. Ravindra S. Hegadi. Department of Computer Science. Karnatak University, Dharwad-580003. ravindrahegadi@rediffmail. Abstract. Interest in ...

  4. 267349 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on DIGITAL IMAGE PROCESSING. Find methods information, sources, references or conduct a literature ...

  5. Techniques and Applications of Image and Signal Processing: A

    This paper comprehensively overviews image and signal processing, including their fundamentals, advanced techniques, and applications. Image processing involves analyzing and manipulating digital images, while signal processing focuses on analyzing and interpreting signals in various domains. The fundamentals encompass digital signal representation, Fourier analysis, wavelet transforms ...

  6. Image processing

    High-throughput image processing software for the study of nuclear architecture and gene expression. Adib Keikhosravi, Faisal Almansour & Gianluca Pegoraro. Article. 07 August 2024 | Open Access.

  7. J. Imaging

    When we consider the volume of research developed, there is a clear increase in published research papers targeting image processing and DL over the last decades. ... In the topic of image processing, some pertinent studies were found, especially using DRL [31,47,57,121]. Many novel applications continue to be proposed by researchers.

  8. Digital Image Processing: Advanced Technologies and Applications

    Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI. Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good ...

  9. Advances in image processing using machine learning techniques

    With the recent advances in digital technology, there is an eminent integration of ML and image processing to help resolve complex problems. In this special issue, we received six interesting papers covering the following topics: image prediction, image segmentation, clustering, compressed sensing, variational learning, and dynamic light coding.

  10. Deep Learning-based Image Text Processing Research

    Deep learning is a powerful multi-layer architecture that has important applications in image processing and text classification. This paper first introduces the development of deep learning and two important algorithms of deep learning: convolutional neural networks and recurrent neural networks. The paper then introduces three applications of deep learning for image recognition, image ...

  11. Top 1287 papers published in the topic of Image processing in 2023

    31 Jan 2023 - Practice Periodical on Structural Design and Construction. TL;DR: In this paper, the authors describe two field studies that used digital image processing to measure the width of cracks in concrete structures and demonstrate that image-based measurements are comparable to microscope measurements.

  12. Frontiers

    Editorial on the Research Topic Current Trends in Image Processing and Pattern Recognition. Technological advancements in computing have opened up multiple opportunities in a wide variety of fields that range from document analysis (Santosh, 2018), biomedical and healthcare informatics (Santosh et al., 2019; Santosh et al., 2021; Santosh and Gaur, 2021 ...

  13. digital image processing Latest Research Papers

    Abstract: Digital image processing technologies are used to extract and evaluate the cracks of heritage rock in this paper. Firstly, the image needs to go through a series of image preprocessing operations such as graying, enhancement, filtering and binarization to filter out a large part of the noise (a minimal code sketch of this preprocessing chain appears after this list). Then, in order to achieve the requirements ...

  14. Frontiers

    Introduction. The field of image processing has been the subject of intensive research and development activities for several decades. This broad area encompasses topics such as image/video processing, image/video analysis, image/video communications, image/video sensing, modeling and representation, computational imaging, electronic imaging, information forensics and security, 3D imaging ...

  15. Recent Trends in Image Processing and Pattern Recognition

    The 5th International Conference on Recent Trends in Image Processing and Pattern Recognition (RTIP2R) aims to attract current and/or advanced research on image processing, pattern recognition, computer vision, and machine learning. The RTIP2R will take place at the Texas A&M University-Kingsville, Texas (USA), on November 22-23, 2022, in ...

  16. Home

    The journal is dedicated to the real-time aspects of image and video processing, bridging the gap between theory and practice. Covers real-time image processing systems and algorithms for various applications. Presents practical and real-time architectures for image processing systems. Provides tools, simulation and modeling for real-time image ...

  17. Image Processing Technology Based on Machine Learning

    Machine learning is a relatively new field, and as research in it deepens, its applications are becoming increasingly extensive. At the same time, with the advancement of science and technology, graphics have become an indispensable medium of information transmission, and image processing technology is booming. However, the traditional image processing ...

  18. Developments in Image Processing Using Deep Learning and Reinforcement

    When we consider the volume of research developed, there is a clear increase in published research papers targeting image processing and DL, over the last decades. A search using the terms "image processing deep learning" in Springerlink generated results demonstrating an increase from 1309 articles in 2005 to 30,905 articles in 2022, only ...

  19. Top 10 Digital Image Processing Project Topics

    Radar Image Processing for Remote Sensing Applications; Adaptive Clustering Algorithms for Image processing; Moreover, if you want to know more about Digital Image Processing Project Topics for your research, then communicate with our team. We will give detailed information on current trends, future developments, and real-time challenges in the ...

  20. Research Topics

    Research Topics. Biomedical Imaging. The current plethora of imaging technologies such as magnetic resonance imaging (MR), computed tomography (CT), positron emission tomography (PET), optical coherence tomography (OCT), and ultrasound provide great insight into the different anatomical and functional processes of the human body. Computer Vision.

  21. Artificial Intelligence (AI) for Image Processing

    This Special Issue presents a forum for the publication of articles describing the use of classical and modern artificial intelligence methods in image processing applications. The main aim of this Special Issue is to capture recent contributions of high-quality papers focusing on advanced image processing and analysis applications, including ...

  22. Viewpoints on Medical Image Processing: From Science to Application

    Abstract. Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their ...

  23. Digital Image Processing

    In this paper we give a tutorial overview of the field of digital image processing. Following a brief discussion of some basic concepts in this area, image processing algorithms are presented with emphasis on fundamental techniques which are broadly applicable to a number of applications. In addition to several real-world examples of such techniques, we also discuss the applicability of ...

  24. Artificial Intelligence Image Processing Based on Wireless ...

    A network of wireless sensor nodes was constructed to collect data on lake water quality and environment in real time. At the same time, the image processing algorithm and deep learning model are combined to analyze the lake image to identify and evaluate the ecological state. Mobile devices are used for remote access and analysis of data.

  25. Remote Sensing

    BERTopic, a topic modeling technique based on BERT (bidirectional encoder representations from transformers), integrates recent advances in natural language processing. The research analyzed 1761 papers on SOC and remote sensing (RS), in addition to 490 related papers on machine learning (ML) techniques. A minimal, illustrative sketch of such a topic-modeling pipeline follows this list.
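The last item above summarizes how BERTopic combines transformer sentence embeddings, UMAP dimensionality reduction, HDBSCAN clustering, and class-based TF-IDF to group paper abstracts into topics. The following Python sketch is only an illustration of that call pattern using the open-source bertopic, sentence-transformers, umap-learn, hdbscan, and scikit-learn packages; the 20-newsgroups corpus, the all-MiniLM-L6-v2 model, and all parameter values are placeholder assumptions, not the data or settings of the cited study.

```python
# Hedged sketch of a BERTopic-style topic-modeling pipeline.
# Assumptions: bertopic, sentence-transformers, umap-learn, hdbscan and
# scikit-learn are installed; the corpus below is a stand-in, not the
# SOC/remote-sensing abstracts analyzed in the cited study.
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN
from bertopic import BERTopic

# Placeholder corpus; a bibliometric study would instead load the titles and
# abstracts retrieved from a literature database.
docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")    # sentence embeddings
umap_model = UMAP(n_neighbors=15, n_components=5,
                  metric="cosine", random_state=42)          # dimensionality reduction
hdbscan_model = HDBSCAN(min_cluster_size=20, metric="euclidean",
                        prediction_data=True)                # density-based clustering

topic_model = BERTopic(embedding_model=embedding_model,
                       umap_model=umap_model,
                       hdbscan_model=hdbscan_model)

topics, probs = topic_model.fit_transform(docs)    # assign each document to a topic
print(topic_model.get_topic_info().head())         # top c-TF-IDF keywords per topic
```

Item 13 above describes a classical crack-extraction preprocessing chain (graying, enhancement, filtering, binarization). The sketch below reproduces that generic chain with OpenCV as an illustration only; the file name rock.jpg, the median-filter kernel size, and the choice of histogram equalization and Otsu thresholding are assumptions for the example, not details taken from the cited paper.

```python
# Hedged sketch of a graying -> enhancement -> filtering -> binarization chain.
import cv2

img = cv2.imread("rock.jpg")                      # placeholder input image (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # graying
enhanced = cv2.equalizeHist(gray)                 # contrast enhancement
denoised = cv2.medianBlur(enhanced, 5)            # filtering to suppress noise
# Otsu's method selects the binarization threshold automatically.
_, binary = cv2.threshold(denoised, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("rock_binary.png", binary)            # binary map from which crack width could be measured
```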