Abstract
Image segmentation is a key computer vision technique used to divide images into semantically meaningful regions. However, noise and artifacts often introduce anomalies that degrade the accuracy of traditional segmentation algorithms. To address this, we propose an adaptive Petri net token flow (APNTF) method that models segmentation as a concurrent, data-driven process using a formally defined Petri net. In this framework, each place represents an intensity bin, and transitions are generated based on a local entropy criterion, connecting neighboring bins only when additional texture refinement is needed. The process begins with an initial marking that assigns each pixel one token in its corresponding intensity bin. Tokens then propagate through enabled transitions, yielding a context-aware region-growing mechanism. A region-merging stage follows, combining bins with similar mean intensities to reduce over-segmentation. The Petri net formulation guarantees termination and reproducibility and facilitates parallel execution. We evaluated the APNTF approach on a variety of images, including challenging medical images, using standard quantitative metrics. Experimental results show that the method improves segmentation accuracy and robustness in the presence of noise and artifacts. This approach provides a structured and adaptable solution for challenging image segmentation tasks involving complex visual data.
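The token-flow pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the bin count, the entropy threshold used to enable transitions, and the merge tolerance are all assumed parameters, and the local entropy criterion is simplified here to the entropy of each neighboring bin pair's token distribution.

```python
import numpy as np

def bin_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def apntf_sketch(img, n_bins=16, entropy_thresh=0.8, merge_tol=16.0):
    """Illustrative sketch of the APNTF idea: places = intensity bins,
    initial marking = one token per pixel, entropy-gated transitions,
    then merging of regions with similar mean intensities.
    All thresholds are assumed, not taken from the paper."""
    img = np.asarray(img, dtype=float)

    # Initial marking: each pixel deposits one token in its intensity bin (place).
    bin_of = np.clip((img / 256.0 * n_bins).astype(int), 0, n_bins - 1)
    marking = np.bincount(bin_of.ravel(), minlength=n_bins).astype(float)

    # Transitions: link neighboring bins only where the token distribution
    # over the pair is high-entropy (simplified texture-refinement criterion).
    enabled = []
    for b in range(n_bins - 1):
        pair = marking[b:b + 2]
        if pair.sum() > 0 and bin_entropy(pair / pair.sum()) > entropy_thresh:
            enabled.append((b, b + 1))

    # Fire enabled transitions: fuse the linked bins' labels (region growing).
    label = np.arange(n_bins)
    for a, b in enabled:
        label[label == label[b]] = label[a]

    # Region merging: fuse regions whose mean intensities lie within merge_tol.
    means = {}
    for l in np.unique(label):
        mask = np.isin(bin_of, np.where(label == l)[0])
        if mask.any():
            means[l] = img[mask].mean()
    labs = sorted(means, key=means.get)
    for a, b in zip(labs, labs[1:]):
        if abs(means[b] - means[a]) < merge_tol:
            label[label == label[b]] = label[a]

    # Map each pixel's bin to its final region label.
    return label[bin_of]
```

On a two-tone test image the sketch returns one label per flat region; richer behavior (concurrent firing, spatial neighborhoods) would require the full Petri net machinery the paper defines.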
| Original language | English |
|---|---|
| Article number | e70255 |
| Pages (from-to) | 1-21 |
| Number of pages | 21 |
| Journal | IET Image Processing |
| Volume | 19 |
| Issue number | 1 |
| Early online date | 5 Dec 2025 |
| DOIs | |
| Publication status | Published - 5 Dec 2025 |