MENDEL https://mendel-journal.org/index.php/mendel <p>MENDEL Soft Computing Journal is an Open Access international journal<br>dedicated to the rapid publication of high-quality, peer-reviewed research<br>articles in fields including Evolutionary Computation, Genetic<br>Programming, Swarm Intelligence, Neural Networks, Deep Learning, Fuzzy<br>Logic, Big Data, Chaos, Bayesian Methods, Optimization, Intelligent<br>Image Processing, and Bio-inspired Robotics.<br><br>The journal is fully open access, meaning that all articles are<br>available on the internet to all users immediately upon publication<br>(Gold Open Access). The journal has print and electronic versions, and<br>issues are published semi-annually, in June and December. Special issues<br>may also be published at the discretion of the Editorial Board.<br><br>The journal is published under the auspices of the Institute of Automation and<br>Computer Science of the Brno University of Technology.</p> Institute of Automation and Computer Science, Brno University of Technology en-US MENDEL 1803-3814 <p>MENDEL open access articles are normally published under a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license: <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" target="_blank" rel="noopener">https://creativecommons.org/licenses/by-nc-sa/4.0/</a>. Under the CC BY-NC-SA 4.0 license, third-party reuse is permitted only for non-commercial purposes. Articles posted under the CC BY-NC-SA 4.0 license allow users to share, copy, and redistribute the material in any medium or format, and to adapt, remix, transform, and build upon the material for non-commercial purposes.
Reuse under the CC BY-NC-SA 4.0 license requires that appropriate attribution to the source material be included, along with a link to the license and an indication of any changes made to the original material.</p> Quick Hidden Layer Size Tuning in ELM for Classification Problems https://mendel-journal.org/index.php/mendel/article/view/299 <p>The extreme learning machine is a fast neural network with outstanding performance. However, selecting an appropriate number of hidden nodes is time-consuming, because training must be repeated for several candidate sizes, which is undesirable for real-time response. We propose to use moving average, exponential moving average, and divide-and-conquer strategies to reduce the number of training runs required to select this size. Compared with the original, constrained, mixed, sum, and random sum extreme learning machines, the proposed methods achieve time reductions of up to 98% with equal or better generalization ability.</p> Audi Albtoush Manuel Fernandez-Delgado Haitham Maarouf Asmaa Jameel Al Nawaiseh ##submission.copyrightStatement## http://creativecommons.org/licenses/by-nc-sa/4.0 2024-07-15 2024-07-15 30 1 1 14 10.13164/mendel.2024.1.001 Speckle Noise Suppression in Digital Images Utilizing Deep Refinement Network https://mendel-journal.org/index.php/mendel/article/view/301 <p>This paper proposes a deep learning model for speckle noise suppression in digital images. The model consists of two interconnected networks: the first performs initial suppression of speckle noise, and the second refines the resulting features, capturing more complex patterns and preserving the texture details of the input images. The performance of the proposed model is evaluated with different backbones for the two networks: ResNet-18, ResNet-50, and SENet-154. Experimental results on two datasets, Boss steganography and COVIDx CXR-3, demonstrate that the proposed method yields competitive despeckling results.
The proposed model with the SENet-154 encoder achieves PSNR and SNR values above 37 dB on both datasets and outperforms other state-of-the-art methods (Pixel2Pixel, DiscoGAN, and BicycleGAN).</p> Mohamed AbdelNasser Ehab Alaa Saleh Mostafa I. Soliman ##submission.copyrightStatement## http://creativecommons.org/licenses/by-nc-sa/4.0 2024-07-15 2024-07-15 30 1 15 22 10.13164/mendel.2024.1.015 Formation Tracking With Size Scaling of Double Integrator Agents https://mendel-journal.org/index.php/mendel/article/view/303 <p>This paper considers the problem of distributed formation scaling of Multi-Agent Systems (MASs) under a switching directed graph, where the scaling of the formation is determined by one leader agent. The paper uses a directed sensing graph, in which neighboring agents exchange their relative displacements, and a directed communication graph, in which neighboring agents exchange information about the formation scaling and velocity factors. One leader agent, which decides the formation scaling factor as well as the velocity of the group, is chosen among the agents. It is shown that under a switching directed graph, the group of agents achieves the desired formation pattern with the desired scaling factor and the desired group velocity if the union of the sensing and communication graphs contains a directed spanning tree.</p> Djati Wibowo Djamari Asif Awaludin Halimurrahman Halimurrahman Rommy Hartono Patria Rachman Hakim Adi Wirawan Haikal Satrya Utama Tiara Kusuma Dewi ##submission.copyrightStatement## http://creativecommons.org/licenses/by-nc-sa/4.0 2024-07-15 2024-07-15 30 1 23 32 10.13164/mendel.2024.1.023 A Non-hydrostatic Model for Simulating Dam-Break Flow Through Various Obstacles https://mendel-journal.org/index.php/mendel/article/view/305 <p>In this paper, we develop a mathematical model for simulating dam-break flow through various obstacles.
The model used here is an extension of the one-layer non-hydrostatic (NH-1L) model that accounts for varying channel width (Saint-Venant). The capability of the proposed scheme to simulate free-surface waves generated by dam-break flow through various obstacles is demonstrated by performing two types of simulation: flow over a bottom obstacle and flow through a channel wall contraction. It is shown that our numerical scheme produces the correct surface wave profile, comparable with existing experimental data. We found that our scheme captures the evolution of a negative wave displacement followed by an oscillating dispersive wave train. These well-captured dispersive phenomena indicate both the appropriate numerical treatment of the dispersive term and the overall performance of our model.</p> Komang Dharmawan Putu Veri Swastika G K Gandhiadi Sri Redjeki Pudjaprasetya ##submission.copyrightStatement## http://creativecommons.org/licenses/by-nc-sa/4.0 2024-07-15 2024-07-15 30 1 33 42 10.13164/mendel.2024.1.033 CLaRA: Cost-effective LLMs Function Calling Based on Vector Database https://mendel-journal.org/index.php/mendel/article/view/404 <p>Since their introduction, function calls have become a widely used feature within the OpenAI API ecosystem. They reliably connect GPT’s capabilities with external tools and APIs, and they quickly found their way into other LLMs. The challenge is the number of tokens consumed per request, since the definition of every function is always sent as input. We propose a simple solution that effectively decreases the number of tokens by sending only the functions corresponding to the user's question. The solution is based on saving the functions<br>to a vector database, where we use the similarity score to pick only the functions that need to be sent.
Our benchmarks show that, compared with the default function call, the default approach consumes on average 210% more prompt tokens and incurs a 244% higher average prompt (input) price than our solution. Our solution is not limited to specific LLMs: it can be integrated with any LLM that supports function calls, making it a versatile tool for reducing token consumption. This means that even cheaper models with a high volume of functions can benefit from our solution.</p> Miloslav Szczypka Lukáš Jochymek Alena Martinková ##submission.copyrightStatement## http://creativecommons.org/licenses/by-nc-sa/4.0 2024-06-30 2024-06-30 30 1 43 50 10.13164/mendel.2024.1.043
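The retrieval step sketched in the CLaRA abstract above — index each function's description once, then send the LLM only the definitions most similar to the user's question — can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the bag-of-words `embed` function and the `FUNCTIONS` registry are hypothetical stand-ins for a real embedding model and vector database.

```python
from collections import Counter
import math


def embed(text):
    """Toy bag-of-words vector; a real system would call an embedding model."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical function registry: name -> natural-language description.
FUNCTIONS = {
    "get_weather": "get the current weather forecast for a city",
    "send_email": "send an email message to a recipient",
    "create_invoice": "create a billing invoice for a customer",
}

# Index the descriptions once, as one would into a vector database.
INDEX = {name: embed(desc) for name, desc in FUNCTIONS.items()}


def select_functions(question, top_k=1):
    """Return only the top_k most similar function names to include in the prompt."""
    q = embed(question)
    ranked = sorted(INDEX, key=lambda name: cosine(q, INDEX[name]), reverse=True)
    return ranked[:top_k]
```

With a registry of many functions, only the `top_k` selected definitions are serialized into the request, which is what drives the token savings the abstract reports.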