
2024 | Book

Reliability Engineering for Industrial Processes

An Analytics Perspective


About this book

This book explores how transformative changes driven by the new-age economy can bring about improvements in a company's engineering and manufacturing capabilities.

The new-age economy is driven by advanced engineering and manufacturing practices, processes, and technologies, including the Internet of Things (IoT), Cloud Computing, Blockchain, Artificial Intelligence, Robotics, Cyber-Physical Systems (CPS), and Internet-enabled systems to automate industrial processes.

Today's business dynamics are governed by uncertainties, disruptions, complexities, and ambiguities that demand quicker and more intelligent decisions. These changes could usher in a renaissance in a company's engineering and manufacturing capabilities. To withstand these volatile and ever-changing business dynamics, Industry 4.0 and 5.0 have revolutionized how organizations operate and make intelligent business decisions. Moreover, the extensive role of business analytics has overcome the limitations of classical computing through new technologies and intelligent computing methodologies.

Over the past few years, much emphasis has been given to investing in developing hardware and programming frameworks for achieving computational intelligence using fuzzy logic, evolutionary computation, neural networks, probabilistic methods, and learning theory. Within this frame of reference, the reliability, quality, and maintenance of complex industrial and manufacturing systems are essential for organizations to utilize them successfully for informed decisions.

This book focuses on studies that provide new solutions for system reliability, quality, security, and maintainability using quantitative and qualitative research. It emphasizes developments and problems in systems engineering management, systems integration, software and hardware engineering, and the development process.

Table of Contents

Frontmatter
Comparison of OSS Reliability Assessment Methods by Using Wiener Data Preprocessing Based on Deep Learning
Abstract
This chapter compares methods of open source software (OSS) reliability assessment. The fault detection phenomenon depends on the reporter and the severity, because the number of software faults is influenced by the reporter, severity, assignee, component, and other attributes. Software reliability growth models with testing-effort have been proposed in the past. In this chapter, we apply a deep learning approach to OSS fault big data. We then present several reliability assessment measures based on the reporter and severity by using deep learning. Moreover, several numerical illustrations based on the proposed deep learning model and the data preprocessing are given in this chapter.
Yoshinobu Tamura, Shoichiro Miyamoto, Lei Zhou, Shigeru Yamada
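The chapter's Wiener preprocessing and network architecture are not reproduced here; purely as an illustrative sketch, the snippet below fits a small neural network to a toy fault log whose reporter and severity fields are hypothetical stand-ins for the attributes the abstract mentions.

```python
# A minimal, illustrative sketch only: fit a small neural network to a toy
# OSS fault log. Field names (reporter, severity) and values are hypothetical
# and do not reproduce the chapter's data preprocessing or model.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

log = pd.DataFrame({
    "day":      [1, 3, 4, 7, 9, 12],
    "reporter": ["alice", "bob", "alice", "carol", "bob", "alice"],
    "severity": ["major", "minor", "critical", "major", "major", "minor"],
})
log["cum_faults"] = np.arange(1, len(log) + 1)        # cumulative detected faults

X = pd.get_dummies(log[["day", "reporter", "severity"]]).astype(float)
y = log["cum_faults"]

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X).round(2))                       # fitted cumulative fault curve
```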
Reliability and Sensitivity Analysis of a Wastewater Treatment Plant Operating with Two Blowers as a Single System
Abstract
Utilizing wastewater for irrigation is a pressing necessity in regions grappling with water scarcity. However, before wastewater can be used, it must undergo treatment due to its contamination. Wastewater treatment plants are pivotal in purifying wastewater and rendering it suitable for irrigation. Within these treatment plants, blowers play a crucial role, serving as vital assets. The failure of blowers can result in substantial repair costs. Consequently, it is imperative to conduct a thorough reliability and sensitivity analysis to assess the performance of blowers in wastewater treatment plants. This paper focuses on the reliability and sensitivity analysis of a wastewater treatment facility equipped with two blowers. To support this research, actual failure data from the plant have been collected. When a blower fails, it undergoes an inspection to determine the nature of the failure, which can fall into three categories: instrumental, mechanical, or electrical. The reliability model is constructed by incorporating real-world situations derived from the collected data. This modeling process employs Markov and regenerative processes to estimate key plant performance metrics, including availability, the expected frequency of inspections and repairs, the anticipated busy time for the repairman, and the profit generated. Furthermore, the analysis determines the profit threshold. To understand the influence of various parameters on reliability outcomes, a sensitivity analysis has been undertaken.
S. Z. Taj, S. M. Rizwan, Kajal Sachdeva
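The chapter's full regenerative-process model with three failure categories and cost measures is not restated here; the sketch below only shows, for an assumed two-state Markov model of a single blower with constant failure rate λ and repair rate μ, how long-run availability follows from the generator matrix.

```python
# Minimal sketch: steady-state availability of a two-state continuous-time
# Markov chain (state 0 = working, state 1 = under repair). Rates are assumed.
import numpy as np

lam, mu = 0.02, 0.5                     # assumed failure and repair rates (per hour)
Q = np.array([[-lam, lam],
              [  mu, -mu]])             # generator matrix

# Steady-state probabilities solve pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("long-run availability:", pi[0])  # equals mu / (lam + mu)
```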
A Review on Cardiovascular Disease/Heart Disease by Machine Learning Prediction
Abstract
Cardiovascular diseases, commonly referred to as heart diseases, constitute a wide-ranging group of ailments impacting the heart. In our rapidly advancing technological era, machine learning plays a pivotal role in disease prediction. Among the plethora of health issues, heart and cardiovascular diseases stand out with elevated mortality rates. Machine learning techniques prove instrumental in anticipating these conditions, offering clinicians invaluable insights for early diagnosis and treatment. This review explores the application of various machine learning algorithms such as Naive Bayes, Random Forest, Logistic Regression, K-Nearest Neighbors, and Decision Trees in forecasting cardiac diseases. These algorithms not only predict but also categorize individuals with heart diseases, enhancing the accuracy of early detection. The integration of machine learning into healthcare demonstrates its potential to revolutionize predictive medicine, providing a proactive approach to managing cardiovascular illnesses. The proactive identification of cardiac diseases through these techniques empowers healthcare professionals with timely information, ultimately improving patient outcomes.
K. Swathi, G. K. Kamalam
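To make the comparison the review describes concrete, here is a minimal sketch that cross-validates the five algorithms named in the abstract on synthetic tabular data; the feature set and data source are placeholders, not a clinical dataset.

```python
# Minimal sketch: compare the classifiers named in the review on synthetic
# stand-in data (13 features, loosely mirroring common heart-disease tables).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:20s} mean CV accuracy = {acc:.3f}")
```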
A Role of Network Data Envelopment Analysis Approach in Manufacturing Industry: Review of Last 5 years
Abstract
In recent years, the manufacturing industry has been recognized as a key driver of a nation's economic growth and development. It generates significant value-added output, creates employment opportunities, and spurs technological advancements. As a result, performance evaluation of manufacturing firms has become a crucial task for assessing their efficiency, identifying improvement areas, and sustaining growth in a complex network environment. This study explores the application of Network Data Envelopment Analysis (NDEA) models in the manufacturing industry to assess the efficiency of interconnected manufacturing units. By considering the complex relationships and interdependencies among the various entities within the manufacturing process, these models offer a comprehensive approach to evaluating the efficiency of a decision-making unit (DMU), here a manufacturing firm. Finally, the study shows that NDEA models provide valuable insights to decision-makers by identifying areas for improvement and suggesting strategies to enhance the efficiency of the system.
Atul Kumar, Millie Pant
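Network DEA links several production stages; as background, the sketch below solves only the single-stage building block, an input-oriented CCR efficiency score obtained by linear programming, on an assumed toy data set. The network extension surveyed in the chapter is not reproduced.

```python
# Minimal single-stage DEA sketch (input-oriented CCR envelopment model).
# Network DEA chains such stages together; the data here are illustrative.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [5.0, 6.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.5]])                   # outputs, one row per DMU
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(k):
    # decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(n):
    print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
```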
Exploring Software Systems Engineering Through Complexity of Code Changes: A Study Based on Bugs Count, Features Improvement and New Add-Ons
Abstract
While studying the reliability of software systems, the complexity of the code change process must be considered. The reasons behind this complexity in the code can be bug removal, new feature addition, or feature improvement, to name a few. Keeping in mind the need to measure the complexity of code changes, the authors have developed a modeling framework with the assumption that, at any given time point, the complexity of code changes is impacted by at least one of the above-specified attributes. To validate the developed framework, the authors have utilized certain open-source data sets and have demonstrated its applicability using the SPSS software package. The obtained results are in line with the presented modeling framework.
Asha Yadav, Ompal Singh, Adarsh Anand, Raksha Verma, Indarpal Singh
Generating Image Captions in Hindi Based on Encoder-Decoder Based Deep Learning Techniques
Abstract
Image Captioning has experienced significant advancements recently, combining computer vision and natural language processing to create a new field that describes images in words. These approaches utilize an encoder-decoder architecture, where an image is encoded into features by an encoder and those features are decoded into a text sequence by a decoder. Typically, Convolutional Neural Networks (CNNs) are employed as encoders, while Recurrent Neural Networks (RNNs) serve as decoders in these models. Although much of the work in this domain focuses on English, research on Image Captioning models for regional languages is limited. Hindi, being a morphologically rich language and the third most spoken language worldwide, is the focus of this paper. The study conducts a comparative analysis of four state-of-the-art Image Captioning models (ResNet50, InceptionV3, VGG16, and VGG19) specifically applied to the Hindi language. The evaluation of these models’ performance in generating image captions on the widely used Flickr8k dataset employs BLEU, METEOR, and RIBES scores. The results indicate that the InceptionV3 model surpasses the other three models in terms of both BLEU and METEOR scores, making it a valuable reference for researchers operating within this field.
Priya Singh, Farhan Raja, Hariom Sharma
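Model training is not reproduced here; the sketch below only shows how a generated Hindi caption could be scored against a reference caption with BLEU, assuming already-tokenized strings (the METEOR and RIBES metrics used in the chapter require additional tooling and are omitted). The caption text is a placeholder, not taken from Flickr8k.

```python
# Minimal sketch: BLEU-2 score of a generated caption against a reference.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["एक", "कुत्ता", "घास", "में", "दौड़", "रहा", "है"]]   # tokenized reference caption(s)
candidate  = ["एक", "कुत्ता", "घास", "पर", "दौड़", "रहा", "है"]      # tokenized generated caption

smooth = SmoothingFunction().method1
score = sentence_bleu(references, candidate, weights=(0.5, 0.5), smoothing_function=smooth)
print(f"BLEU-2 = {score:.3f}")
```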
Fault Removal Efficiency: A Key Driver in Software Reliability Growth Modeling
Abstract
In the contemporary landscape of software development, the significance of software reliability cannot be overstated. With the escalating complexity and widespread integration of software systems across diverse domains, ensuring their dependability has emerged as a paramount concern. Software reliability growth models (SRGMs) play a crucial role in assessing and improving the reliability of software systems. These models provide a quantitative framework for understanding the evolution of faults and predicting the reliability of software during its development lifecycle, and illuminate the consequential enhancement in overall reliability over time. Central to this exploration is the concept of fault removal efficiency (FRE), quantifying the proportion of bugs eradicated through meticulous reviews, inspections, and testing processes. As a critical determinant of software quality and process management, FRE provides developers with invaluable insights into testing efficacy and aids in predicting additional efforts required. The chapter explores some SRGMs that incorporate FRE, providing readers with a comprehensive insight into how FRE shapes the dynamics of the SRGM.
Umashankar Samal, Ajay Kumar
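As a worked illustration of how FRE can enter an SRGM (the chapter surveys several formulations; the form below is only one simple assumption), a mean value function m(t) = (a/p)(1 − e^{−bpt}) scales the exponential model by a removal efficiency p. Because p is not identifiable from failure counts alone, the sketch treats it as known, for example from inspection records, and fits a and b.

```python
# Minimal sketch: fit an FRE-scaled exponential mean value function to toy
# cumulative failure data. The functional form and data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

P_FRE = 0.9                                    # assumed fault removal efficiency

def mean_value(t, a, b, p=P_FRE):
    return (a / p) * (1.0 - np.exp(-b * p * t))

t = np.arange(1, 13, dtype=float)              # testing weeks
cum_faults = np.array([5, 9, 14, 17, 21, 23, 26, 27, 29, 30, 31, 32], dtype=float)

(a, b), _ = curve_fit(lambda tt, a, b: mean_value(tt, a, b), t, cum_faults, p0=[30.0, 0.2])
print(f"estimated fault content a = {a:.1f}, detection rate b = {b:.3f}")
print("predicted faults removed by week 20:", round(mean_value(20.0, a, b), 1))
```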
Analysis of Progressively Censored Repair Time of Airborne Communication Transceiver with Burr-Hatke Exponential Model
Abstract
In this chapter, the parameter estimation of a Burr-Hatke exponential model based on progressive type-II censored samples is investigated. Various methods of estimation for complete data are generalized to the case of progressively censored samples. These approaches comprise maximum likelihood, least squares, maximum product spacings, and Bayesian estimation. Interval estimates and coverage probabilities for the parameter are derived using maximum likelihood and Bayesian estimation techniques. A Markov chain Monte Carlo algorithm has been employed to obtain the Bayes estimator of the parameter with a gamma prior under the squared error loss function. An extensive comparative analysis of the four methods is made using a Monte Carlo empirical study. The empirical findings are used to formulate certain recommendations, and a real-world data example is presented to illustrate how the developed theory may be applied in practice.
Kartik Waliya, Alka Chaudhary, Abhishek Tyagi
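The Burr-Hatke exponential density itself is not restated here. The sketch below only shows the generic progressively Type-II censored log-likelihood, ℓ(θ) ∝ Σᵢ [log f(xᵢ; θ) + Rᵢ log S(xᵢ; θ)], maximized numerically, with an ordinary exponential density standing in for the chapter's model and illustrative data.

```python
# Minimal sketch: maximum likelihood under progressive Type-II censoring.
# The exponential pdf/survival below is a stand-in; the chapter's Burr-Hatke
# exponential model would replace them. Data are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([0.4, 0.9, 1.3, 2.1, 3.0])    # observed ordered failure times
R = np.array([1, 0, 2, 0, 1])              # units withdrawn at each failure

def neg_log_lik(theta):
    if theta <= 0:
        return np.inf
    log_pdf = np.log(theta) - theta * x             # f(x) = theta * exp(-theta x)
    log_sf = -theta * x                             # S(x) = exp(-theta x)
    return -(np.sum(log_pdf) + np.sum(R * log_sf))  # constant term omitted

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print(f"MLE of theta: {res.x:.3f}")
```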
Bug Prediction Techniques: Analysis and Review
Abstract
Bug prediction is a process in which we attempt to foresee bugs based on historical data about a specific application. It identifies "bug hot spots" within the code base and flags sections of code that, when modified, tend to result in many bugs. Various techniques for predicting bugs have been proposed during the last two decades. Therefore, there is a need to understand the research models that summarise and compare these techniques on different datasets. We present a complete catalogue of all known techniques in this paper. We found many techniques as a result of our study. They also support a variety of datasets, including Eclipse, Mozilla, Gnome, Bugzilla, and others. We categorise the different bug prediction techniques in this study based on their type, availability, model techniques, identified bugs, supported datasets, and main features.
Riya Sen, V. B. Singh
A Review on Kidney Failure Prediction Using Machine Learning Models
Abstract
End-stage renal disease (ESRD), commonly known as kidney failure, is a critical medical condition that has a significant impact on global health. Early detection of kidney failure is crucial in preventing and managing this condition. In recent years, machine learning (ML) models have emerged as promising tools for predicting kidney failure, offering the potential to improve patient outcomes through timely intervention. This comprehensive review provides an overview of the current state of research on kidney failure prediction using various ML models. The review begins by presenting an overview of kidney failure, its prevalence, and the challenges associated with its early detection. It then delves into the role of ML in healthcare and specifically focuses on its application in predicting kidney failure. The discussion encompasses a wide range of ML techniques, including logistic regression, decision trees, support vector machines, and deep learning. The review analyzes key studies and methodologies employed in predicting kidney failure, highlighting the strengths and limitations of different ML approaches. It emphasizes the importance of feature selection, data preprocessing, and model evaluation in enhancing the accuracy and reliability of predictions. Furthermore, it addresses the issue of data imbalance, a common challenge in medical datasets, and explores strategies to mitigate its impact on model performance. In addition to summarizing existing research, the review identifies current gaps in the literature and suggests avenues for future research. This includes the exploration of novel data sources, the integration of multi-modal data, and the development of interpretable models that can assist healthcare professionals in making informed decisions. Overall, this review serves as a valuable resource for researchers, clinicians, and healthcare professionals interested in the application of ML models for kidney failure prediction. By synthesizing the current state of knowledge, it provides insights into the potential of ML models to improve patient outcomes and highlights areas for further research.
B. P. Naveenya, J. Premalatha
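Of the issues the review highlights, class imbalance is the easiest to illustrate in isolation. The sketch below contrasts an unweighted and a class-weighted logistic regression on synthetic, imbalanced data; the features are placeholders, not clinical variables from any of the reviewed studies.

```python
# Minimal sketch: effect of class weighting on an imbalanced binary problem,
# a stand-in for kidney-failure datasets where positive cases are rare.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = LogisticRegression(max_iter=1000, class_weight=cw).fit(X_tr, y_tr)
    rec = recall_score(y_te, clf.predict(X_te))
    print(f"class_weight={cw}: recall on the minority (failure) class = {rec:.2f}")
```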
Machine Learning Based Remaining Useful Life Estimation—Concept and Case Study
Abstract
With advancements in technology and machinery, human dependence on them is increasing. This increased reliance makes maintenance in industrial applications indispensable. Traditional methods, like Reactive Maintenance, fail to detect problems beforehand and can jeopardize resources and/or lives. Proactive Maintenance measures, especially Predictive Maintenance, have gained popularity with the advent of abundant data-handling resources. Remaining Useful Life (RUL) is an integral and principal measure of Predictive Maintenance that gives a fair indication of the usefulness of a component and helps decide when it needs to be replaced or repaired. Accurate prediction/estimation of RUL calls for data-driven methods such as Machine Learning algorithms. We elaborate on relevant predictive maintenance concepts and describe how ML techniques can be effectively applied to predict the remaining useful life of machine components. We also demonstrate a case study using NASA's CMAPSS (Commercial Modular Aero-Propulsion System Simulation) dataset. The case study incorporates the successful implementation of ML algorithms and the subsequent use of Evolutionary Computing techniques, such as Particle Swarm Optimization, for optimization.
Svara Mehta, Ramnath V. Prabhu Bam, Rajesh S. Prabhu Gaonkar
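CMAPSS preprocessing and the PSO-tuned models are not reproduced here. The sketch only shows the core labeling idea, assuming run-to-failure sensor records grouped by unit id: derive an RUL label as cycles-to-failure and fit a regressor to it. Column names mimic the CMAPSS layout, but the data are synthetic.

```python
# Minimal sketch: derive RUL labels from simulated run-to-failure records and
# fit a regressor. Sensor columns are synthetic stand-ins for CMAPSS data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rows = []
for unit in range(1, 21):                        # 20 simulated engines
    life = rng.integers(120, 200)                # total cycles before failure
    for cycle in range(1, life + 1):
        degradation = cycle / life
        rows.append({"unit": unit, "cycle": cycle,
                     "sensor_1": 520 + 30 * degradation + rng.normal(0, 1),
                     "sensor_2": 640 - 15 * degradation + rng.normal(0, 1),
                     "life": life})
df = pd.DataFrame(rows)
df["RUL"] = df["life"] - df["cycle"]             # remaining useful life label

X, y = df[["sensor_1", "sensor_2"]], df["RUL"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE (cycles):", round(mean_absolute_error(y_te, model.predict(X_te)), 1))
```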
Modelling Software Reliability Growth Incorporating Testing Coverage Function and Fault Reduction Factor
Abstract
Computer software has gradually evolved into a necessary component in many sectors of our everyday lives and a crucial component of many systems that require quality software. A number of studies have been undertaken in recent years to develop highly trustworthy software systems. To be more precise, several analytical software reliability models have been proposed for the evaluation of software reliability. Here, we examine reliability growth models that take into account testing coverage and the fault reduction factor, two of the most important environmental factors, and show how incorporating these factors into the models provides a more accurate and comprehensive measure of software reliability during the development phase.
Neha, Abhishek Tandon, Gurjeet Kaur, P. K. Kapur
Software Defect Prediction Using Abstract Syntax Trees Features and Object-Oriented Metrics
Abstract
Bug prediction systems have developed to assist developers in prioritizing testing tasks as software releases become more frequent due to changing requirements. Previous studies used methods such as classifying modules as faulty or not, or performing multi-class classification to predict the number of bugs. Some studies used Object-Oriented (OO) metrics, while others used Abstract Syntax Trees (ASTs) to extract code features for bug prediction. This research treated bug prediction as a regression problem and used deep learning models, such as LSTM and CNN, to solve it. The study compared the results of LSTM and CNN models trained on OO metrics with classical machine learning models and a multilayer perceptron model, and found that their LSTM model performed better in terms of MAE and MRE than three of the classical models. The LSTM and CNN models were also trained on features extracted from file-level ASTs of the source code of projects and compared with the models trained on OO metrics. The CNN model trained on file-level AST features produced MAE results similar to the LSTM model trained on OO metrics, but outperformed it in terms of MRE.
Anushka Sethi, Aseem Sangalay, Ruchika Malhotra
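The chapter extracts file-level AST features from project source code; purely for illustration, the sketch below does the analogous thing for Python source with the standard ast module, turning a file into a fixed-length vector of node-type counts. The node-type list is an arbitrary assumption, not the chapter's feature set.

```python
# Minimal sketch: bag-of-AST-node-types feature vector for a source file.
# Python's standard-library ast module is used here only for convenience.
import ast
from collections import Counter

NODE_TYPES = ["FunctionDef", "ClassDef", "If", "For", "While", "Try",
              "Call", "Assign", "Return"]

def ast_features(source: str) -> list[int]:
    counts = Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))
    return [counts.get(t, 0) for t in NODE_TYPES]

example = """
class Account:
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance
"""
print(dict(zip(NODE_TYPES, ast_features(example))))
```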
A Review of Alzheimer’s Disease Identification by Machine Learning
Abstract
In the pursuit of advancing Alzheimer’s disease identification, this research employs a comprehensive approach that integrates machine learning and deep learning techniques. Support Vector Machines (SVMs) and Decision Trees serve as robust tools, providing transparency and interpretability in the analysis of diverse datasets, including genetic, clinical, and imaging information. These methods contribute to the elucidation of key factors influencing Alzheimer’s, enhancing the understanding of disease-related patterns. Furthermore, Convolutional Neural Networks (CNNs) demonstrate their efficacy in neuroimaging analysis, capturing intricate spatial dependencies crucial for precise diagnosis. The synergy of SVMs, Decision Trees, and CNNs not only improves accuracy in disease detection but also opens avenues for early intervention and targeted treatment strategies. As machine learning and deep learning continue to evolve, the amalgamation of these techniques holds promise in revolutionizing our approach to Alzheimer’s disease, offering insights that may lead to more effective interventions and improved patient outcomes.
R. P. Harshini, R. Thangarajan
Weighted Entropic and Divergence Models in Probability Spaces and Their Solicitations for Influencing an Imprecise Distribution
Abstract
It is well recognized that a wide assortment of parametric and non-parametric information models is already available and tractable; nevertheless, there is a need to develop accompanying new models to extend their applications across a variety of disciplines. On the other hand, theoretical work on the "maximum entropy principle" indicates that it plays a meaningful role in solving numerous optimization problems connected with information-theoretic models comprising entropy and divergence measures. Additionally, the notion of weighted information has proved especially fruitful because of its relevance in goal-oriented experiments. The current paper is a step toward developing two new discrete weighted models in probability spaces and applying this principle to the approximation of an imprecise probability distribution. With the support of these discrete weighted models, the "maximum entropy principle" has been validated.
Om Parkash, Vikramjeet Singh, Retneer Sharma
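As background for the weighted models developed in the chapter, the classical Belis-Guiaşu weighted entropy, H_w(P) = −Σ wᵢ pᵢ log pᵢ, is straightforward to compute; the sketch below evaluates it for a toy distribution. The chapter's new measures are not reproduced.

```python
# Minimal sketch: Belis-Guiasu weighted entropy of a discrete distribution.
# The weights and probabilities are illustrative.
import numpy as np

def weighted_entropy(p, w):
    p, w = np.asarray(p, float), np.asarray(w, float)
    assert np.isclose(p.sum(), 1.0), "probabilities must sum to 1"
    nz = p > 0                                    # convention: 0 * log 0 = 0
    return -np.sum(w[nz] * p[nz] * np.log(p[nz]))

p = [0.5, 0.3, 0.2]           # probability distribution
w = [1.0, 2.0, 0.5]           # utility/importance weights attached to outcomes
print(f"weighted entropy = {weighted_entropy(p, w):.4f}")
```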
Considering Multiplicative Noise in a Software Reliability Growth Model Using Stochastic Differential Equation Approach
Abstract
Software reliability growth models are very useful for investigating software reliability characteristics quantitatively and for establishing a relationship between the remaining faults in the software and testing time. The only way to enhance the quality and reliability of software is to detect and remove faults during the testing phase. Usually, the fault removal process is assumed to be deterministic, but as software systems get bigger and more flaws are found during testing, the number of faults detected and removed during each debugging cycle decreases until it is negligibly small compared with the fault content at the beginning of the testing phase. It is therefore natural to treat the software fault detection process in this scenario as a stochastic process with a continuous state space. In this study, we introduce the concept of multiplicative noise and propose a software reliability growth model under a perfect debugging environment that is governed by stochastic differential equations. The proposed stochastic differential equation based SRGM has been validated on real-life failure data sets, and the results of the goodness-of-fit and comparison criteria exhibit the applicability of the model.
Kuldeep Chaudhary, Vijay Kumar, Deepansha Kumar, Pradeep Kumar
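The chapter's exact equation is not restated; a widely used SDE form with multiplicative noise is dN(t) = b{a − N(t)}dt + σ{a − N(t)}dW(t), and the sketch below simply simulates sample paths of such an equation with the Euler-Maruyama scheme under assumed parameters.

```python
# Minimal sketch: Euler-Maruyama simulation of a fault detection SDE
# dN = b(a - N) dt + sigma (a - N) dW. Parameters are illustrative.
import numpy as np

a, b, sigma = 100.0, 0.1, 0.05          # fault content, detection rate, noise level
T, steps, paths = 60.0, 600, 5
dt = T / steps
rng = np.random.default_rng(1)

N = np.zeros((paths, steps + 1))
for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    drift = b * (a - N[:, k]) * dt
    diffusion = sigma * (a - N[:, k]) * dW
    N[:, k + 1] = np.clip(N[:, k] + drift + diffusion, 0.0, a)

print("simulated cumulative faults at t = T:", N[:, -1].round(1))
print("deterministic limit a(1 - e^{-bT}):", round(a * (1 - np.exp(-b * T)), 1))
```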
An Insight into Code Smell Detection Tool
Abstract
A code smell is not a bug, and it does not prevent a system from operating. It might simply make it more difficult for software engineers to comprehend and maintain project source code, resulting in extra maintenance expenses. Researchers have provided a variety of techniques and tools for extracting code smells throughout the last 20 years. Therefore, there is a need for comprehensive research that summarizes and compares the large range of existing tools. We present a complete catalogue of all known code smell detection tools in this paper. We found 112 tools as a result of our study, 52 of them available for download online. They also support a variety of programming languages including Java, JavaScript, C, C++, C#, Python, and others. We categorize the different code smell detection tools in this study based on their type, availability, detection techniques, identified code smells, supported languages, and main features.
Shrasti Mourya, Piyush Pratap Singh, V. B. Singh
A Study on the Efficiency of Divergence Measure in Fuzzy TOPSIS Algorithm for Multi-attribute Decision Making—A Case Study on University Selection for Admission
Abstract
In our daily lives, individuals face countless choices across different aspects. Often these decisions are made based on a number of factors, some of which are obvious, whereas others are vague and imprecise, which can pose challenges while making decisions. To handle such situations, Multi-Criteria Decision Making (MCDM) techniques have been developed. The aim of this article is to suggest a method to rank, and hence choose, a university for students' admission using the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (FTOPSIS). To achieve this goal, a novel distance measure has been proposed, and some axiomatic properties have been proved for it. The proposed approach aids the process of university selection by ranking the universities based on certain criteria in a fuzzy environment. The results obtained suggest that the proposed model provides an accurate way to select the best university among the large number of choices available. The paper concludes with a discussion of a case study and experimental findings.
Mansi Bhatia, H. D. Arora, Riju Chaudhary, Vijay Kumar
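The chapter's fuzzy extension and its novel distance measure are not reproduced; the sketch below only walks through the crisp TOPSIS backbone (normalize, weight, measure distances to the ideal and anti-ideal solutions, rank by closeness) on an assumed decision matrix of universities and criteria.

```python
# Minimal sketch of crisp TOPSIS; the decision matrix and weights are
# illustrative, and all criteria are treated as benefit-type.
import numpy as np

D = np.array([[7.0, 8.0, 6.0],            # rows = universities, cols = criteria
              [9.0, 6.0, 7.0],
              [8.0, 7.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])             # criteria weights, sum to 1

R = D / np.linalg.norm(D, axis=0)         # vector normalization
V = R * w                                 # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for i, c in enumerate(closeness):
    print(f"University {i + 1}: closeness = {c:.3f}")
print("ranking (best first):", (np.argsort(-closeness) + 1).tolist())
```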
Exploring the Impact of Latent and Obscure Factors on Left-Censored Data: Bayesian Approaches and Case Study
Abstract
In the realm of scientific investigation, traditional survival studies have historically focused on mitigating failures over time. However, when both observed and unobserved variables remain enigmatic, adverse consequences can arise. Frailty models offer a promising approach to understanding the effects of these latent factors. In this scholarly work, we hypothesize that frailty has a lasting impact on the reversed hazard rate. Notably, our research highlights the reliability of generalized Lindley frailty models, rooted in the generalized log-logistic type II distribution, as a robust framework for capturing the widespread influence of inherent variability. To estimate the associated parameters, we employ diverse loss functions such as SELF, MQSELF, and PLF within a Bayesian framework, forming the foundation for Markov Chain Monte Carlo methodology. We subsequently utilize Bayesian assessment strategies to assess the effectiveness of our proposed models. To illustrate their superiority, we employ data from renowned Australian twins as a demonstrative case study, establishing the innovative models’ advantages over those relying on inverse Gaussian and gamma frailty distributions. This study delves into the impact of hidden and obscure factors on left-censored data, utilizing Bayesian methodologies, with a specific emphasis on the application of generalized Lindley frailty models. Our findings contribute to a deeper understanding of survival analysis, particularly when dealing with complex and unobservable covariates.
Pragya Gupta, Arvind Pandey, David D. Hanagal, Shikhar Tyagi
Reliability Perspective of Software Models: An Overview
Abstract
Production engineers and system designers have long been interested in computer-based system reliability and performance measurement due to the wide range of applications emerging in both the military and industrial worlds. Software failures can happen even in the best-quality computer-based systems, owing to a variety of failure mechanisms, and can result in major consequences such as loss of human life and significant economic losses. Numerous models have been worked out for measuring the reliability of software, assuming a wide variety of failure dependencies and compatibility issues. This investigation deals with software reliability and the modelling steps for developing software models. We present several important factors, failure implications, the system reliability computation procedure, and strategies implemented at the software reliability engineering level, and highlight recent developments in software models. Without software fault tolerance, it is practically impossible to build a totally fault-tolerant system. Software fault tolerance is the capacity of software to detect and recover from a fault that is occurring or has already occurred. We explore some fault-tolerant techniques that use protective redundancy at the software level to ensure system reliability. A thorough examination of reliability modelling will be beneficial for both researchers and practitioners studying the reliability assessment of software systems.
Ritu Gupta, Sudeep Kumar, Anu G. Aggarwal
Stress-Strength Modelling for a New Modified Lindley Distribution Under Progressively Censored Data
Abstract
In this chapter, the analysis of the stress-strength reliability of the type \(\Lambda = P(X < Z)\) is considered with progressive Type-II censored data when two independent random variables \(X\) (stress) and \(Z\) (strength) follow a modified form of the Lindley distribution. The average amount of time a component can withstand stress is derived under this setup in the form of the mean remaining strength. Under the classical approach, the maximum likelihood and maximum product spacings estimators for the stress-strength parameter are examined. In addition to classical methods, the Bayes estimator of \(\Lambda\) is derived by taking independent gamma priors with a squared error loss function. In this non-classical approach, the estimation of \(\Lambda\) is carried out using a prevalent Markov chain Monte Carlo approach. A Monte Carlo simulation study is conducted so that the performance of the suggested estimators may be compared. Based on an analysis of a real-world dataset, it is shown how the proposed stress-strength model may be used in actual practice.
Arvind Pandey, Neha Choudhary, Abhishek Tyagi, Ravindra Pratap Singh
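For intuition about the quantity Λ = P(X < Z), the sketch below estimates it by Monte Carlo for two assumed distributions (ordinary exponentials, not the modified Lindley model studied in the chapter) and compares the estimate against the closed form available in that special case.

```python
# Minimal sketch: Monte Carlo estimate of the stress-strength reliability
# P(X < Z), with exponential stress and strength as illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(42)
lam_stress, lam_strength = 2.0, 0.5           # rate parameters (illustrative)
n = 1_000_000

X = rng.exponential(1 / lam_stress, n)        # stress
Z = rng.exponential(1 / lam_strength, n)      # strength
estimate = np.mean(X < Z)

exact = lam_stress / (lam_stress + lam_strength)   # closed form, exponential case
print(f"Monte Carlo P(X < Z) = {estimate:.4f},  exact = {exact:.4f}")
```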
Imperfect Debugging, Testing Coverage, and Compiler Error-Based SRGM with Two Types of Faults Under the Uncertainty of the Operating Environment
Abstract
The familiar assumption of many software reliability growth models is that the software's faults are independent and can be fixed perfectly. However, this is not always valid due to several aspects such as developers' efficiency, software complexity, and the testing environment. The testing environment greatly affects the software's reliability after it is deployed in the actual field environment. Therefore, improvement of the software's reliability is needed while the software operates in the field environment. In this chapter, under the uncertainty of the operating environment, we present a model in which two different testing coverage functions and a time-dependent fault content function are incorporated. The central idea of the proposed model is that the actual testing time differs from the theoretical testing time. The existing models are compared with the proposed model using three different data sets on eight goodness-of-fit criteria. It is shown that the proposed model fits the data sets better than the existing models.
Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar, P. K. Kapur
A Statistical Approach to Estimate Severe Accident Vehicle Collision Probability Inside a Multi-lane Road Tunnel with Unidirectional Traffic Flow
Abstract
Dynamic risk estimation of a tunnel is an important aspect of tunnel safety. Severe accident collision probability is an important parameter in the dynamic tunnel risk assessment process, as it is needed to build a probabilistic dynamic risk model of a tunnel. This helps in continuous monitoring of the risk of the tunnel from severe accidents and can enable tunnel management to take appropriate control measures to restrict the risk when it crosses a certain threshold. This paper tries to estimate the severe accident collision probability when a vehicle enters a lane of a tunnel at a certain speed. The three-lane Bhatan tunnel on the Mumbai-Pune Expressway in India was considered for the analysis and modeling of the traffic flow. A traffic simulation of one year is performed with 13 million vehicles to determine the number of overtaking events that can occur at a given vehicle speed. An exponential regression model was used to predict the number of overtaking events. A suitable Weibull distribution was used to predict the severe accident collision probability from the number of overtaking events.
Jajati K. Jena, Ajit K. Verma, Uday Kumar, Srividya Ajit
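The sketch below strings together the two statistical ingredients named in the abstract: an exponential regression of overtaking counts on entry speed, and a Weibull CDF mapping overtaking exposure to a collision probability. The data, Weibull parameters, and the exact mapping between the two steps are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal sketch: exponential regression for overtaking counts, then a Weibull
# CDF as an assumed exposure-to-probability mapping. All numbers illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

speed = np.array([60, 70, 80, 90, 100, 110], dtype=float)   # km/h at tunnel entry
overtakes = np.array([3, 5, 9, 16, 27, 45], dtype=float)    # simulated overtakings per trip

def exp_model(v, a, b):
    return a * np.exp(b * v)

(a, b), _ = curve_fit(exp_model, speed, overtakes, p0=[0.1, 0.05])

shape, scale = 1.8, 60.0                    # assumed Weibull parameters
v_query = 95.0
n_overtakes = exp_model(v_query, a, b)
p_severe = weibull_min.cdf(n_overtakes, shape, scale=scale)
print(f"predicted overtakings at {v_query:.0f} km/h: {n_overtakes:.1f}")
print(f"assumed severe-collision probability: {p_severe:.3f}")
```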
Metadata
Title
Reliability Engineering for Industrial Processes
Edited by
P. K. Kapur
Hoang Pham
Gurinder Singh
Vivek Kumar
Copyright Year
2024
Electronic ISBN
978-3-031-55048-5
Print ISBN
978-3-031-55047-8
DOI
https://doi.org/10.1007/978-3-031-55048-5
