
2024 | Book

Service-Oriented Computing – ICSOC 2023 Workshops

AI-PA, ASOCA, SAPD, SQS, SSCOPE, WESOACS and Satellite Events, Rome, Italy, November 28-December 1, 2023, Revised Selected Papers

Edited by: Flavia Monti, Pierluigi Plebani, Naouel Moha, Hye-young Paik, Johanna Barzen, Gowri Ramachandran, Devis Bianchini, Damian A. Tamburri, Massimo Mecella

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes revised selected papers from the scientific satellite events held in conjunction with the 21st International Conference on Service-Oriented Computing, ICSOC 2023. The conference was held in Rome, Italy, during November 28 – December 1, 2023.

This year, these satellite events were organized around four main tracks: a workshop track, a demonstration track, a Ph.D. symposium, and an invited tutorial track.

The ICSOC 2023 workshop track consisted of the following six workshops covering a wide range of topics that fall into the general area of service computing:

Third International Workshop on AI-Enabled Process Automation (AI-PA 2023)

7th Workshop on Adaptive Service-Oriented and Cloud Applications (ASOCA 2023)

First International Workshop on Secure, Accountable and Privacy-Preserving Data-Driven Service-Oriented Computing (SAPD 2023)

First Services and Quantum Software Workshop (SQS 2023)

First International Workshop on Sustainable Service-Oriented Computing: Addressing Environmental, Social, and Economic Dimensions (SSCOPE 2023)

19th International Workshop on Engineering Service-Oriented Applications and Cloud Services (WESOACS 2023)

Table of Contents

Frontmatter

AI-PA: AI-enabled Process Automation Introduction

Frontmatter
Predictive Auto-scaling: LSTM-Based Multi-step Cloud Workload Prediction

Auto-scaling, also known as elasticity, provides the capacity to efficiently allocate computing resources on demand, rendering it beneficial for a wide array of applications, particularly web-based ones. However, the dynamic and unpredictable nature of workloads in web applications poses considerable challenges in designing effective strategies for cloud auto-scaling. Existing research primarily relies on single-step prediction methods or focuses solely on forecasting request arrival rates, thus overlooking the intricate nature of workload characteristics and system dynamics, which significantly affect resource demands in the cloud. In this study, we propose an innovative approach to address this limitation by introducing a multi-step workload prediction method using the Long Short-Term Memory (LSTM) model. By considering workload attributes over a specific time frame, our approach enables accurate predictions of future workloads over designated time intervals through multi-step forecasting. By utilising two real-world web workload datasets, our experiments aim to underscore the significance of using real-world data in delivering a comparative performance analysis between single-step and multi-step predictions. The results demonstrate that our proposed multi-step prediction model outperforms single-step predictions and other baseline models.

Basem Suleiman, Muhammad Johan Alibasa, Ya-Yuan Chang, Ali Anaissi
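The abstract does not include the model itself. As a hedged illustration of the multi-step framing it describes, the sketch below (using a made-up toy trace, not the paper's datasets) turns a workload series into supervised samples where each input window predicts the next several steps, which is the shape an LSTM would be trained on:

```python
import numpy as np

def make_multistep_dataset(series, lookback, horizon):
    """Frame a workload series as supervised samples:
    X[i] = the `lookback` past values, y[i] = the next `horizon` values."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.asarray(X), np.asarray(y)

# Toy workload trace (requests per minute), invented for illustration.
trace = np.array([10, 12, 15, 20, 18, 22, 30, 28, 25, 27], dtype=float)
X, y = make_multistep_dataset(trace, lookback=4, horizon=3)
print(X.shape, y.shape)  # (4, 4) (4, 3)
```

Single-step prediction corresponds to `horizon=1`; the paper's comparison between single-step and multi-step forecasting amounts to varying this parameter.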
Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications

The advent of Large Language Models (LLMs) heralds a pivotal shift in online user interactions with information. Traditional Information Retrieval (IR) systems primarily relied on query-document matching, whereas LLMs excel in comprehending and generating human-like text, thereby enriching the IR experience significantly. While LLMs are often associated with chatbot functionalities, this paper extends the discussion to their explicit application in information retrieval. We explore methodologies to optimize the retrieval process, select optimal models, and effectively scale and orchestrate LLMs, aiming for cost-efficiency and enhanced result accuracy. A notable challenge, model hallucination, where the model yields inaccurate or misinterpreted data, is addressed alongside other model-specific hurdles. Our discourse extends to crucial considerations including user privacy, data optimization, and the necessity for system clarity and interpretability. Through a comprehensive examination, we unveil not only innovative strategies for integrating LLMs with IR systems, but also the consequential considerations that underline the need for a balanced approach aligned with user-centric principles.

Samira Ghodratnama, Mehrdad Zakershahrak
Towards Improving Insurance Processes: A Time Series Analysis of Psychosocial Recovery After Workplace Injury Across Legislative Environments

Enhancing insurance processes when workers grapple with physical injuries necessitates a deep dive into the cognitive science facets to optimize recovery. Time series analysis emerges as an instrumental tool within this framework, offering profound insights and data-driven analysis, ultimately paving the way for a more refined and efficient insurance process. This paper uses time series analysis, a machine learning approach, to enhance insurance business processes by understanding the cognitive aspects of post-injury workers. We delve into the intertwined roles of legislative environments, administrative processes, and their impacts on recovery outcomes, gauged through psychometric measures. By distinguishing between “state” (changeable) and “trait” (constant) psychological variables, we ascertain how legislative measures influence these variables, especially under adverse impacts leading to discernible patterns in claims. Our study compares time series models across various legislative environments in Australia, examining the claims managed by multiple insurers to discern any variability due to legislation. This analysis is enriched by the data from the Navigator Support Program, which screens claimants through psychometric tests, providing insights into the effects of legislation and insurer behaviour on recovery from workplace injuries. The ultimate aim is to harness these insights to improve insurance business processes.

John E. McMahon, Rasool Roozegar, Ashley Craig, Ian Cameron
Uncovering LLMs for Service-Composition: Challenges and Opportunities

Large Language Models (LLMs) have gained significant attention for using natural language to generate program code without direct programming efforts, e.g., by using ChatGPT in a dialog-based interaction. In the field of Service-Oriented Computing, the potential of using LLMs’ capabilities is yet to be explored. LLMs may solve significant service composition challenges like automated service discovery or automated service composition by filling the gap between the availability of suitable services, e.g., in a registry, and their actual composition without explicit semantic annotations or modeling. We analyze the classical way of service composition and how LLMs are recently employed in code generation and service composition. As a result, we show that classical solution approaches usually require extensive domain modeling and computationally expensive planning processes, resulting in a long time needed to create the composition. To ground the research on LLMs for service compositions, we identify six representative scenarios of service compositions from the literature and perform experiments with ChatGPT and GPT-4 as a notable, representative application of LLMs. Finally, we frame open research challenges for service composition in the context of LLMs. With this position paper, we emphasize the importance of researching LLMs as the next step of automated service composition.

Robin D. Pesl, Miles Stötzner, Ilche Georgievski, Marco Aiello
Transformative Predictive Modelling in the Business of Health: Harnessing Decision Trees for Strategic Insights and Enhanced Operational Efficiency

Predictive modelling has emerged as an indispensable tool in the dynamic business realm, shaping strategies and driving impactful decisions. This study provides a framework to transform raw data (customer behaviours, employee responses) into actionable insights, emphasizing the importance of data-driven decision-making. This research aims to harness Decision Tree (DT) Analysis to develop a robust predictive modelling system suitable for regular application in business decision-making processes. Data from two customer types (350 versus 267) were analyzed to predict process outcomes categorized as Successful (S), Needing Further Intervention (NFI), or Non-Compliant (NC) with standard processes. Various predictive models, including Classification and Regression Tree (CRT), Chi-squared Automatic Interaction Detection (CHAID), Exhaustive Chi-Squared Automatic Interaction Detection (Ex-CHAID), and Quick Unbiased Efficient Statistical Tree (QUEST), were employed, with systematic tweaks in their hierarchical structures. Through this method, 324 DTs were generated, adjusting structural parameters. Upon consolidating both datasets, a CRT model yielded a correct classification rate of 71.6%. Specific indicators and interview data pinpointed the Ex-CHAID model as the most predictive for the first dataset at 70.1% accuracy, while the CRT model for the second dataset was most accurate at 74.5%. When diving deeper into specific indicators, the first dataset best aligned with a CHAID model, predicting 74.3% of outcomes, whereas the second dataset favoured a CRT model with a 77.7% prediction accuracy. A CRT model with specific structural parameters achieved the pinnacle of performance, registering an 88.6% accuracy. However, its intricate 15-leaf, 6-level structure suggests potential overfitting, and its complexity renders it less practical for routine business applications.
The ability to predict how consumers or clients might respond to a product or service after their first interaction can provide valuable feedback for product and program development teams. The unique outcome of this paper will result in service refinement, risk management, and improved operational efficiency.

John E. McMahon, Ashley Craig, Ian Cameron
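The study's models were presumably built with a statistics package, and none of its data is reproduced here. Purely as an illustration of the splitting criterion behind CRT/CART-style trees, the sketch below performs a minimal Gini-based split search on invented values and outcome labels (the threshold and labels are made up, not from the paper):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Exhaustively pick the threshold minimizing weighted Gini impurity,
    the node-splitting rule used by CART-style (CRT) decision trees."""
    best = (None, float('inf'))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: outcomes S / NFI split on a single numeric indicator.
values = [1, 2, 3, 8, 9, 10]
labels = ['S', 'S', 'S', 'NFI', 'NFI', 'NFI']
print(best_split(values, labels))  # (3, 0.0): a perfect split at threshold 3
```

A full tree repeats this search recursively on each resulting partition; CHAID-family models instead use chi-squared tests to choose splits.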
Breaking Boundaries: Can a Unified Hardware Abstraction Layer Simplify Transformer Deployments on Edge Devices?

The deployment of transformer models on edge devices like smartphones and tablets is pivotal for leveraging machine learning benefits in real-world scenarios. However, it brings forth challenges including hardware compatibility, memory efficiency, energy efficiency, and real-time performance. We introduce a versatile Hardware Abstraction Layer (HAL) to (1) bridge pre-trained transformer models with the target hardware for optimized deployment, and (2) incorporate intermediate representations (IR) as a crucial element. The IR facilitates seamless execution of models across diverse hardware backends, ensuring enhanced privacy, security, and functionality, especially in regions with limited internet connectivity. Our HAL, endowed with configurable parameters, dynamic model optimizations, and a modular design, caters to varied performance objectives, offering a unified layer that eases the deployment of IR while focusing on user-specified performance priorities. The main contribution of this work is the introduction of IR within the HAL framework, pushing the frontier in edge-device machine learning deployments to focus on latency, energy efficiency, or memory usage. Our results exhibit that the proposed HAL, with its IR component, significantly trims down deployment time and boosts inference efficiency, without compromising model accuracy on iPhone devices.

Mehrdad Zakershahrak, Samira Ghodratnama
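The paper's HAL is not reproduced here; the sketch below only illustrates the general idea of a hardware abstraction layer selecting a backend for a model's intermediate representation according to a user-specified performance priority. All backend names and the latency/energy numbers are hypothetical:

```python
class Backend:
    """One hardware target; a real implementation would wrap a CPU, GPU,
    or NPU runtime. The numbers here are invented profiling results."""
    def __init__(self, name, latency_ms, energy_mj):
        self.name, self.latency_ms, self.energy_mj = name, latency_ms, energy_mj

class HAL:
    """Minimal hardware abstraction layer: route a model IR to the backend
    that best matches the user's performance priority."""
    def __init__(self, backends):
        self.backends = backends

    def deploy(self, ir, priority="latency"):
        key = {"latency": lambda b: b.latency_ms,
               "energy": lambda b: b.energy_mj}[priority]
        backend = min(self.backends, key=key)
        return f"{ir} -> {backend.name}"

hal = HAL([Backend("cpu", 40, 5), Backend("gpu", 8, 20), Backend("npu", 12, 3)])
print(hal.deploy("transformer-ir", priority="latency"))  # transformer-ir -> gpu
print(hal.deploy("transformer-ir", priority="energy"))   # transformer-ir -> npu
```

The point of the IR in the paper is exactly this indirection: the same compiled model artifact can target whichever backend the HAL selects.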
Improving Deep Learning Transparency: Leveraging the Power of LIME Heatmap

Deep learning techniques have recently demonstrated remarkable precision in executing tasks, particularly in image classification. However, their intricate structures make them mysterious even to knowledgeable users, obscuring the rationale behind their decision-making procedures. Therefore, interpretability methods have emerged to introduce clarity into these techniques. Among these approaches is Local Interpretable Model-Agnostic Explanations (LIME), which stands out as a means to enhance comprehensibility. We believe that interpretable deep learning methods have unrealised potential in a variety of application domains, an aspect that has been largely neglected in the existing literature. This research aims to demonstrate the utility of features like the LIME heatmap in advancing classification accuracy within a designated decision-support framework. Real-world contexts take centre stage as we illustrate how the heatmap determines the image segments exerting the greatest influence on class scoring. This critical insight empowers users to formulate sensitivity analyses and discover how manipulation of the identified feature could potentially mislead the deep learning classifier. As a second significant contribution, we examine the LIME heatmap data of GoogLeNet and SqueezeNet, two prevalent network models, in an effort to improve the comprehension of these models. Furthermore, we compare LIME with another recognised interpretive method known as Gradient-weighted Class Activation Mapping (Grad-CAM), evaluating their performance comprehensively. Experiments and evaluations conducted on real-world datasets containing images of fish readily demonstrate the superiority of the method, thereby validating our hypothesis.

Helia Farhood, Mohammad Najafi, Morteza Saberi
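LIME's real implementation segments images into superpixels and weights perturbed samples by proximity; the stripped-down sketch below keeps only the core mechanism the heatmap rests on: randomly switch segments on and off, query a (here, fake) black-box model, and fit a linear surrogate whose coefficients serve as the heatmap weights. The black-box function and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(mask):
    """Stand-in classifier score: segment 2 secretly drives the class."""
    return 3.0 * mask[2] + 0.2 * mask[0] + rng.normal(0, 0.01)

n_segments, n_samples = 5, 200
# Randomly switch image segments on/off and record the model's score.
masks = rng.integers(0, 2, size=(n_samples, n_segments)).astype(float)
scores = np.array([black_box(m) for m in masks])

# Fit a local linear surrogate; its coefficients are the heatmap weights.
coef, *_ = np.linalg.lstsq(masks, scores, rcond=None)
print(int(np.argmax(coef)))  # 2: the segment with the greatest influence
```

The sensitivity analysis the abstract mentions follows directly: perturbing the top-weighted segment is the most effective way to change (or mislead) the classifier's score.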

ASOCA: Adaptive Service-oriented and Cloud Applications Introduction

Frontmatter
Non-expert Level Analysis of Self-adaptive Systems

Self-adaptivity is mainly used to address uncertainties and unpredicted events, as well as to automate administration tasks. It allows systems to change themselves while executing in order to address expected or unexpected changes and to adapt as much as possible to the current execution context. Self-adaptivity is particularly meaningful for dynamic application domains such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), service-oriented architecture (SOA) based solutions, cloud computing, and robotics, among many others. There are various available solutions in these domains that exploit self-adaptivity. The question is: how can we analyze them to understand how self-adaptivity is implemented and exploited, in order to use, re-use, and adapt existing solutions to new or other systems? In this paper, we propose a first step in this direction by analyzing available self-adaptive systems (and especially their self-adaptive mechanisms) in various application domains using the Understand tool, which is widely used for software development, analysis, and quality assessment.

Claudia Raibulet, Xiaojun Ling

SAPD: Secure, Accountable and Privacy-Preserving Data-Driven Service-Oriented Computing Introduction

Frontmatter
Federated Data Products: A Confluence of Data Mesh and Gaia-X for Data Sharing

The goal of this paper is to investigate to what extent the principles defined by the Data Mesh paradigm can find valuable support in Gaia-X. In particular, an alignment between the Data Mesh self-serve platform and the Gaia-X federated services has been analyzed to understand whether the concept of the data product, which is central in Data Mesh, can evolve into a federated data product, serving as the architectural element that supports data sharing in a federated setting.

Farouk Jeffar, Pierluigi Plebani
XPS++: A Publish/Subscribe System with Built-In Security and Privacy by Design

This paper presents a content-based publish/subscribe (pub/sub) middleware system designed to securely broker and filter XML events over insecure computing platforms without the complexities of traditional cryptographic approaches (e.g., homomorphic encryption). We adopt a combination of a microservices-based pub/sub service implementation, XML predicates/filters, and a metadata hashing scheme to simultaneously achieve security and privacy objectives by design. To illustrate the practicality of the proposed system, we discuss the design and implementation details with a system demonstration of a prototype, dubbed XPS++. We then show preliminary performance results.

Noor Ahmed
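XPS++'s XML predicate filtering is more elaborate than this, but the metadata-hashing idea can be sketched as equality matching over hashes: publishers and subscribers hash attribute names and values locally, so the broker can route events without ever seeing plaintext metadata. This is a simplification for illustration, not the paper's actual scheme:

```python
import hashlib

def h(value: str) -> str:
    """Hash a metadata attribute so the broker never sees the plaintext."""
    return hashlib.sha256(value.encode()).hexdigest()

def publish(event_meta: dict) -> dict:
    return {h(k): h(v) for k, v in event_meta.items()}

def subscribe(filter_meta: dict) -> dict:
    return {h(k): h(v) for k, v in filter_meta.items()}

def broker_match(hashed_event: dict, hashed_filter: dict) -> bool:
    """Equality matching over hashes only: the untrusted broker can route
    events without learning attribute names or values."""
    return all(hashed_event.get(k) == v for k, v in hashed_filter.items())

event = publish({"topic": "orders", "region": "eu"})
print(broker_match(event, subscribe({"topic": "orders"})))   # True
print(broker_match(event, subscribe({"topic": "billing"})))  # False
```

Hashing supports only equality predicates, which is exactly why range or structural XML predicates require the additional machinery the paper develops.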

SQS: Services and Quantum Software Introduction

Frontmatter
On Rounding Errors in the Simulation of Quantum Circuits

The realm of quantum computing is inherently tied to real numbers. However, quantum simulators nearly always rely on floating-point arithmetic and thus may introduce rounding errors in their calculations. In this work, we show how we can nevertheless trust the computations of simulators under certain conditions where we can rule out that floating-point errors disturb the obtained measurement results. We derive theoretical bounds for the errors of floating-point computations in quantum simulations and use these bounds to extend the implementation of an existing verification tool to show the soundness of the tool’s analysis for a number of well-established quantum algorithms.

Jonas Klamroth, Bernhard Beckert
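The paper derives formal error bounds; as a quick empirical illustration of why precision matters in simulation, the sketch below (not from the paper) applies the same norm-preserving rotation gate ten thousand times in single and double precision and measures how far the state's norm drifts from 1, which an exact computation would preserve:

```python
import numpy as np

def rotate(state, theta, dtype):
    """Apply a real 2x2 rotation gate at the given floating-point precision."""
    c, s = np.cos(theta, dtype=dtype), np.sin(theta, dtype=dtype)
    return np.array([[c, -s], [s, c]], dtype=dtype) @ state

theta, steps = 0.1, 10_000
drift = {}
for dtype in (np.float32, np.float64):
    state = np.array([1.0, 0.0], dtype=dtype)
    for _ in range(steps):
        state = rotate(state, theta, dtype)
    # An exact rotation keeps the norm at exactly 1; any deviation is
    # accumulated rounding error of the chosen precision.
    drift[dtype.__name__] = abs(float(state @ state) - 1.0)

print(drift)
```

The paper's contribution is to bound such drift analytically, so that a verification tool can certify when rounding cannot change the measurement outcomes.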
Linear Structure of Training Samples in Quantum Neural Network Applications

Quantum Neural Networks (QNNs) use sets of training samples supplied as quantum states to approximate unitary operators. Recent results show that the average quality, measured as the error of the approximation, depends on the number of available training samples and the degree of entanglement of these samples. Furthermore, the linear structure of the training samples plays a vital role in determining the average quality of the trained QNNs. However, these results evaluate the quality of QNNs independently of the classical pre- and post-processing steps that are required in real-world applications. How the linear structure of the training samples affects the quality of QNNs when the classical steps are considered is not fully understood. Therefore, in this work, we experimentally evaluate QNNs that approximate an operator that predicts the outputs of a function from the automotive engineering area. We find that the linear structure of the training samples also influences the quality of QNNs in this real-world use case.

Alexander Mandl, Johanna Barzen, Marvin Bechtold, Michael Keckeisen, Frank Leymann, Patrick K. S. Vaudrevange
Towards Higher Abstraction Levels in Quantum Computing

This work is a survey and a position paper arguing for higher abstraction in quantum computing (QC) programming frameworks and software development kits (SDKs). Ever since Peter Shor complained in 2003 about the limited increase in the number of QC algorithms [19], there has been an urgent need to bridge the gap between well-established classical physics and quantum physics so that approaches become more intuitive and, hopefully, more quantum algorithms can be discovered. In service-based hybrid QC frameworks, where algorithms need to be partitioned into quantum and classical tasks, we look at the methods available and the abstractions used. For this paper we have investigated the various levels of abstraction in Silq, Qrisp, OpenQL, Qiskit, Cirq, IonQ, and Ocean, which originate in the QC domain, as well as CUDA Quantum, rooted in the classical software domain. With the rise of Large Language Models (LLMs), we have also explored the capabilities of LLM-powered tools like GitHub Copilot, which currently represents the top level of abstraction.

Hermann Fürntratt, Paul Schnabl, Florian Krebs, Roland Unterberger, Herwig Zeiner
Hybrid Data Management Architecture for Present Quantum Computing

Quantum computers promise polynomial or exponential speed-up in solving certain problems compared to classical computers. However, in practical use, there are currently a number of fundamental technical challenges. One of them concerns the loading of data into quantum computers, since they cannot access common databases. In this vision paper, we develop a hybrid data management architecture in which databases can serve as data sources for quantum algorithms. To test the architecture, we perform experiments in which we assign data points stored in a database to clusters. For cluster assignment, a quantum algorithm processes this data by determining the distances between data points and cluster centroids.

Markus Zajac, Uta Störl
Quantum Block-Matching Algorithm Using Dissimilarity Measure

Finding groups of similar image blocks within an ample search area is often necessary in different applications, such as video compression, image clustering, vector quantization, and nonlocal noise reduction. A block-matching algorithm that uses a dissimilarity measure can be applied in such scenarios. In this work, a measure that utilizes the quantum Fourier transform through the Draper adder, or the Swap test based on the Euclidean distance, is proposed. Experiments on small representative cases with ideal and depolarizing noise simulations are implemented. In the case of the Swap test, the IBM, OQC and IonQ quantum devices have been used through the qBraid services, demonstrating potential for future near-term applications.

M. Martínez-Felipe, J. Montiel-Pérez, Victor Onofre, A. Maldonado-Romo, Ricky Young
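Setting the quantum circuits aside, the dissimilarity measure underlying the method is the Euclidean distance between blocks. A classical reference implementation of the block matching it supports, on a toy 6x6 image rather than the paper's benchmarks, can look like this:

```python
import numpy as np

def block_match(image, template):
    """Classical analogue of the dissimilarity measure: slide `template`
    over `image` and return the offset with the smallest Euclidean distance."""
    th, tw = template.shape
    best, best_pos = np.inf, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            block = image[i:i + th, j:j + tw]
            d = np.linalg.norm(block - template)
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos, float(best)

image = np.zeros((6, 6))
image[2:4, 3:5] = [[1, 2], [3, 4]]
template = np.array([[1.0, 2.0], [3.0, 4.0]])
print(block_match(image, template))  # ((2, 3), 0.0)
```

The quantum variants estimate this same distance, via the Draper adder in the Fourier basis or via the Swap test's state-overlap measurement, instead of computing it exactly.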
Some Initial Guidelines for Building Reusable Quantum Oracles

The evolution of quantum hardware is highlighting the need for advances in quantum software engineering that help developers create quantum software with good quality attributes. Specifically, reusability has been traditionally considered an important quality attribute. Increasing the reusability of quantum software will help developers create more complex solutions. This work focuses on the reusability of oracles, a well-known pattern of quantum algorithms that can be used to perform functions used as input by other algorithms. In this work, we present several guidelines for making reusable quantum oracles. These guidelines include three different levels for oracle reuse: the reasoning behind the oracle algorithm, the function which creates the oracle, and the oracle itself. To demonstrate these guidelines, two different implementations of a range of integers oracle have been built by reusing simpler oracles. The quality of these implementations is evaluated in terms of functionality and quantum circuit depth. Then, we provide an example of documentation following the proposed guidelines for both implementations to foster reuse of the provided oracles. This work aims to be a first point of discussion towards quantum software reusability.

Javier Sanchez-Rivero, Daniel Talaván, Jose Garcia-Alonso, Antonio Ruiz-Cortés, Juan Manuel Murillo
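The paper works with real quantum circuits; purely to illustrate the reuse idea at the matrix level (an assumption-laden sketch, not the authors' implementation), a range-of-integers phase oracle can be composed from two simpler less-than oracles, because the phase flips below the lower bound cancel out:

```python
import numpy as np

def less_than_oracle(n_qubits, k):
    """Diagonal phase oracle flipping the sign of basis states |x> with x < k."""
    signs = np.array([-1.0 if x < k else 1.0 for x in range(2 ** n_qubits)])
    return np.diag(signs)

def range_oracle(n_qubits, lo, hi):
    """Reuse two simpler oracles: composing O_{<hi} with O_{<lo} leaves a
    phase flip exactly on lo <= x < hi (the flips below lo cancel)."""
    return less_than_oracle(n_qubits, hi) @ less_than_oracle(n_qubits, lo)

oracle = range_oracle(3, 2, 5)
marked = [x for x in range(8) if oracle[x, x] < 0]
print(marked)  # [2, 3, 4]
```

This mirrors the paper's three reuse levels: the reasoning (flip cancellation), the function that builds the oracle, and the resulting oracle object itself.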

SSCOPE: Sustainable Service-Oriented Computing: Addressing Environmental, Social, and Economic Dimensions Introduction

Frontmatter
Carbon-Awareness in CI/CD

The climate crisis has become a major issue for society, and the environmental impact of cloud computing is increasingly evident: data centers alone account for 2.7% of Europe’s energy consumption today. A considerable part of this load is accounted for by cloud-based services for automated software development, such as continuous integration and delivery (CI/CD) workflows. In this paper, we discuss opportunities and challenges for greening CI/CD services by better aligning their execution with the availability of low-carbon energy. We propose a system architecture for carbon-aware CI/CD services, which uses historical runtime information and, optionally, user-provided information. Our evaluation examines the potential effectiveness of different scheduling strategies using real carbon intensity data and 7,392 workflow executions of GitHub Actions, a popular CI/CD service. Results show that user-provided information on workflow deadlines can effectively improve carbon-aware scheduling.

Henrik Claßen, Jonas Thierfeldt, Julian Tochman-Szewc, Philipp Wiesner, Odej Kao
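The paper's scheduler is more involved (it uses historical runtimes and optional user input), but the core decision it makes can be sketched as choosing the lowest-carbon execution window before a user-provided deadline. The forecast numbers below are invented for illustration:

```python
def best_start(carbon_forecast, duration, deadline):
    """Pick the start hour (finishing by `deadline`) whose window of
    `duration` hours has the lowest total forecast carbon intensity."""
    candidates = range(0, deadline - duration + 1)
    return min(candidates,
               key=lambda s: sum(carbon_forecast[s:s + duration]))

# Hourly carbon intensity forecast (gCO2/kWh); the CI workflow takes
# 2 hours and must finish within 8 hours.
forecast = [300, 280, 120, 90, 110, 250, 320, 310]
print(best_start(forecast, duration=2, deadline=8))  # 3
```

This also shows why deadlines matter so much in the paper's results: without a deadline the scheduler cannot safely defer a workflow into the low-carbon trough.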

WESOACS: Workshop on Engineering Service-Oriented Applications and Cloud Services Introduction

Frontmatter
Smart Public Transport with Be-in/Be-out System Supported by iBeacon Devices

The paper introduces the concept of a “Be-in/Be-out” system, discussing the need for more efficient and user-friendly transportation systems in urban areas. Such a system relies on technology like RFID or Bluetooth to automatically detect when passengers board and exit public transport, providing a seamless and convenient experience for travellers. iBeacon technology is commonly used for location-based services and can be applied to enhance the passenger experience or optimize operations. The paper explains how iBeacon devices are used in the context of smart public transport and discusses the advantages of implementing such systems, including improved passenger convenience and better data collection.

Aneta Poniszewska-Marańda, Mateusz Kubiak, Lukasz Chomątek
Towards a Systematic Comparison Framework for Cloud Services Customer Agreements

The growing need to understand and compare elements in service agreements has generated strong interest in the industry. Although there are projects and tools for the automatic detection of information in contracts, automatic analysis is still a developing area of research. This becomes even more relevant with the rise of cloud service organizations, which highlights the need for tools for comparing contractual agreements. In this paper, we present a framework designed to automate contract analysis and comparison. In order to demonstrate the effectiveness of this approach, we created a prototype that uses language models to automatically detect obligations, rights, and parties involved in contracts. In addition, we applied an initial metric to determine the extent to which the customer benefits compared to the provider. The results of the evaluation support the effectiveness of the system by facilitating the understanding and reasoning of both parties regarding the terms of the agreement.

Elena Molino-Peña, José María García
Formalizing Microservices Patterns with Event-B: The Case of Service Registry

Microservices have emerged as an architectural style in which applications are composed of small and focused services. Several patterns have been proposed to guide the construction of microservices applications. However, they are usually stated in natural language, which may lead to ambiguity and erroneous application. This paper addresses these issues by advancing the formalization of microservices patterns using the Event-B method. An Event-B model for the Service Registry pattern is proposed, which is then leveraged for verification/validation purposes. The overall goal is to contribute to the comprehension of microservices patterns and the quality of microservices applications.

Sebastián Vergara, Laura González, Raúl Ruggia
Privacy Engineering in the Data Mesh: Towards a Decentralized Data Privacy Governance Framework

Privacy engineering, emphasizing data protection during the design, build, and maintenance of software systems, faces new challenges and opportunities in the emerging decentralized data architectures, namely data mesh. By decentralizing data product ownership across domains, data mesh offers a novel paradigm to rethink how privacy principles are incorporated and maintained in modern system architectures. This paper introduces a conceptual framework that integrates privacy engineering principles with the decentralized nature of data mesh. Our approach provides a holistic view, capturing essential dimensions from both domains. We explore the intersections of privacy engineering and data mesh dimensions and provide guidelines for the stakeholders of a data mesh initiative to embed better data privacy controls. Our framework aims to offer a blueprint to ensure robust privacy practices are inherent, not just additive, during the adoption of data mesh.

Nemania Borovits, Indika Kumara, Damian A. Tamburri, Willem-Jan Van Den Heuvel

Ph.D. Symposium

Frontmatter
Towards a Taxonomy and Software Architecture for Data Processing and Contextualization for the Internet of Things

Nowadays, the Internet of Things (IoT) and intelligent decision-making systems are growing exponentially, raising new needs to be addressed. Although we can currently find a large number of IoT applications that can process huge amounts of data in real time, it is difficult to find solutions that integrate data from different application domains for further contextualization and personalization of the offered services. To address this gap, we propose a taxonomy and a context-aware software architecture. The taxonomy will allow the description of data from different domains according to current needs and their use for further contextualization of smart applications. Through the software architecture, thanks to the use of the taxonomy, it will be possible to easily integrate and correlate data from different application domains, processing large amounts of data in real time and enabling the development of smarter decision-making systems.

Adrian Bazan-Muñoz, Guadalupe Ortiz, Alfonso Garcia-de-Prado
Advanced Serverless Edge Computing

Serverless computing is becoming an attractive means to implement applications on top of edge infrastructures. Developers break applications into small components (functions), and this modularity allows one to cope with the limited resources of edge nodes and meet the stringent response times typical of edge applications. Different frameworks already support serverless edge computing, that is, the management and operation of serverless applications on top of edge infrastructures, but they usually cope with the different problems in isolation: for example, function placement, dependency management, cold starts, data management, and resource allocation. In contrast, we claim that these aspects must be dealt with all together. This work borrows from NEPTUNE and aims to fill the gap. We plan to complement NEPTUNE with dependency-aware function placement and resource allocation, to tackle image instantiation and cold start mitigation, and to address data management. The first results on the use of function dependencies to ameliorate resource allocation indicate significant improvements with respect to the state of the art.

Inacio Gaspar Ticongolo, Luciano Baresi, Giovanni Quattrocchi
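NEPTUNE's placement and allocation algorithms are considerably more sophisticated; the sketch below only illustrates what "dependency-aware function placement" means in the abstract: greedily co-locate a caller with the node already hosting its callee so invocations stay node-local, subject to per-node capacity. All function and node names are hypothetical:

```python
def place(functions, calls, nodes):
    """Greedy dependency-aware placement: prefer the node already hosting
    one of this function's callees, subject to per-node capacity."""
    placement, load = {}, {n: 0 for n in nodes}
    for fn in functions:
        preferred = [placement[callee] for caller, callee in calls
                     if caller == fn and callee in placement]
        for node in preferred + list(nodes):
            if load[node] < nodes[node]:
                placement[fn] = node
                load[node] += 1
                break
    return placement

# Callees listed before their callers so dependencies are placed first.
functions = ["auth", "frontend", "billing", "orders"]
calls = [("frontend", "auth"), ("orders", "billing")]  # (caller, callee)
nodes = {"edge-1": 2, "edge-2": 2}  # capacity in function slots per node
placement = place(functions, calls, nodes)
print(placement)
```

Here `frontend` lands next to `auth` and `orders` next to `billing`, so both call chains avoid a cross-node hop, which is the latency benefit the thesis aims to exploit.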

Demos and Resources Introduction

Frontmatter
Immersive 3D Simulator for Drone-as-a-Service

We propose a 3D simulator tailored for the Drone-as-a-Service framework. The simulator enables employing dynamic algorithms for addressing realistic delivery scenarios. We present the simulator’s architectural design and its use of an energy consumption model for drone deliveries. We introduce two primary operational modes within the simulator: the edit mode and the runtime mode. Beyond its simulation capabilities, our simulator serves as a valuable data collection resource, facilitating the creation of datasets through simulated scenarios. Our simulator empowers researchers by providing an intuitive platform to visualize and interact with delivery environments. Moreover, it enables rigorous algorithm testing in a safe simulation setting, thus obviating the need for real-world drone deployments. Demo: https://youtu.be/HOLfo1JiFJ0 .

Jiamin Lin, Balsam Alkouz, Athman Bouguettaya, Amani Abusafia
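The simulator's energy consumption model is not given in the abstract. Assuming a simple linear dependence on distance and payload, with coefficients that are entirely made up for illustration, a drone-delivery feasibility check might look like:

```python
def delivery_energy(distance_km, payload_kg, base_wh_per_km=15.0,
                    payload_wh_per_km_kg=6.0):
    """Toy linear energy model: consumption grows with distance and payload.
    The simulator's real model is not published; coefficients are invented."""
    return distance_km * (base_wh_per_km + payload_wh_per_km_kg * payload_kg)

def feasible(distance_km, payload_kg, battery_wh):
    """A delivery is feasible only if the round trip fits in the battery."""
    return 2 * delivery_energy(distance_km, payload_kg) <= battery_wh

print(delivery_energy(4.0, 2.0))             # 108.0 (Wh, one way)
print(feasible(4.0, 2.0, battery_wh=200.0))  # False
```

Checks like this are what a Drone-as-a-Service composition algorithm would evaluate inside the simulator before committing a drone to a delivery request.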
SLA-Wizard - Automated Configuration of RESTful API Gateways Based on SLAs

In the digital age, the API Economy, fueled by microservice architectures, is revolutionizing software development. Crucial to this transition is the OpenAPI Specification (OAS), which standardizes the description of an API’s functional elements and has been complemented with extensions like SLA4OAI to define limitations for API users, such as quotas or rates, in a standard way. Building on this, the paper presents SLA-Wizard, a tool designed to automate API gateway configuration; it supports four widely used proxies that serve as API gateways in industry (Envoy, Nginx, HAProxy, and Traefik). This paper presents the tool and highlights its effectiveness in managing API proxy configuration and how it paves the way for enhancing their capabilities and systematic benchmarking. Tool demonstration video available at: http://tiny.cc/sla-wizard .

Ignacio Peluaga Lozada, Pablo Fernandez, José María García
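To illustrate the kind of translation such a tool performs, the sketch below renders a simple rate limit (requests per period) as an Nginx `limit_req_zone`/`limit_req` pair. This is a hedged sketch of the general idea, not SLA-Wizard's actual output or the SLA4OAI schema; the function name and burst value are assumptions.

```python
def nginx_rate_limit(plan_name, requests, period_seconds):
    """Render an Nginx limit_req_zone/limit_req pair for a simple
    rate limit. Nginx accepts rates as r/s or r/m; sub-1/s rates
    are expressed per minute. Burst setting is illustrative."""
    rate = requests / period_seconds
    if rate >= 1:
        rate_str = f"{rate:g}r/s"
    else:
        rate_str = f"{requests * 60 // period_seconds}r/m"
    return (
        f"limit_req_zone $binary_remote_addr "
        f"zone={plan_name}:10m rate={rate_str};\n"
        f"limit_req zone={plan_name} burst=5 nodelay;"
    )
```

Generating such fragments per SLA plan, for each of the four supported proxies, is what makes gateway configuration automatable from a machine-readable SLA.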
The IDL Tool Suite: Inter-parameter Dependency Management in Web APIs

Web APIs contain inter-parameter dependencies that restrict the way in which input parameters can be combined to form valid calls to the service. Inter-parameter dependencies are extremely common and pervasive: they appear in 4 out of every 5 APIs across all application domains and types of operations. In this demonstration paper, we present the IDL tool suite, a comprehensive collection of tools designed to facilitate dependency management in web APIs. The IDL tool suite includes a specification language for inter-parameter dependencies (IDL), an OAS extension (IDL4OAS), a web editor for IDL specifications, an analysis engine (IDLReasoner), a web API for the analysis of IDL, and a website with detailed information about the tool suite and a playground. In addition to these tools, we present a catalog of applications where the IDL tool suite has already proven useful, including automated testing, code generation, and dependency-aware API gateways. We trust that the IDL tool suite will enable promising new research and applications in the area of web API management. The demo video of the IDL tool suite is available at https://www.youtube.com/watch?v=Hy5HYGK8Yn4 .

Saman Barakat, Alberto Martin-Lopez, Carlos Müller, Sergio Segura
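To give a feel for what inter-parameter dependency checking involves, the sketch below validates a request against a few common dependency kinds (one parameter requiring another, at-least-one-of, exactly-one-of). The encoding and names are illustrative assumptions, not the IDL language or IDLReasoner's API.

```python
def check_dependencies(params, deps):
    """params: dict of provided request parameters.
    deps: list of (kind, a, b) where kind is one of:
      'requires' - if a is present, b must be present too
      'or'       - at least one of a, b must be present
      'onlyone'  - exactly one of a, b must be present
    Returns the list of violated dependencies (empty if valid)."""
    violations = []
    for kind, a, b in deps:
        has_a, has_b = a in params, b in params
        ok = {
            "requires": (not has_a) or has_b,
            "or": has_a or has_b,
            "onlyone": has_a != has_b,
        }[kind]
        if not ok:
            violations.append((kind, a, b))
    return violations
```

A dependency-aware gateway can run a check like this before forwarding a call, rejecting invalid parameter combinations at the edge instead of at the service.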
Smelling Homemade Crypto Code in Microservices, with KubeHound

Microservices are pervading enterprise IT, and securing microservices has hence become crucial. KubeHound is an open-source tool devised for this purpose: it detects instances of so-called security smells in microservice applications deployed with Kubernetes. KubeHound features plugin-based extensibility, meaning that its smell detection capabilities can be extended by developing plugins implementing additional detection techniques. In this demo paper, we illustrate how to extend KubeHound with plugins that detect two different instances of the own crypto code security smell, whose detection KubeHound did not previously feature. We also show the practical use of the newly added plugins by applying them to case studies, two of which are based on existing, third-party microservice applications.

Thomas Howard-Grubb, Jacopo Soldani, Giorgio Dell’Immagine, Francesca Arcelli Fontana, Antonio Brogi
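As a rough intuition for what detecting "own crypto code" means, the sketch below flags source files containing hints of hand-rolled ciphers instead of vetted crypto libraries. These regex heuristics are illustrative assumptions; KubeHound's actual plugins and analysis techniques differ.

```python
import re

# Patterns hinting at homemade crypto rather than vetted libraries.
# Purely illustrative heuristics; a real detector would be far more
# thorough (e.g., AST-based analysis, known-library allowlists).
SUSPICIOUS = [
    re.compile(r"\bdef\s+(encrypt|decrypt|xor_cipher)\w*\s*\("),
    re.compile(r"\^\s*key\["),                    # XOR with a key byte
    re.compile(r"chr\(\s*ord\([^)]*\)\s*[+^-]"),  # char-shift ciphers
]

def smells_like_own_crypto(source: str) -> bool:
    """Return True if the source contains hints of homemade crypto."""
    return any(p.search(source) for p in SUSPICIOUS)
```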

Tutorials

Frontmatter
What is Blockchain and How Can it Help My Business? (Extended Tutorial Summary)

The content of this tutorial is drawn from a recent textbook published by the authors. The book introduces blockchain from scratch, providing first an implementation-agnostic view of the mechanisms underpinning blockchain, such as immutable databases, consensus mechanisms, and smart contracts. It then presents the most prominent blockchain systems and platforms currently available, ranging from widely known public blockchains and cryptocurrencies like Bitcoin and Ethereum to platforms for building private blockchain networks, such as Hyperledger Fabric. Next, the book introduces a set of tools to support decision making on the suitability of blockchain for a given business scenario, and explains how business models can be used to analyze blockchain-based business scenarios. The book ends by illustrating how a blockchain system can be part of an innovative business application landscape.

Marco Comuzzi, Paul Grefen, Giovanni Meroni
Quantum Services: A Tutorial on the Technology and the Process

The emergence of quantum computing has introduced a new paradigm in the realm of computer science and software engineering, expanding the frontiers of computer applications designed for problem-solving. The transformation of quantum algorithms into services is a promising avenue to address this new paradigm, as it allows them to be integrated into conventional distributed applications. This tutorial provides an overview of the process of transforming quantum algorithms into quantum services. It explains how these quantum services can be effectively deployed, specifically using the Amazon Braket platform for quantum computing, and how they can be invoked through classical service endpoints. This tutorial not only presents the step-by-step methodology but also provides insight into best practices for successful implementation through a development process. It highlights the use of an extended version of the OpenAPI Specification and the automation capabilities offered by GitHub Actions, which play a key role in improving efficiency throughout the development and deployment phases.

Javier Romero-Álvarez, Jaime Alvarado-Valiente, Enrique Moguel, José Garcia-Alonso, Juan M. Murillo
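To show in miniature what "invoking a quantum algorithm through a classical service endpoint" amounts to, the sketch below wraps a one-qubit circuit (a Hadamard gate, simulated in pure Python) behind an ordinary function that returns measurement counts, much as a quantum service would return JSON. This is a pedagogical sketch, not the tutorial's Amazon Braket deployment or its OpenAPI extension.

```python
import math
import random

# Hadamard gate: maps |0> to (|0> + |1>) / sqrt(2).
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit statevector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def quantum_coin_service(shots=1000, seed=0):
    """A 'quantum service' endpoint in miniature: run |0> through a
    Hadamard, sample measurement outcomes, return JSON-able counts."""
    state = apply(H, [1.0, 0.0])
    p_one = abs(state[1]) ** 2        # probability of measuring 1 (= 0.5)
    rng = random.Random(seed)
    counts = {"0": 0, "1": 0}
    for _ in range(shots):
        counts["1" if rng.random() < p_one else "0"] += 1
    return counts
```

In a real deployment, the simulation step would be replaced by a job submitted to a quantum backend, while the classical request/response wrapper stays essentially the same.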
Satellite Computing: From Space to Your Screen

The space industry is undergoing a transformative shift driven by the rapid growth of LEO satellite mega-constellations. These constellations cater to growing demands in various sectors, from intelligent transportation and smart cities to maritime surveillance and disaster response. Satellite computing emerges as a pivotal foundation in this evolution. In our tutorial lecture, we embark on a journey into the realm of satellite computing, a burgeoning field with immense potential. We begin by addressing a fundamental question: What is satellite computing? We delve into core concepts, revealing how satellites can function as computational powerhouses orbiting our planet. As we progress, we explore diverse scenarios where satellite computing shines. We also confront the unique challenges it faces in space’s harsh environment, featuring deep vacuum conditions, radiation exposure, strong vibrations, and extreme temperature ranges. Our tutorial offers insights into our research in satellite computing. We share practical experiences from deploying the Tiansuan constellation, showcasing the real-world applications of these cutting-edge technologies. Our vision is to democratize satellite computing access. By transforming satellites into servers “with wings”, we envision a future where every corner of the globe reaps the benefits of satellite computing’s vast potential.

Qing Li, Daliang Xu
Services in Industry 4.0. Modeling and Composition for Agile Supply Chains

In recent years, there has been a growing interest in employing intelligent techniques for managing manufacturing processes in smart manufacturing. These processes often involve tens of resources distributed across several different companies that make up the supply chain. The status of these various resources evolves over time in terms of cost, quality, and the likelihood of failure, necessitating an adaptive process that is resilient to disruptions. The tutorial explores the modeling of Industry 4.0 systems as services and their composition. We discuss how these systems are designed, integrated, and orchestrated to create an interconnected manufacturing environment. The potential and limitations of automated reasoning techniques in enabling decision-making and process optimization in the modeled systems are then analyzed. Finally, a case study and a demonstration (Adaptive Industrial APIs - AIDA) will be presented to illustrate the practical application of intelligent techniques in a real manufacturing environment.

Francesco Leotta, Flavia Monti, Luciana Silo
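To make the idea of composing evolving supply-chain resources concrete, the sketch below greedily selects, for each task, the cheapest service offer that keeps the overall chain's success probability above a threshold. This is an illustrative toy, not the AIDA demonstrator or the tutorial's reasoning techniques; all names and data shapes are assumptions.

```python
def compose(tasks, offers, max_fail_prob=0.2):
    """tasks: ordered list of task names.
    offers: {task: [(provider, cost, fail_prob)]}.
    Greedily pick the cheapest offer per task whose failure probability
    keeps overall chain success at or above 1 - max_fail_prob.
    Returns (chain, success_prob) or None if infeasible."""
    chain, success = [], 1.0
    for task in tasks:
        for provider, cost, fail_p in sorted(offers[task],
                                             key=lambda o: o[1]):
            if success * (1 - fail_p) >= 1 - max_fail_prob:
                chain.append((task, provider, cost))
                success *= 1 - fail_p
                break
        else:
            return None  # no offer for this task keeps the chain viable
    return chain, round(success, 4)
```

Because cost, quality, and failure likelihood change over time, such a composition must be recomputed when a resource degrades, which is exactly where adaptive, disruption-resilient processes come in.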
Backmatter
Metadata
Title
Service-Oriented Computing – ICSOC 2023 Workshops
Edited by
Flavia Monti
Pierluigi Plebani
Naouel Moha
Hye-young Paik
Johanna Barzen
Gowri Ramachandran
Devis Bianchini
Damian A. Tamburri
Massimo Mecella
Copyright year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-97-0989-2
Print ISBN
978-981-97-0988-5
DOI
https://doi.org/10.1007/978-981-97-0989-2