
2015 | Book

Advanced Information Systems Engineering

27th International Conference, CAiSE 2015, Stockholm, Sweden, June 8-12, 2015, Proceedings


About this book

This book constitutes the proceedings of the 27th International Conference on Advanced Information Systems Engineering, CAiSE 2015, held in Stockholm, Sweden, in June 2015.

The 31 papers presented in this volume were carefully reviewed and selected from 236 submissions. They were organized in topical sections named: social and collaborative computing; business process modeling and languages; high volume and complex information management; requirements elicitation and management; enterprise data management; model conceptualisation and evolution; process mining, monitoring and predicting; intra- and inter-organizational process engineering; process compliance and alignment; enterprise IT integration and management; and service science and computing. The book also contains the abstracts of 3 keynote speeches and 5 tutorials, presented at the conference.

Table of contents

Frontmatter

Social and Collaborative Computing

Frontmatter
Game Aspect: An Approach to Separation of Concerns in Crowdsourced Data Management

In data-centric crowdsourcing, it is well known that the incentive structure connected to workers’ behavior greatly affects output data. This paper proposes to use a declarative language to deal with both the data computation and the incentive structure explicitly. In the language, computation is modeled as a set of Datalog-like rules, and the incentive structures for the crowd are modeled as games in which the actions taken by players (workers) affect how much payoff they will obtain. The language is unique in that it introduces the game aspect, which separates the code for the incentive structure from the other logic encoded in the program. This paper shows that the game aspect not only makes it easier to analyze and maintain the incentive structures, but also gives a principled model of the fusion of human and machine computations. In addition, the paper reports the results of experiments with a real set of data.

Shun Fukusumi, Atsuyuki Morishima, Hiroyuki Kitagawa
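
To make the separation of concerns concrete, here is a minimal Python sketch; the vote data, derivation rule, and payoff scheme are invented for illustration, and the paper's actual language is Datalog-like rather than Python.

```python
# Toy illustration of separating the "game aspect" (worker payoffs)
# from the data-computation rules.
votes = {("p1", "cat"): 3, ("p1", "dog"): 1}  # (item, label) -> vote count

def derive_labels(votes, threshold=2):
    # Data computation: a majority-style derivation rule over crowd votes.
    return {key for key, n in votes.items() if n >= threshold}

def payoff(worker_vote, derived):
    # Game aspect, kept separate from the rules above: a worker's payoff
    # depends on whether their vote agrees with the derived outcome.
    return 10 if worker_vote in derived else 0

derived = derive_labels(votes)
print(derived)                         # {('p1', 'cat')}
print(payoff(("p1", "cat"), derived))  # 10
```
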
Editing Anxiety in Corporate Wikis: From Private Drafting to Public Edits

Wikis promote work to be reviewed after publication, not before. This vision might not always fit organizations where a common employee concern is that sharing work-in-progress may negatively affect the assessments they receive. This might lead users to edit in distress, thus affecting task performance, and may minimize their participation in wikis. On this premise, this work advocates complementing wiki editing with in-line drafting. By “drafting” is meant the personal process of collecting references or gradually forging a new structure of ideas, till the result is good enough to be published. By “in-line”, we highlight that drafts will end up being article edits, and as such, their elaboration should take place within the wiki rather than being offloaded to third-party tools. This vision is realized by Wikinote, an extension for Google Chrome that leverages MediaWiki’s VisualEditor with drafting facilities. First evidence indicates that Wikinote reduces contribution judgement anxiety, and to a lesser extent, editing anxiety.

Cristóbal Arellano, Oscar Díaz, Maider Azanza
Run-Time and Task-Based Performance of Event Detection Techniques for Twitter

Twitter’s increasing popularity as a source of up-to-date news and information about current events has spawned a body of research on event detection techniques for social media data streams. Although all proposed approaches provide some evidence as to the quality of the detected events, none relate this task-based performance to their run-time performance in terms of processing speed or data throughput. In particular, neither a quantitative nor a comparative evaluation of these aspects has been performed to date. In this paper, we study the run-time and task-based performance of several state-of-the-art event detection techniques for Twitter. In order to reproducibly compare run-time performance, our approach is based on a general-purpose data stream management system, whereas task-based performance is automatically assessed based on a series of novel measures.

Andreas Weiler, Michael Grossniklaus, Marc H. Scholl
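
As a back-of-the-envelope illustration of the run-time dimension the paper measures (the detector, data, and harness below are stand-ins, not the paper's techniques or its stream management system):

```python
import time

def run_time_performance(detector, stream):
    # Run-time dimension: processing speed in tweets/sec, measured while
    # collecting the detector's output for task-based assessment.
    start = time.perf_counter()
    events = [detector(tweet) for tweet in stream]
    elapsed = time.perf_counter() - start
    detections = [e for e in events if e is not None]
    return len(stream) / elapsed, detections

# Trivial stand-in detector: flags tweets containing a burst keyword.
stream = ["earthquake now", "lunch time", "earthquake felt here"] * 10_000
rate, hits = run_time_performance(lambda t: t if "earthquake" in t else None, stream)
print(f"{rate:,.0f} tweets/sec, {len(hits)} detections")
```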

Business Process Modeling and Languages

Frontmatter
RALph: A Graphical Notation for Resource Assignments in Business Processes

The business process (BP) resource perspective deals with the management of human as well as non-human resources throughout the process lifecycle. Although it has received increasing attention recently, there exists no graphical notation for it up until now that is both expressive enough to cover well-known resource selection conditions and independent of any BP modelling language. In this paper, we introduce RALph, a graphical notation for the assignment of human resources to BP activities. We define its semantics by mapping this notation to a language that has been formally defined in description logics, which enables its automated analysis. Although we show how RALph can be seamlessly integrated with BPMN, it is noteworthy that the notation is independent of the BP modelling language. Altogether, RALph will foster the visual modelling of the resource perspective in BPs.

Cristina Cabanillas, David Knuplesch, Manuel Resinas, Manfred Reichert, Jan Mendling, Antonio Ruiz-Cortés
Revising the Vocabulary of Business Process Element Labels

A variety of methods devoted to the behavioral analysis of business process models has been suggested, which relieve the process modeler of the task of inspecting the correctness of the model. Even when correct behavior has been attested, the process model might still not be feasible, because the modeler or intended user is hampered in her comprehension (and thus hesitates, e.g., to reuse the process model). This paper addresses the improvement of the comprehension of process element labels by revising their vocabulary. Process element labels are critical for an appropriate association between the symbol instance and the real world. If users do not (fully) understand the process element labels, an improper notion of the real process might arise. To improve the comprehension of element labels, we present algorithms based on common guidelines for effectively recognizing written words. Results from an empirical study indicate a preference for such revised process element labels.

Agnes Koschmider, Meike Ullrich, Antje Heine, Andreas Oberweis
Declarative Process Modeling in BPMN

Traditional business process modeling notations, including the standard Business Process Model and Notation (BPMN), rely on an imperative paradigm wherein the process model captures all allowed activity flows. In other words, every flow that is not specified is implicitly disallowed. In the past decade, several researchers have exposed the limitations of this paradigm in the context of business processes with high variability. As an alternative, declarative process modeling notations have been proposed (e.g., Declare). These notations allow modelers to capture constraints on the allowed activity flows, meaning that all flows are allowed provided that they do not violate the specified constraints. Recently, it has been recognized that the boundary between imperative and declarative process modeling is not crisp. Instead, mixtures of declarative and imperative process modeling styles are sometimes preferable, leading to proposals for hybrid process modeling notations. These developments raise the question of whether completely new notations are needed to support hybrid process modeling. This paper answers this question negatively. The paper presents a conservative extension of BPMN for declarative process modeling, namely BPMN-D, and shows that Declare models can be transformed into readable BPMN-D models.

Giuseppe De Giacomo, Marlon Dumas, Fabrizio Maria Maggi, Marco Montali
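
For readers unfamiliar with the declarative style, here is a minimal Python sketch of one standard Declare template, response(a, b): every occurrence of a must eventually be followed by b, and any trace is allowed unless it violates a constraint. The trace encoding is invented for illustration; BPMN-D itself is a notation extension, not code.

```python
def satisfies_response(trace, a, b):
    # Declare 'response(a, b)': every a is eventually followed by b.
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

# Declarative reading: flows are allowed unless a constraint is violated.
print(satisfies_response(["a", "c", "b"], "a", "b"))  # True
print(satisfies_response(["a", "c"], "a", "b"))       # False
```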

High Volume and Complex Information Management

Frontmatter
The Requirements and Needs of Global Data Usage in Product Lifecycle Management

This study examines global data movement in large businesses from a product data management (PDM) and enterprise resource planning (ERP) point of view. The purpose of this study was to understand and map out how a large global business handles its data in a multiple-site structure and how this can be applied in practice. This was done through an empirical interview study of five different global businesses with design locations in multiple countries. Their master data management (MDM) solutions were inspected and analyzed to understand which solution would best benefit a large global architecture with many design locations. One working solution is a transactional hub, which negates the effects of multisite transfers and reduces lead times.

Anni Siren, Kari Smolander, Mikko Jokela
Probabilistic Keys for Data Quality Management

Probabilistic databases address well the requirements of an increasing number of modern applications that produce large volumes of uncertain data from a variety of sources. We propose probabilistic keys as a principled tool helping organizations balance the consistency and completeness targets for their data quality. For this purpose, algorithms are established for an agile schema- and data-driven acquisition of the marginal probability by which keys should hold in a given application domain, and for reasoning about these keys. The efficiency of our acquisition framework is demonstrated theoretically and experimentally.

Pieta Brown, Sebastian Link
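
To give intuition for the marginal probability by which a key holds, here is a Monte Carlo sketch over a toy tuple-independent probabilistic relation; the data and the sampling approach are illustrative only, and the paper's acquisition and reasoning algorithms are exact and schema-/data-driven.

```python
import random

# Toy tuple-independent relation: each row appears with its probability.
# The key {name} holds in a possible world iff no two rows share a name.
rows = [("alice", "sales", 0.9), ("alice", "hr", 0.5), ("bob", "it", 0.8)]

def key_probability(rows, trials=100_000):
    holds = 0
    for _ in range(trials):
        names = [name for (name, _, p) in rows if random.random() < p]
        holds += len(names) == len(set(names))
    return holds / trials

# Violation requires both "alice" rows: 0.9 * 0.5 = 0.45, so ~0.55.
print(round(key_probability(rows), 2))
```
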
A Clustering Approach for Protecting GIS Vector Data

The availability of Geographic Information System (GIS) data has increased in recent years, as well as the need to prevent its unauthorized use. One way of protecting this type of data is by embedding within it a digital watermark. In this paper, we build on our previous work on watermarking vector map data, to improve the robustness to (unwanted) modifications to the maps that may prevent the identification of the rightful owner of the data. More specifically, we address the simplification (removing some vertices from GIS vector data) and interpolation (adding new vertices to GIS data) modifications by exploiting a particular property of vector data called a bounding box. In addition, we experiment with bigger maps to establish the feasibility of the approach for larger maps.

Ahmed Abubahia, Mihaela Cocea
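
A minimal sketch of the bounding-box property the abstract refers to, assuming simple (x, y) vertex tuples; how the watermark is embedded relative to the box, and how that resists simplification and interpolation, is the paper's contribution and is not shown here.

```python
def bounding_box(vertices):
    # Axis-aligned bounding box of a vector feature (polyline/polygon).
    xs, ys = zip(*vertices)
    return (min(xs), min(ys)), (max(xs), max(ys))

print(bounding_box([(0, 0), (4, 1), (2, 5)]))  # ((0, 0), (4, 5))
```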

Requirements Elicitation and Management

Frontmatter
Need-to-Share and Non-diffusion Requirements Verification in Exchange Policies

Whether for Earth observation, risk management or even relations between companies, more and more interconnected organizations form decentralized systems in which the exchange, in terms of diffusion or non-diffusion of information between agents, can have critical consequences. In this paper, we present a formal framework to specify information exchange policies for such kinds of systems and two specific requirements, the need-to-share and the non-diffusion requirements, as well as properties strongly related to them. Building on these formal definitions, we show how to reconcile these sometimes antagonistic requirements in the same policy specification with information filtering operations. We also explain how we use state-of-the-art theorem provers to perform automatic analysis of these policies.

Rémi Delmas, Thomas Polacsek
Virtual Business Role-Play: Leveraging Familiar Environments to Prime Stakeholder Memory During Process Elicitation

Business process models have traditionally been an effective way of examining business practices to identify areas for improvement. While common information gathering approaches are generally efficacious, they can be quite time consuming and have the risk of developing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach for process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions as they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. Empirical data obtained in this study suggests that this approach may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also provide a reduction in the time required for stakeholders to model a process view.

Joel Harman, Ross Brown, Daniel Johnson, Stefanie Rinderle-Ma, Udo Kannengiesser
Handling Regulatory Goal Model Families as Software Product Lines

Goal models can capture the essence of legal and regulation statements and many of their relationships, enabling compliance analysis. However, current goal modeling approaches do not scale well when handling large regulations with many variable parts that depend on different aspects of regulated organizations. In this paper, we propose a tool-supported approach that integrates the Goal-oriented Requirement Language and feature modeling to handle regulatory goal model families. We show how they can be organized as a Software Product Line (SPL), ensuring the consistency of the SPL as a whole, and providing an adapted derivation process associated with a feature model configuration. The proposed approach is also evaluated on large generated SPLs, with results suggesting its capability to address scalability concerns.

Anthony Palmieri, Philippe Collet, Daniel Amyot

Enterprise Data Management

Frontmatter
Managing Data Warehouse Traceability: A Life-Cycle Driven Approach

Traceability has been used as a quality attribute for software for some decades now. Traceability can be defined as the ability to follow the life of software artifacts. Unfortunately, making a data warehouse (DW) traceable has not seen the same momentum as for software systems. Nowadays, DW systems evolve in a dynamic environment, where DW design becomes a complex task involving many resources and artifacts. In order to facilitate this task, a design life-cycle has been defined including five main phases. Due to the special idiosyncrasy of DW development, a tailored traceability approach is required. Our proposal in this paper is a novel DW traceability approach, driven by its design life-cycle. This approach covers the whole cycle and considers its inter-relationships. This study required (i) the formalization of each life-cycle phase and (ii) the identification of the interactions between and inside these phases. The traceability approach is conducted by two main activities: the identification of trace artifacts and links, materialized in a traceability model, and the recording of the model. The approach is illustrated using the TPC-H and ETL benchmarks. It is implemented using the Postgres DBMS.

Selma Khouri, Kamel Semassel, Ladjel Bellatreche
Specification and Incremental Maintenance of Linked Data Mashup Views

The Linked Data initiative promotes the publication of previously isolated databases as interlinked RDF datasets, thereby creating a global scale data space, known as the Web of Data. Linked Data Mashup applications, which consume data from the multiple Linked Data sources in the Web of Data, are confronted with the challenge of obtaining a homogenized view of this global data space, called a Linked Data Mashup view. This paper proposes an ontology-based framework for formally specifying Linked Data Mashup views, and a strategy for the incremental maintenance of such views, based on their specifications.

Vânia M. P. Vidal, Marco A. Casanova, Narciso Arruda, Mariano Roberval, Luiz Paes Leme, Giseli Rabello Lopes, Chiara Renso
A Model-Driven Approach to Enterprise Data Migration

In a typical data migration project, analysts identify the mappings between source and target data models at a conceptual level using informal textual descriptions. An implementation team translates these mappings into programs that migrate the data. While doing so, the programmers have to understand how the conceptual models and business rules map to physical databases. We propose a modeling mechanism where we can specify conceptual models, physical models and mappings between them in a formal manner. We can also specify rules on conceptual models. From these models and mappings, we can automatically generate a program to migrate data from source to target. We can also generate a program to migrate data access queries from source to target. The overall approach results in a significant improvement in productivity and also a significant reduction in migration errors.

Raghavendra Reddy Yeddula, Prasenjit Das, Sreedhar Reddy
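
A minimal sketch of the generation idea, assuming a hypothetical mapping format and invented table names: once source-to-target mappings are specified formally rather than as informal text, a migration program can be derived mechanically.

```python
# Hypothetical formal mapping between a source and a target table.
mapping = {
    "target": "customer",
    "source": "kunde",
    "columns": {"id": "knr", "name": "kname"},  # target col -> source col
}

def generate_migration_sql(m):
    # Derive a data-migration statement directly from the mapping spec.
    cols = ", ".join(m["columns"])
    exprs = ", ".join(m["columns"].values())
    return f"INSERT INTO {m['target']} ({cols}) SELECT {exprs} FROM {m['source']};"

print(generate_migration_sql(mapping))
```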

Model Conceptualisation and Evolution

Frontmatter
Interactive Recovery of Requirements Traceability Links Using User Feedback and Configuration Management Logs

Traceability links between requirements and source code can assist in software maintenance tasks. Several automatic traceability recovery methods exist. Most of them are similarity-based, recovering links by comparing the representation similarity between requirements and code; they do not work well for links that are independent of the representation similarity. To cover this weakness and improve recovery accuracy, we propose a method that extends the similarity-based approach with two techniques: a log-based traceability recovery method using the configuration management log, and link recommendation from user feedback. These techniques are independent of the representation similarity between requirements and code. As a result of applying our method to a large enterprise system, we successfully improved both recall and precision by more than 20 percentage points compared with applying the similarity-based method alone (recall: 60.2% to 80.4%, precision: 41.1% to 64.8%).

Ryosuke Tsuchiya, Hironori Washizaki, Yoshiaki Fukazawa, Keishi Oshima, Ryota Mibe
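
A rough sketch of how the two evidence sources might be combined, assuming a bag-of-words similarity and a hypothetical boolean co-change signal from the configuration management log; the paper's actual scoring and feedback loop are its own.

```python
import math
from collections import Counter

def cosine(a, b):
    # Bag-of-words cosine similarity: the core of similarity-based recovery.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(x * x for x in va.values())) * math.sqrt(sum(x * x for x in vb.values()))
    return dot / norm if norm else 0.0

def combined_score(req_text, code_text, co_changed, alpha=0.7):
    # Hypothetical combination: textual similarity plus a log-based bonus
    # when the file co-changed with files already linked to the requirement.
    return alpha * cosine(req_text, code_text) + (1 - alpha) * (1.0 if co_changed else 0.0)

print(combined_score("export report as pdf", "class PdfExporter renders report", True))
```
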
Detecting Complex Changes During Metamodel Evolution

Evolution of metamodels can be represented at the finest grain by a trace of atomic changes: add, delete, and update elements. For many applications, such as automatic correction of models when the metamodel evolves, a coarser-grained trace must be inferred, composed of complex changes, each one aggregating several atomic changes. Complex change detection is a challenging task, since multiple sequences of atomic changes may express a single user intention, and complex changes may overlap within the atomic change trace. In this paper, we propose a detection engine for complex changes that simultaneously addresses these two challenges of variability and overlap. We introduce three ranking heuristics to help users decide which overlapping complex changes are likely to be correct. We describe an evaluation of our approach that allows reaching full recall. The precision is improved by our heuristics from 63% and 71% up to 91% and 100% in some cases.

Djamel Eddine Khelladi, Regina Hebig, Reda Bendraou, Jacques Robin, Marie-Pierre Gervais
“We Need to Discuss the Relationship”: Revisiting Relationships as Modeling Constructs

In this paper we propose a novel ontological analysis of relations and relationships based on a re-visitation of a classic problem in the practice of conceptual modeling, namely relationship reification. Although the two terms ‘relation’ and ‘relationship’ are often used interchangeably, we shall assume a radical difference between the two: a relation holds, while a relationship exists. Indeed, the relation holds because the relationship exists. We investigate the ontological nature of relationships as truthmakers, proposing a view according to which they are endurants. Under this view, a relationship is not only responsible (with its existence) for the fact that a relation holds, but it also accounts (with its properties) for the way a relation holds and develops in time.

Nicola Guarino, Giancarlo Guizzardi

Process Mining, Monitoring and Predicting

Frontmatter
PM²: A Process Mining Project Methodology

Process mining aims to transform event data recorded in information systems into knowledge of an organisation’s business processes. The results of process mining analysis can be used to improve process performance or compliance to rules and regulations. However, applying process mining in practice is not trivial. In this paper we introduce PM², a methodology to guide the execution of process mining projects. We successfully applied PM² during a case study within IBM, a multinational technology corporation, where we identified potential process improvements for one of their purchasing processes.

Maikel L. van Eck, Xixi Lu, Sander J. J. Leemans, Wil M. P. van der Aalst
Completing Workflow Traces Using Action Languages

The capability to monitor process and service executions, which has notably increased in recent decades due to the growing adoption of IT systems, has led to the diffusion of several reasoning-based tools for the analysis of process executions. Nevertheless, in many real cases, the differing degrees of abstraction of models and IT data, the lack of IT support for all the steps of the model, as well as information hiding, result in process execution data conveying only incomplete information concerning the process-level activities. This may hamper the capability to analyse and reason about process executions. This paper presents a novel approach to recover missing information about process executions, relying on a reformulation in terms of a planning problem.

Chiara Di Francescomarino, Chiara Ghidini, Sergio Tessaris, Itzel Vázquez Sandoval
A Novel Top-Down Approach for Clustering Traces

In recent years, workflow discovery has become an important research topic in the business process mining area. However, existing workflow discovery techniques encounter challenges when dealing with event logs stemming from highly flexible environments, because such logs contain many different behaviors. As a result, inaccurate and complex process models might be obtained. In this paper we propose a new technique which searches for the optimal way of clustering traces among all possible solutions. By applying existing workflow discovery techniques to the traces of each cluster discovered by our method, more accurate and simpler sub-models can be obtained.

Yaguang Sun, Bernhard Bauer
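
As a rough illustration of trace clustering in general (not the paper's optimization-based search), the sketch below represents traces as activity-frequency vectors and groups them greedily; discovery would then run per cluster. The radius threshold and data are invented.

```python
from collections import Counter

def profile(trace, alphabet):
    # Represent a trace by its activity-frequency vector.
    counts = Counter(trace)
    return tuple(counts[a] for a in alphabet)

def dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def cluster(vectors, radius=1.5):
    # Greedy threshold clustering: a crude stand-in for searching the
    # space of possible clusterings.
    centroids, clusters = [], []
    for v in vectors:
        for i, c in enumerate(centroids):
            if dist(v, c) <= radius:
                clusters[i].append(v)
                break
        else:
            centroids.append(v)
            clusters.append([v])
    return clusters

traces = [["a", "b", "c"], ["a", "b", "b", "c"], ["x", "y"], ["x", "y", "y"]]
alphabet = sorted({e for t in traces for e in t})
print(cluster([profile(t, alphabet) for t in traces]))  # two clusters
```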

Intra- and Inter-Organizational Process Engineering

Frontmatter
Extracting Decision Logic from Process Models

Although it is not considered good practice, many process models from practice contain detailed decision logic, encoded through control flow structures. This often results in spaghetti-like and complex process models and reduces their maintainability. In this context, the OMG proposes to use the Decision Model and Notation (DMN) in combination with BPMN in order to reach a separation of concerns. This paper introduces a semi-automatic approach to (i) identify decision logic in process models, (ii) derive a corresponding DMN model and adapt the original process model by replacing the decision logic accordingly, and (iii) allow final configuration of this result during post-processing. This approach enables business organizations to migrate already existing BPMN models. We evaluate this approach by implementation, by a semantic comparison of the decision-taking process before and after applying the approach, and by an empirical analysis of industry process models.

Kimon Batoulis, Andreas Meyer, Ekaterina Bazhenova, Gero Decker, Mathias Weske
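
To illustrate the kind of refactoring the abstract describes, here is a toy sketch that turns an XOR-gateway's branch conditions into a DMN-style decision table; the gateway encoding and field names are invented, and the paper's identification step operates on real BPMN models.

```python
# Hypothetical XOR gateway whose branch conditions encode decision logic.
gateway = {
    "input": "order.amount",
    "branches": [("<= 1000", "auto_approve"), ("> 1000", "manual_review")],
}

def to_decision_table(gw):
    # Lift control-flow conditions into DMN-style rules (condition -> outcome).
    return [
        {"condition": f"{gw['input']} {cond}", "outcome": out}
        for cond, out in gw["branches"]
    ]

for rule in to_decision_table(gateway):
    print(rule)
```
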
Equivalence Transformations for the Design of Interorganizational Data-Flow

Distributed interorganizational processes can be designed by first creating a global process, which is then split into processes or views for each participant. Existing methods for automating this transformation concentrate on the control flow and either neglect the data flow or address it only partially. Even for small interorganizational processes, there is a considerably large number of potential realizations of the data flow. We analyze in depth the problem of generating message exchanges to realize the data flow, and present a solution for constructing data flows which are optimal with respect to given design objectives. The approach is based on a definition of the correctness of data flow and a complete set of transformations which preserve correctness and allow searching for an optimal solution starting from a generated correct solution.

Julius Köpke, Johann Eder
Automatic Generation of Optimized Process Models from Declarative Specifications

Process models are often generic, i.e., they describe similar cases or contexts. For instance, a process model for commissioning can cover both vehicles with an automatic and with a manual transmission, by executing alternative tasks. A generic process model is not optimal compared to one tailored to a specific context. Given a declarative specification of the constraints and a specific context, we study how to automatically generate a good process model and propose a novel approach. We focus on the restricted case where no task is repeated, as is the case in commissioning and elsewhere, e.g., in manufacturing. Our approach uses a probabilistic search to find a good process model according to quality criteria. It can handle complex real-world specifications containing several hundred constraints and more than one hundred tasks. The process models generated with our scheme are superior (nearly twice as fast) to ones designed by hand by professional modelers.

Richard Mrasek, Jutta Mülle, Klemens Böhm

Process Compliance and Alignment

Frontmatter
Towards the Automated Annotation of Process Models

Many techniques for the advanced analysis of process models build on the annotation of process models with elements from predefined vocabularies such as taxonomies. However, the manual annotation of process models is cumbersome and sometimes hardly manageable given the size of taxonomies. In this paper, we present the first approach for automatically annotating process models with the concepts of a taxonomy. Our approach builds on the corpus-based method of second-order similarity, different similarity functions, and a Markov Logic formalization. An evaluation with a set of 12 process models consisting of 148 activities and the PCF taxonomy consisting of 1,131 concepts demonstrates that our approach produces satisfying results.

Henrik Leopold, Christian Meilicke, Michael Fellmann, Fabian Pittke, Heiner Stuckenschmidt, Jan Mendling
Discovery and Validation of Queueing Networks in Scheduled Processes

Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today’s economies. Conceptual models of such service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. Automatic mining of such operational models becomes feasible in the presence of event-data traces. In this work, we target the mining of models that assume a resource-driven perspective and focus on queueing effects. We propose a solution for the discovery and validation problem of scheduled service processes - processes with a predefined schedule for the execution of activities. Our prime example for such processes are complex outpatient treatments that follow prior appointments. Given a process schedule and data recorded during process execution, we show how to discover Fork/Join networks, a specific class of queueing networks, and how to assess their operational validity. We evaluate our approach with a real-world dataset comprising clinical pathways of outpatient clinics, recorded by a real-time location system (RTLS). We demonstrate the value of the approach by identifying and explaining operational bottlenecks.

Arik Senderovich, Matthias Weidlich, Avigdor Gal, Avishai Mandelbaum, Sarah Kadish, Craig A. Bunnell
Verification and Validation of UML Artifact-Centric Business Process Models

This paper presents a way of checking the correctness of artifact-centric business process models defined using the BAUML framework. To ensure that these models are free of errors, we propose an approach to verify (i.e. there are no internal mistakes) and to validate them (i.e. the model complies with the business requirements). This approach is based on translating these models into logic and then encoding the desirable properties as satisfiability problems of derived predicates. In this way, we can then use a tool to check if these properties are fulfilled.

Montserrat Estañol, Maria-Ribera Sancho, Ernest Teniente

Enterprise IT Integration and Management

Frontmatter
Empirical Challenges in the Implementation of IT Portfolio Management: A Survey in Three Companies

The study explores the implementation challenges of Information Technology (IT) portfolio management in three companies. The portfolio approach to IT assets is significant for enabling organisations to make effective use of limited resources by prioritising IT initiatives and also for monitoring and evaluating their performance. In practice, the process facilitates the provision of necessary information for decision makers, allowing them to make rational decisions about IT investments. We found that there is a significant gap between IT portfolio management as discussed in the literature and its actual practice. The analysis showed that there was high flexibility when specifying IT projects, which caused companies to implement IT portfolios that were too broad. As a consequence, resources were not effectively utilised, and IT portfolio evaluations post implementation were rarely conducted. Our research contribution identifies important gaps to be filled in the literature and presents case studies related to IT portfolio management.

Lucy Ellen Lwakatare, Pasi Kuvaja, Harri Haapasalo, Arto Tolonen
Integration Adapter Modeling

Integration Adapters are a fundamental part of an integration system, since they provide (business) applications access to its messaging channel. However, their modeling and configuration remain under-represented. In previous work, the integration control and data flow syntax and semantics have been expressed in the Business Process Model and Notation (BPMN) as a semantic model for message-based integration, while adapter and the related quality of service modeling were left for further studies.

In this work we specify common adapter capabilities and derive general modeling patterns, for which we define a compliant representation in BPMN. The patterns extend previous work by the adapter flow, evaluated syntactically and semantically for common adapter characteristics.

Daniel Ritter, Manuel Holzleitner

Service Science and Computing

Frontmatter
Modelling Service Level Agreements for Business Process Outsourcing Services

Many proposals to model service level agreements (SLAs) have been elaborated in order to automate different stages of the service lifecycle such as monitoring, implementation or deployment. All of them have been designed for computational services and are not well-suited for other types of services such as business process outsourcing (BPO) services. However, BPO services supported by process-aware information systems could also benefit from modelling SLAs in tasks such as performance monitoring, human resource assignment or process configuration. In this paper, we identify the requirements for modelling such SLAs and detail how they can be faced by combining techniques used to model computational SLAs, business processes, and process performance indicators. Furthermore, our approach has been validated through the modelling of several real BPO SLAs.

Adela del-Río-Ortega, Antonio Manuel Gutiérrez, Amador Durán, Manuel Resinas, Antonio Ruiz-Cortés
Deriving Artefact-Centric Interfaces for Overloaded Web Services

We present a novel framework and algorithms for the analysis of Web service interfaces to improve the efficiency of application integration in wide-spanning business networks. Our approach addresses the notorious issue of large and overloaded operational signatures, which are becoming increasingly prevalent on the Internet and are being opened up for third-party service aggregation. Extending existing techniques that refactor service interfaces based on derived artefacts of applications, namely business entities, we propose heuristics for deriving relations between business entities and, in turn, the permissible orders in which operations are invoked. As a result, service operations are refactored around business entity CRUD operations, from which behavioural models are generated, thus supporting fine-grained and flexible service discovery, composition and interaction. A prototypical implementation and analysis of web services, including those of commercial logistic systems (FedEx), are used to validate the algorithms and open up further insights into service interface synthesis.

Fuguo Wei, Alistair Barros, Chun Ouyang
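
A toy sketch of the grouping step behind such refactoring: bucket an overloaded interface's operations by business entity and CRUD verb. The operation names, verb list, and naming heuristic are invented for illustration, not the paper's algorithms.

```python
import re

OPERATIONS = ["createShipment", "getShipmentStatus", "updateShipment", "deleteShipment"]
VERBS = {"create": "C", "get": "R", "read": "R", "update": "U", "delete": "D"}

def classify(op):
    # Heuristic split of an operation name into (business entity, CRUD verb).
    m = re.match(r"(create|get|read|update|delete)([A-Z]\w*)", op)
    return (m.group(2), VERBS[m.group(1)]) if m else (op, "?")

for op in OPERATIONS:
    print(op, "->", classify(op))
```
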
Backmatter
Metadata
Title
Advanced Information Systems Engineering
Edited by
Jelena Zdravkovic
Marite Kirikova
Paul Johannesson
Copyright Year
2015
Electronic ISBN
978-3-319-19069-3
Print ISBN
978-3-319-19068-6
DOI
https://doi.org/10.1007/978-3-319-19069-3
