
2013 | Book

Advanced Information Systems Engineering

25th International Conference, CAiSE 2013, Valencia, Spain, June 17-21, 2013. Proceedings

Edited by: Camille Salinesi, Moira C. Norrie, Óscar Pastor

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 25th International Conference on Advanced Information Systems Engineering, CAiSE 2013, held in Valencia, Spain, in June 2013.

The 44 revised full papers were carefully reviewed and selected from 162 submissions. The contributions have been grouped into the following topical sections: services; awareness; business process execution; products; business process modelling; modelling languages and meta models; requirements engineering 1; enterprise architecture; information systems evolution; mining and predicting; data warehouses and business intelligence; requirements engineering 2; knowledge and know-how; information systems quality; and human factors.

Table of Contents

Frontmatter

Services

Cloud Computing Automation: Integrating USDL and TOSCA

Standardization efforts to simplify the management of cloud applications are being conducted in isolation. The objective of this paper is to investigate to what extent two promising specifications, USDL and TOSCA, can be integrated to automate the lifecycle of cloud applications. In our approach, we selected a commercial SaaS CRM platform, modeled it using the service description language USDL, modeled its cloud deployment using TOSCA, and constructed a prototypical platform to integrate service selection with deployment. Our evaluation indicates that a high level of integration is possible. We were able to fully automate the remote deployment of a cloud service after it was selected by a customer in a marketplace. Architectural decisions emerged during the construction of the platform, relating to global service identification and access, multi-layer routing, and dynamic binding.

Jorge Cardoso, Tobias Binz, Uwe Breitenbücher, Oliver Kopp, Frank Leymann
A Business Protocol Unit Testing Framework for Web Service Composition

Unit testing is a critical step in the development lifecycle of business processes for ensuring product reliability and dependability. Although plenty of unit testing approaches for WS-BPEL have been proposed, only a few of them have designed and implemented a runnable unit testing framework, and none provides a technique for systematically specifying and testing the causal and temporal dependencies between the process-under-test and its partner services. In this paper, we propose a novel approach and framework for specifying and testing the inter-dependencies between the process-under-test and its partner services. The dependency constraints defined in the business protocol are declaratively specified using a pattern-based high-level language, and an FSA-based approach is proposed for detecting violations of these constraints. A testing framework that integrates with the Java Finite State Machine framework has been implemented to support the specification of both dependency constraints and test cases, as well as the execution and result analysis of test cases.

Jian Yu, Jun Han, Steven O. Gunarso, Steve Versteeg
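
To make the FSA-based detection idea concrete, here is a minimal, hypothetical sketch: a dependency constraint (the precedence pattern "pay may only occur after reserve") is compiled into a small automaton that consumes an observed message trace and reports the first violation. All names and the example constraint are invented for illustration; the paper's framework builds on the Java Finite State Machine framework rather than this toy Python class.

```python
# Hypothetical sketch of FSA-based checking of a dependency constraint
# between a process-under-test and a partner service.
class ConstraintFSA:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # {(state, event): next_state}
        self.start = start
        self.accepting = accepting

    def check(self, trace):
        """Return (compliant, index of first violation or None)."""
        state = self.start
        for i, event in enumerate(trace):
            # Events the constraint does not mention leave the state unchanged.
            state = self.transitions.get((state, event), state)
            if state == "violated":
                return False, i
        return state in self.accepting, None

# Precedence pattern: "pay" may only occur after "reserve" has occurred.
precedence = ConstraintFSA(
    transitions={("init", "reserve"): "reserved", ("init", "pay"): "violated"},
    start="init",
    accepting={"init", "reserved"},
)
print(precedence.check(["reserve", "pay"]))  # (True, None)
print(precedence.check(["pay", "reserve"]))  # (False, 0) -- violation at position 0
```
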
Secure and Privacy-Preserving Execution Model for Data Services

Data services have almost become a standard way for data publishing and sharing on top of the Web. In this paper, we present a secure and privacy-preserving execution model for data services. Our model controls the information returned during service execution based on the identity of the data consumer and the purpose of the invocation. We implemented and evaluated the proposed model in the healthcare application domain. The obtained results are promising.

Mahmoud Barhamgi, Djamal Benslimane, Said Oulmakhzoune, Nora Cuppens-Boulahia, Frederic Cuppens, Michael Mrissa, Hajer Taktak

Awareness

Enabling the Analysis of Cross-Cutting Aspects in Ad-Hoc Processes

Processes in case management applications are flexible, knowledge-intensive and people-driven, and often used as guides for workers in the processing of artifacts. An important aspect is the evolution of process artifacts over time as they are touched by different people in the context of a knowledge-intensive process. This highlights the need for tracking process artifacts in order to find out their history (artifact versioning) and their provenance (where they come from, and who touched them and did what with them). We present a framework, simple abstractions and a language for analyzing cross-cutting aspects (in particular versioning and provenance) over process artifacts. We introduce two concepts: timed-folders, which represent the evolution of artifacts over time, and activity-paths, which represent the process that led to an artifact. The introduced approaches have been implemented on top of FPSPARQL, a Folder-Path enabled extension of SPARQL, and experimentally validated on real-world datasets.

Seyed-Mehdi-Reza Beheshti, Boualem Benatallah, Hamid Reza Motahari-Nezhad
Context-Aware UI Component Reuse

Adapting user interfaces (UIs) to various contexts, such as the exploding number of different devices, has become a major challenge for UI developers. The support offered by current development environments for UI adaptation is limited, as is the support for the efficient creation of UIs in Web service-based applications. In this paper, we describe an approach where, based on a given context, a complete user interface is suggested. We demonstrate the approach for the example of a SOA environment. The suggestions are created by a rule-based recommender system, which combines Web service-bound UI elements with other UI building blocks. The approach has been implemented and evaluated by simulating the development of 115 SAP UI screens.

Kerstin Klemisch, Ingo Weber, Boualem Benatallah
Internet of Things-Aware Process Modeling: Integrating IoT Devices as Business Process Resources

The Internet of Things (IoT) has grown in recent years into a major branch of research: RFID, sensors and actuators as typical IoT devices are increasingly used as resources integrated into new value-added applications of the Future Internet and are intelligently combined using standardised software services. While most of the current work on IoT integration focuses on the actual technical implementation, little attention has been given to integrating the IoT paradigm and its devices, which come with native software components, as resources in the business processes of traditional enterprise resource planning systems. In this paper, we identify and integrate IoT resources as a novel automatic resource type on the business process layer, beyond the classical human task-centric view of the business process model, in order to face the expanding resource planning challenges of future enterprise environments.

Sonja Meyer, Andreas Ruppen, Carsten Magerkurth

Business Process Execution

Generating Multi-objective Optimized Business Process Enactment Plans

Declarative business process (BP) models are increasingly used, allowing their users to specify what has to be done instead of how. Due to their flexible nature, there are several enactment plans related to a specific declarative model, each one presenting specific values for different objective functions, e.g., completion time or profit. In this work, a method for generating optimized BP enactment plans from declarative specifications is proposed to optimize the performance of a process considering multiple objectives. The plans can be used for different purposes, e.g., providing recommendations. The proposed approach is validated through an empirical evaluation based on a real-world case study.

Andrés Jiménez-Ramírez, Irene Barba, Carmelo del Valle, Barbara Weber
Supporting Risk-Informed Decisions during Business Process Execution

This paper proposes a technique that supports process participants in making risk-informed decisions, with the aim of reducing process risks. Risk reduction involves decreasing both the likelihood that a process fault will occur and its severity. Given a process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process and, whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we prompt the participant with the expected risk that a given fault will occur given the particular input. These risks are predicted by traversing decision trees generated from the logs of past process executions, considering process data, involved resources, task durations and contextual information such as task frequencies. The approach has been implemented in the YAWL system and its effectiveness evaluated. The results show that the process instances executed in the tests complete with significantly fewer faults and with lower fault severities when the recommendations provided by our technique are taken into account.

Raffaele Conforti, Massimiliano de Leoni, Marcello La Rosa, Wil M. P. van der Aalst
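
As a rough illustration of the prediction step only: the sketch below trains a decision tree on features of past executions (process data, resource, task duration) labelled with whether a fault occurred, then queries it for the expected fault likelihood of a candidate input. Feature names and data are invented, and this is not the authors' YAWL implementation.

```python
# Hypothetical sketch: estimating fault likelihood for a candidate user
# input from a decision tree trained on past executions.
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Columns: [loan_amount, resource_id, task_duration_min]; label: fault (0/1)
X = np.array([[5000, 1, 30], [90000, 2, 240], [7000, 1, 45],
              [120000, 3, 300], [8000, 2, 50], [95000, 1, 260]])
y = np.array([0, 1, 0, 1, 0, 1])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The input the participant is about to provide.
candidate = np.array([[100000, 2, 250]])
fault_prob = tree.predict_proba(candidate)[0][1]  # probability of class "fault"
print(f"expected fault likelihood: {fault_prob:.2f}")
```
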
A Methodology for Designing Events and Patterns in Fast Data Processing

Complex Event Processing handles the processing of a large number of heterogeneous events and pattern detection over multiple event streams in real time. Situations of interest are modeled using event patterns, which describe a specific situation in an event processing language. In order to leverage the use of event processing in everyday situations, a clear methodology for the identification and definition of events and event patterns is needed. In this paper, we propose an end-to-end methodology for designing event processing systems. This methodology integrates domain knowledge modeled during the setup phase of event processing with a high-level event pattern language which allows users to create specific business-related patterns. In addition, our methodology accounts for the fact that some patterns might have to be defined by technical experts, and therefore introduces an actor model. Our approach is validated on a real use case of a supplier of convenience stores.

Dominik Riemer, Nenad Stojanovic, Ljiljana Stojanovic

Products

A Hybrid Model Words-Driven Approach for Web Product Duplicate Detection

The detection of product duplicates is one of the challenges that Web shop aggregators are currently facing. In this paper, we focus on solving the problem of product duplicate detection on the Web. Our proposed method extends a state-of-the-art solution that uses the model words in product titles to find duplicate products. First, we employ the aforementioned algorithm in order to find matching product titles. If no matching title is found, our method continues by computing similarities between the two product descriptions. These similarities are based on the product attribute keys and on the product attribute values. Furthermore, instead of only extracting model words from the title, our method also extracts model words from the product attribute values. Based on our experimental results on real-world data gathered from two existing Web shops, we show that the proposed method, in terms of F1-measure, significantly outperforms the existing state-of-the-art title model words method and the well-known TF-IDF method.

Marnix de Bakker, Flavius Frasincar, Damir Vandic
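
For reference, the TF-IDF baseline mentioned above can be sketched in a few lines: two products are flagged as duplicates when the cosine similarity of the TF-IDF vectors of their textual representations exceeds a threshold. The product texts and the 0.6 threshold below are invented for illustration.

```python
# Sketch of a TF-IDF duplicate-detection baseline over product texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

product_a = "Samsung UN46ES6100 46-inch LED HDTV 1080p 120Hz"
product_b = "Samsung 46 inch LED TV 1080p 120 Hz UN46ES6100"

vectors = TfidfVectorizer().fit_transform([product_a, product_b])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"similarity = {similarity:.2f}, duplicate = {similarity > 0.6}")
```
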
The Anatomy of a Sales Configurator: An Empirical Study of 111 Cases

Nowadays, mass customization has been embraced by a large portion of the industry. As a result, the web abounds with sales configurators that help customers tailor all kinds of goods and services to their specific needs. In many cases, configurators have become the single entry point for placing customer orders. As such, they are strategic components of companies’ information systems and must meet stringent reliability, usability and evolvability requirements. However, the state of the art lacks guidelines and tools for efficiently engineering web sales configurators. To tackle this problem, empirical data on current practice is required. The first part of this paper reports on a systematic study of 111 web sales configurators along three essential dimensions: rendering of configuration options, constraint handling, and configuration process support. Based on this, the second part highlights good and bad practices in engineering web sales configurators. The reported quantitative and qualitative results open avenues for the elaboration of methodologies to (re-)engineer web sales configurators.

Ebrahim Khalil Abbasi, Arnaud Hubaux, Mathieu Acher, Quentin Boucher, Patrick Heymans
Modeling Personalized Adaptive Systems

A new theoretical framework for the conceptual modeling of personalized and context-aware systems is described, which supports specifying customization for individual users and analyzing the interaction between the domain context and functionality. An initial taxonomy of models is proposed based on the concept of personalized requirements. Two layers of human-centric models are proposed: an individual user characteristics layer for adaptation in assistive technology, learning and learning support systems, and an individual values and personal goals layer to tailor applications to personal requirements. Practical application of the modeling framework is illustrated in a healthcare case study of a personalized, self-adaptive context-aware system.

Alistair Sutcliffe, Pete Sawyer

Business Process Modelling

Decomposition Driven Consolidation of Process Models

Oftentimes business processes exist not as singular entities that can be managed in isolation, but as families of variants that need to be managed together. When it comes to modelling these variants, analysts are faced with the dilemma of whether to model each variant separately or to model multiple or all variants as a single model. The former option leads to a proliferation of models that share common parts, causing redundancy and possible inconsistency. The latter approach leads to fewer but more complex models, thus hindering their comprehensibility. This paper presents a decomposition driven method to capture a family of process variants in a consolidated manner, taking the above trade-off into account. We applied our method in a case study in the banking sector, where a 50% reduction in duplication was achieved.

Fredrik Milani, Marlon Dumas, Raimundas Matulevičius
Analyzing Business Process Architectures

In recent years, Business Process Management has gained maturity in private and public organizations. Organizations own large process collections, and organizing, analyzing, and managing them is becoming more complex. In the course of this development, research on Business Process Architectures has received more attention over the last decade. A Business Process Architecture describes the relationships between business processes within a process collection as well as the guidelines to organize them. However, formalization and verification techniques are still missing in this context. To close this gap, we propose a novel Petri net based formalization of Business Process Architectures. Based on this, we can resort to known Petri net verification techniques for the analysis of Business Process Architecture patterns and anti-patterns with regard to their structural and behavioral properties. Our methodology is evaluated on a real use case from public administration.

Rami-Habib Eid-Sabbagh, Mathias Weske
Eye-Tracking the Factors of Process Model Comprehension Tasks

Understanding business process models has previously been related to various factors. Those factors were determined using statistical approaches, either on model repositories or in experiments based on comprehension questions. We noticed that, when asked a comprehension question about a process model, an expert usually explores only a part of the entire model to provide the answer. This paper formalizes this observation under the notion of Relevant Region. We conduct an eye-tracking experiment to show that the Relevant Region is indeed correlated with the answer given to the comprehension question. We also give evidence that it is possible to predict whether the correct answer will be given to a comprehension question, knowing the number of fixations on Relevant Region elements and the time spent fixating them. This paper sets the foundations for future improvements in model comprehension research and practice.

Razvan Petrusel, Jan Mendling
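
A minimal sketch of the two predictors just mentioned, assuming a rectangular Relevant Region and a list of fixation points from an eye tracker; region coordinates and fixation data are invented for illustration.

```python
# Computing the two predictors used in the study: number of fixations on
# Relevant Region elements and total time spent fixating them.
relevant_region = (120, 80, 400, 260)  # (x_min, y_min, x_max, y_max) in pixels

# (x, y, duration_ms) fixations recorded by an eye tracker (invented data).
fixations = [(150, 100, 220), (500, 300, 180), (300, 200, 340), (90, 70, 120)]

def in_region(x, y, region):
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

hits = [(x, y, d) for x, y, d in fixations if in_region(x, y, relevant_region)]
fixation_count = len(hits)
dwell_ms = sum(d for _, _, d in hits)
print(f"fixations on Relevant Region: {fixation_count}, dwell time: {dwell_ms} ms")
```
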

Modelling Languages and Meta Models

On the Search for a Level-Agnostic Modelling Language

The use of models is increasing in software engineering, especially within the MDE initiative. Models are usually communicated by visualizing them, typically using a graphical modelling language. The architecture commonly used to standardize a software engineering modelling language utilizes multiple levels despite the fact that the basic assumptions are only valid for a pair of levels. This has led several research groups to seek a means by which modelling languages can be created, and later standardized, without resorting to ‘fixes’ necessitated by the use of strict metamodelling and a multilevel hierarchy. Here, we describe a novel single-level approach based on ‘everything is an object’, which permits effective flattening of such a hierarchy, thus obviating all the paradoxical concerns in the literature over the last two decades.

Brian Henderson-Sellers, Tony Clark, Cesar Gonzalez-Perez
WSSL: A Fluent Calculus-Based Language for Web Service Specifications

In order to effectively discover and invoke a Web service, the provider must supply a complete specification of its behavior, with regard to its inputs, outputs, preconditions and effects. Devising such complete specifications comes with many issues that have not been adequately addressed by current service description efforts, such as WSDL, SAWSDL, OWL-S and WSMO. These issues involve the frame, ramification and qualification problems, which deal with the succinct and flexible representation of non-effects, indirect effects and preconditions, respectively. We propose WSSL, a novel specification language for services, based on the fluent calculus, that is expressly designed to address the aforementioned issues. Also, a tool is implemented that translates WSSL specifications to FLUX programs and allows for service validation based on user-defined goals.

George Baryannis, Dimitris Plexousakis
Enabling the Collaborative Definition of DSMLs

Software development processes are collaborative in nature. Neglecting the key role of end-users leads to software that does not satisfy their needs. This collaboration becomes especially important when creating Domain-Specific Modeling Languages (DSMLs), which are (modeling) languages specifically designed to carry out the tasks of a particular domain. While end-users are actually the experts of the domain for which a DSML is developed, their participation in the DSML specification process is still rather limited nowadays. In this paper we propose a more community-aware language development process by enabling the active participation of all community members (both developers and end-users of the DSML) from the very beginning. Our proposal is based on a DSML itself, called Collaboro, which allows representing change proposals on the DSML design and discussing (and tracing back) possible solutions, comments and decisions that arise during the collaboration.

Javier Luis Cánovas Izquierdo, Jordi Cabot

Requirements Engineering 1

Formal Methods for Exchange Policy Specification

This paper introduces a modelling framework to perform automatic analyses on the specification of an information exchange policy. To avoid the increase of development costs and risks of uncontrolled dissemination of information, the specification errors need to be detected before the implementation phase. We propose a minimalist core language to unambiguously represent an exchange policy specification and a gateway to logic solvers to verify some properties, namely: completeness, consistency, applicability and minimality. The aim is to check whether the formalisation of an exchange policy is consistent with user expectations.

Rémi Delmas, Thomas Polacsek
Diagnostic Information for Compliance Checking of Temporal Compliance Requirements

Compliance checking is gaining importance as today’s organizations need to show that operational processes are executed in a controlled manner while satisfying predefined (legal) requirements or service level agreements. Deviations may be costly and expose an organization to severe risks. Compliance checking is of growing importance for the business process management and auditing communities. This paper presents an approach for checking the compliance of observed process executions recorded in an event log with temporal compliance requirements, which restrict when particular activities may or may not occur. We show how temporal compliance requirements discussed in the literature can be unified and formalized using a generic temporal compliance rule. To check compliance with respect to a temporal rule, the event log describing the observed behavior is aligned with the rule. The alignment then shows which events occurred out of order and which events deviated, and by how much time, from the prescribed behavior. This approach integrates with an existing approach for control-flow compliance checking, allowing for multi-perspective diagnostic information in case of compliance violations. We show the feasibility of our technique by checking temporal compliance rules on real-life event logs.

Elham Ramezani Taghiabadi, Dirk Fahland, Boudewijn F. van Dongen, Wil M. P. van der Aalst
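
As a simplified illustration of a temporal compliance rule (not the authors' alignment technique), the sketch below checks the rule "B must occur within max_delay after A" over a toy event log and reports by how much each violating case deviates. All case data is invented.

```python
# Checking one temporal compliance rule over an event log and reporting
# the amount of time by which violations deviate from the rule.
from datetime import datetime, timedelta

log = [  # (case_id, activity, timestamp) -- invented example data
    ("c1", "A", datetime(2013, 6, 17, 9, 0)),
    ("c1", "B", datetime(2013, 6, 17, 9, 20)),
    ("c2", "A", datetime(2013, 6, 17, 10, 0)),
    ("c2", "B", datetime(2013, 6, 17, 11, 45)),
]

def check_rule(log, a="A", b="B", max_delay=timedelta(hours=1)):
    by_case = {}
    for case, act, ts in log:
        by_case.setdefault(case, {})[act] = ts
    for case, events in sorted(by_case.items()):
        if a in events and b in events:
            delay = events[b] - events[a]
            if delay > max_delay:
                print(f"{case}: violated, B was {delay - max_delay} too late")
            else:
                print(f"{case}: compliant (delay {delay})")

check_rule(log)  # c1 compliant, c2 violated by 45 minutes
```
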
A Quantitative Analysis of Model-Driven Code Generation through Software Experimentation

Recent research results have shown that Model-Driven Development (MDD) is a beneficial approach to developing software systems. The reduction of development time enabled by code generation mechanisms is often acknowledged as an important benefit to be further explored. This paper reports on an experiment in which an MDD-based approach using code generation from models is compared with manual coding based on the classic life-cycle. In this experiment, groups of senior students from Computer Science and Computer Engineering undergraduate programs implemented a web application using both approaches, and we evaluated the performance of the groups in quantitative terms. The results showed that the development time when code generation was applied was consistently shorter. The participants also indicated that they encountered fewer difficulties when applying code generation.

Paulo Eduardo Papotti, Antonio Francisco do Prado, Wanderley Lopes de Souza, Carlos Eduardo Cirilo, Luís Ferreira Pires

Enterprise Architecture

ROAD4SaaS: Scalable Business Service-Based SaaS Applications

Software-as-a-Service (SaaS) is a software delivery model gaining popularity. Service Oriented Architecture (SOA) is widely used to construct SaaS applications due to the complementary characteristics of the two paradigms. Scalability has always been one of the major requirements in designing SaaS applications to meet fluctuating demand. However, constructing SaaS applications from third-party business services raises additional challenges for the scalability of the application due to the partner services’ variability and autonomy. Any approach used to develop scalable service-based SaaS applications that compose business services needs to consider these characteristics. In this paper we present an approach to deploying scalable business service compositions based on the concept of an extensible hierarchy of virtual organisations. The explicit representation of relationships in the organisation allows capturing commonalities and variations in the relationships between business services, while its extensibility allows scaling the SaaS application instance out and in.

Malinda Kapuruge, Jun Han, Alan Colman, Indika Kumara
A Multi-perspective Framework for Web API Search in Enterprise Mashup Design

Enterprise mashups are agile applications which combine enterprise resources with other external applications or web services, by selecting and aggregating Web APIs provided by third parties. In this paper, we provide a framework based on different Web API features to support Web API search and reuse in enterprise mashup design. The framework operates according to a novel perspective, focused on the experience of web designers, who used the Web APIs to develop enterprise mashups. This new perspective is used jointly with other Web API search techniques, relying on classification features, like categories and tags, and technical features, like the Web API protocols and data formats. This enables designers, who as humans learn by examples, to exploit the collective knowledge which is based on past experiences of other designers to find the right Web APIs for a target application. We also present a preliminary evaluation of the framework.

Devis Bianchini, Valeria De Antonellis, Michele Melchiori
Modeling Business Capabilities and Context Dependent Delivery by Cloud Services

Contemporary business environments are changing rapidly, organizations are global, and cloud-based services have become the norm. Enterprises operating in these conditions need the capability to deliver their business in a variety of business contexts. Capability delivery thus has to be monitored and adjusted. Current Enterprise Modeling (EM) approaches do not address context-dependent capability design and do not explicitly support runtime adjustments. To address this challenge, a capability-driven approach is proposed to model business capabilities using EM techniques, and to use model-based patterns to describe how software applications can adapt to changes in the execution context. A meta-model for capability design and delivery is presented, with consideration given to delivering solutions as cloud services. The proposal is illustrated with an example case from an energy efficiency project. A supporting architecture for capability development and delivery in the cloud is also presented.

Jelena Zdravkovic, Janis Stirna, Martin Henkel, Jānis Grabis

Information Systems Evolution

Enabling Ad-hoc Business Process Adaptations through Event-Driven Task Decoupling

The ability to adapt running process instances is a key requirement for handling exceptions in service orchestrations. The design of the orchestration middleware and its underlying meta-model plays an important role in fulfilling this requirement. However, current service orchestration middleware such as BPEL engines suffer from imperative and tightly coupled task execution mechanisms, making it difficult to adapt running process instances. In this paper we present a novel service orchestration middleware and its underlying meta-model to overcome this limitation. To achieve this, we combine the benefits of the models@runtime concept with the event-driven publish-subscribe mechanism. We evaluate our approach with respect to its support for process instance adaptation and compare its performance to an existing orchestration runtime.

Malinda Kapuruge, Jun Han, Alan Colman, Indika Kumara
Analyzing and Completing Middleware Designs for Enterprise Integration Using Coloured Petri Nets

Enterprise Integration Patterns allow us to design a middleware system conceptually before actually implementing it. So far, the in-depth analysis of such a design has not been feasible, as these patterns are only described informally. We introduce a translation of each of these patterns into a Coloured Petri Net, which allows us to investigate and improve middleware system designs in early stages of development in a number of use cases, including validation and performance analysis using simulation, automatic completion of control-flow in middleware designs, verification of a design for errors and functional properties, and automatic derivation of an implementation.

Dirk Fahland, Christian Gierds
Managing the Evolution and Customization of Database Schemas in Information System Ecosystems

We present an approach that supports the customization and evolution of a database schema in a software ecosystem context. The approach allows for the creation of customized database schemas according to selected, supported feature packs and can be used in an ecosystem context, where third-party providers and customers augment the system with their own capabilities.

The creation of the final database schema is automatic, and the relevant updates of individual feature packs can also be handled automatically by the system.

Hendrik Brummermann, Markus Keunecke, Klaus Schmid

Mining and Predicting

A Knowledge-Based Integrated Approach for Discovering and Repairing Declare Maps

Process mining techniques can be used to discover process models from event data. Often the resulting models are complex due to the variability of the underlying process. Therefore, we aim at discovering declarative process models that can deal with such variability. However, for real-life event logs involving dozens of activities and hundreds or thousands of cases, there are often many potential constraints, resulting in cluttered diagrams. Therefore, we propose various techniques to prune these models and remove constraints that are not interesting or are implied by other constraints. Moreover, we show that domain knowledge (e.g., a reference model or a grouping of activities) can be used to guide the discovery approach. The approach has been implemented in the process mining tool ProM and evaluated using an event log from a large Dutch hospital. Even in such highly variable environments, our approach can discover understandable declarative models.

Fabrizio M. Maggi, R. P. Jagadeesh Chandra Bose, Wil M. P. van der Aalst
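
One way to picture the pruning of implied constraints is via the standard Declare subsumption hierarchy (chain response implies alternate response, which implies response, which implies responded existence). The toy sketch below drops any discovered constraint that a stronger constraint on the same activity pair already implies; the discovered set is invented, and the paper's actual implementation is a ProM plug-in.

```python
# Pruning Declare constraints implied by stronger ones on the same activities.
# Stronger template -> directly implied templates (closure computed below).
IMPLIES = {
    "chain_response": {"alternate_response"},
    "alternate_response": {"response"},
    "response": {"responded_existence"},
}

def implied_by(template):
    """All templates transitively implied by `template`."""
    out, todo = set(), [template]
    while todo:
        for weaker in IMPLIES.get(todo.pop(), ()):
            if weaker not in out:
                out.add(weaker)
                todo.append(weaker)
    return out

discovered = {  # (template, activation, target) -- invented discovery output
    ("chain_response", "register", "triage"),
    ("response", "register", "triage"),             # implied -> pruned
    ("responded_existence", "register", "triage"),  # implied -> pruned
    ("response", "triage", "treat"),                # kept
}

pruned = {
    (t, a, b) for (t, a, b) in discovered
    if not any(t in implied_by(s)
               for (s, a2, b2) in discovered if (a2, b2) == (a, b))
}
print(pruned)  # only the two non-redundant constraints remain
```
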
Understanding Process Behaviours in a Large Insurance Company in Australia: A Case Study

Having a reliable understanding of the behaviours, problems, and performance of existing processes is important in enabling a targeted process improvement initiative. Recently, there has been an increase in the application of innovative process mining techniques to facilitate evidence-based understanding of organizations’ business processes. Nevertheless, the application of these techniques in the finance domain in Australia is, at best, scarce. This paper details a 6-month case study on the application of process mining in one of the largest insurance companies in Australia. In particular, the challenges encountered, the lessons learned, and the results obtained are detailed. Through this case study, we not only validated existing ‘lessons learned’ from other similar case studies, but also added new insights that can be beneficial to other practitioners applying process mining in their respective fields.

Suriadi Suriadi, Moe T. Wynn, Chun Ouyang, Arthur H. M. ter Hofstede, Nienke J. van Dijk
Profiling Event Logs to Configure Risk Indicators for Process Delays

Risk identification is one of the most challenging stages in the risk management process. Conventional risk management approaches provide little guidance and companies often rely on the knowledge of experts for risk identification. In this paper we demonstrate how risk indicators can be used to predict process delays via a method for configuring so-called Process Risk Indicators (PRIs). The method learns suitable configurations from past process behaviour recorded in event logs. To validate the approach we have implemented it as a plug-in of the ProM process mining framework and have conducted experiments using various data sets from a major insurance company.

Anastasiia Pika, Wil M. P. van der Aalst, Colin J. Fidge, Arthur H. M. ter Hofstede, Moe T. Wynn
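
A hedged sketch of one possible indicator of the kind the method configures: flag a running case when its elapsed time exceeds a threshold learned from past behaviour (here simply the mean plus two standard deviations of historical durations). The data is invented, and the paper's configuration method and ProM plug-in are considerably more elaborate.

```python
# A simple delay indicator with a threshold learned from past case durations.
import statistics

past_durations_h = [4.0, 5.5, 4.8, 6.1, 5.0, 4.4, 5.2]  # historical durations (hours)
mean = statistics.mean(past_durations_h)
stdev = statistics.stdev(past_durations_h)
threshold = mean + 2 * stdev  # cases beyond this are flagged as likely delayed

running_cases = {"c101": 4.9, "c102": 7.8}  # hours elapsed so far
for case, elapsed in running_cases.items():
    at_risk = elapsed > threshold
    print(f"{case}: elapsed {elapsed}h, threshold {threshold:.1f}h, at risk: {at_risk}")
```
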

Data Warehouses and Business Intelligence

Coopetitive Data Warehouse: A Case Study

In this paper we discuss the experience of developing a real system for integrating data about turnover, price and selling volume for AOP UnoLombardia, the biggest association of fruit and vegetable growers in the Lombardia region (Italy), which includes primary Italian and European brands such as Bonduelle and Dimmidisi. The system represents an adaptation and transformation of traditional repository-oriented data warehouse development to comply with the requirements of a coopetitive environment, where multiple organizations are willing to cooperate on some topics but, at the same time, compete in the market. Readers may find useful insights and lessons learned in the following contributions of the present work: (i) a methodology to design data warehouse applications in a coopetitive environment and (ii) an architecture based on the combination of virtual data integration and traditional ETL that enforces the protection of sensitive data.

Andrea Maurino, Claudio Venturini, Gianluigi Viscusi
A Layered Multidimensional Model of Complex Objects

Multidimensional modeling is nowadays recognized to best reflect the decision makers’ analytical view on data. In this paper, we address some modeling features that we believe existing multidimensional models do not fully cover, such as considering real-life entities that are meant to be analyzed as complex objects, allowing for simple and complex measures, treating facts and dimension members equally, and observing hierarchies within and between complex entities. We propose a layered multidimensional model based on the concept of a complex object, which encapsulates data and structure complexity and eases the creation and manipulation of complex data cubes. We define our model at three layers. The first layer, the class diagram, describes complex objects and captures the hierarchical organization of their attributes. The second layer, the package of classes, describes the multidimensional model as a set of complex objects that are connected by relationships and some of which are organized in hierarchies. The third layer, the package of packages, describes complex cubes which are derived from the multidimensional model. We show the benefits and feasibility of our proposals through their implementation in a real-life case study.

Doulkifli Boukraâ, Omar Boussaïd, Fadila Bentayeb, Djamel-Eddine Zegour
Business Model Ontologies in OLAP Cubes

Business model ontologies capture the complex interdependencies between business objects. The analysis of the knowledge thus formalized eludes traditional OLAP systems, which operate on numeric measures. Many real-world facts, however, do not boil down to a single number but are more accurately represented by business model ontologies. In this paper, we adopt business model ontologies for the representation of non-numeric measures in OLAP cubes. We propose modeling guidelines and adapt traditional OLAP operations for ontology-valued measures.

Christoph Schütz, Bernd Neumayr, Michael Schrefl

Requirements Engineering 2

Outsourcing Location Selection with SODA: A Requirements Based Decision Support Methodology and Tool

This paper addresses the decision-making problem in software development outsourcing scenarios in which a project manager is in charge of deciding which software components will be outsourced and which ones will be developed internally. We propose a methodology and tool support which leverage the classification of a project’s software components by means of a graph-based model of the components’ requirements and their corresponding clustering. In the course of our design-oriented research approach, a prototypical implementation of the methodology has been developed and evaluated, illustrating the practical applicability of the proposed method. We thereby contribute to the location selection problem in distributed software projects and give guidance for in-house or external software production. The theoretical contribution consists of an improved processing methodology for assessing software requirements and increasing the outsourcing success of a software project. Our contribution to practice is an implemented prototype for project leads of distributed teams.

Tommi Kramer, Michael Eschweiler
A Goal Driven Framework for Software Project Data Analytics

The life cycle activities of industrial software systems are often complex and encompass a variety of tasks. Such tasks are supported by integrated development environments (IDEs) that allow project data to be collected and analyzed. To date, most such analytics techniques are based on quantitative models to assess project features such as effort, cost and quality. In this paper, we propose a project data analytics framework where, first, analytics objectives are represented as goal models with conditional contributions; second, goal models are transformed to rules that yield a Markov Logic Network (MLN); and third, goal models are assessed by an MLN probabilistic reasoner. This approach has been applied with promising results to a sizeable collection of software project data obtained from the ISBSG repository, and can yield results even with incomplete or partial data.

George Chatzikonstantinou, Kostas Kontogiannis, Ioanna-Maria Attarian
A Framework to Evaluate Complexity and Completeness of KAOS Goal Models

Goal-Oriented Requirements Engineering (GORE) approaches have been developed to facilitate the requirements engineers’ work by, for example, providing abstraction mechanisms that help elicit and model requirements. One of the well-established GORE approaches is KAOS. Nevertheless, in large-scale systems, building KAOS models may result in incomplete and/or complex goal models, which are difficult to understand and change. This may lead to an increase in the costs of product development and evolution. Thus, for large-scale systems, the effective management of the complexity and completeness of goal models is vital. In this paper, we propose a metrics framework for supporting the quantitative assessment of the complexity and completeness of KAOS goal models. The metrics are formally specified, implemented and incorporated in a KAOS modeling tool. We validate the metrics on a set of real-world case studies and discuss the recurring modeling practices we identified.

Patrícia Espada, Miguel Goulão, João Araújo

Knowledge and Know-How

Is Knowledge Power? The Role of Knowledge in Automated Requirements Elicitation

In large IS development projects a huge number of unstructured text documents become available and need to be analyzed and transformed into structured requirements. This elicitation process is known to be time-consuming and error-prone when performed manually by a requirements engineer. Thus, previous works have proposed to automate the process through alternative algorithms using different forms of knowledge. While the effectiveness of different algorithms has been intensively researched, limited effort has been devoted to investigating how the algorithms’ outcomes are determined by the utilized knowledge. Our work explores how the amount and type of knowledge affect requirements elicitation quality in two consecutive simulations. The study is based on a requirements elicitation system that was developed as part of our previous work. We intend to contribute to the body of knowledge by outlining how the provided amount and type of knowledge determine the outcomes of automatic requirements elicitation.

Hendrik Meth, Alexander Maedche, Maximilian Einoeder
Experience Breeding in Process-Aware Information Systems

Process-Aware Information Systems (PAIS), such as workflow systems, support organizations in optimizing their processes by increasing efficiency and structure. In such systems, the inclusion of humans beyond the typical concept of roles has not yet received much attention. However, a tighter integration of human resources can be beneficial for both employees and employers. Our contribution is the formal integration of experiences into PAIS. This integration a) enables employees to track which experiences they gain while working on process tasks, b) allows employees to express experience development goals, and c) allows employers to improve the allocation of tasks to employees based on the employees’ experiences and goals. We introduce experience breeding, which describes how to measure the experience variances that occur when employees work on certain tasks. We present a simulation design, discuss preliminary results, and discuss the potential improvements to overall task allocation effectiveness compared to standard algorithms.

Sonja Kabicher-Fuchs, Jürgen Mangler, Stefanie Rinderle-Ma
Automated Construction of a Large Semantic Network of Related Terms for Domain-Specific Modeling

In order to support the domain modeling process in model-based software development, we automatically create large networks of semantically related terms from natural language. Using part-of-speech tagging, lexical patterns and co-occurrence analysis, and several semantic improvement algorithms, we construct SemNet, a network of approximately 2.7 million single and multi-word terms and 37 million relations denoting the degree of semantic relatedness. This paper gives a comprehensive description of the construction of SemNet, provides examples of the analysis process and compares it to other knowledge bases. We demonstrate the application of the network within the Eclipse/Ecore modeling tools by adding semantically enhanced class name autocompletion and other semantic support facilities like concept similarity.

Henning Agt, Ralf-Detlef Kutsche
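
To illustrate the co-occurrence analysis step at toy scale, the sketch below counts sentence-level co-occurrences and normalises them into a relatedness score (a Dice coefficient). The corpus is invented; SemNet itself is built from large text corpora with part-of-speech tagging, lexical patterns and several semantic improvement algorithms.

```python
# Toy co-occurrence-based semantic relatedness between terms.
from collections import Counter
from itertools import combinations

sentences = [  # invented, pre-tokenized corpus
    ["customer", "order", "invoice"],
    ["customer", "order", "payment"],
    ["invoice", "payment"],
    ["customer", "account"],
]

pair_counts = Counter()
term_counts = Counter()
for terms in sentences:
    term_counts.update(set(terms))
    pair_counts.update(frozenset(p) for p in combinations(set(terms), 2))

def relatedness(a, b):
    # Dice coefficient over sentence co-occurrence counts.
    co = pair_counts[frozenset((a, b))]
    return 2 * co / (term_counts[a] + term_counts[b])

print(relatedness("customer", "order"))   # 0.8 -- strongly related
print(relatedness("invoice", "account"))  # 0.0 -- never co-occur
```
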

Information Systems Quality

Enforcement of Conceptual Schema Quality Issues in Current Integrated Development Environments

We believe that one of the most effective ways of increasing the quality of conceptual schemas in practice is to use an Integrated Development Environment (IDE) that enforces all relevant quality criteria. With this view, in this paper we analyze the support provided by current IDEs for the enforcement of quality criteria and compare it with the support that could be provided given the current state of the art. We show that there is considerable room for improvement. We introduce the idea of a unified catalog that would include all known quality criteria and present an initial version of this catalog. We then evaluate the effectiveness of the additional support that current IDEs could provide if they enforced all the quality criteria defined in the catalog. We focus on conceptual schemas written in UML/OCL, although our approach could be applied to other languages.

David Aguilera, Cristina Gómez, Antoni Olivé
Integrity in Very Large Information Systems
Dealing with Information Risk Black Swans

Multi-national enterprises, like financial services companies, operate large and critical information systems around the globe on a 24/7 basis. In an information-based business, even a single inadequately designed, implemented, tested and operated business application can put the existence of the enterprise at risk.

To adequately secure the integrity of business-critical information, and hence ensure that such information is meaningful, accurate and timely, we present our risk assessment and controls framework: First, we introduce our criticality rating scheme, which is based on the recoverability from integrity failures. For dealing with dependencies among applications, we present our service-based approach, given a Service-Oriented Architecture (SOA). Second, we provide an overview of our design-related controls, including a data analytics approach to continuously audit the most critical information assets. Finally, we present our learnings from a first implementation of the presented framework.

Beat Liver, Helmut Kaufmann
Testing a Data-Intensive System with Generated Data Interactions
The Norwegian Customs and Excise Case Study

Testing data-intensive systems is paramount to increasing our reliance on e-governance services. An incorrectly computed tax can have catastrophic consequences in terms of public image. Testers at Norwegian Customs and Excise report that faults arise from interactions between database features such as field values. Taxation rules, for example, are triggered by interactions among 10,000 items, 88 country groups, and 934 tax codes, giving about 12.9 trillion 3-wise interactions. Finding the interactions that uncover specific faults is like finding a needle in a haystack. Can we surgically generate a test database for the interactions that interest testers? We address this question with a methodology and tool, Faktum, which automatically populates a test database covering all T-wise interactions for selected features. Faktum generates a constraint model of the interactions in Alloy and solves it using a divide-and-combine strategy. Our experiments demonstrate the scalability of our methodology, and we project its industrial applications.

Sagar Sen, Arnaud Gotlieb
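
To give a feel for T-wise interaction coverage at toy scale, the sketch below enumerates all pairwise (2-wise) value combinations over a few invented feature domains and greedily builds database rows until every pair is covered. Faktum solves the real, constrained problem via Alloy; this is only a back-of-the-envelope illustration.

```python
# Greedy pairwise (2-wise) interaction coverage over tiny feature domains.
from itertools import combinations, product

features = {  # invented domains; the real case has thousands of values
    "item": ["book", "phone", "wine"],
    "country_group": ["EU", "EFTA"],
    "tax_code": ["T1", "T2"],
}
names = list(features)

# All (feature, value) pairs that must appear together in some row.
uncovered = {
    ((f1, v1), (f2, v2))
    for f1, f2 in combinations(names, 2)
    for v1 in features[f1] for v2 in features[f2]
}

rows = []
while uncovered:
    # Pick the full row that covers the most still-uncovered pairs.
    best = max(
        (dict(zip(names, vals)) for vals in product(*features.values())),
        key=lambda row: sum(
            ((f1, row[f1]), (f2, row[f2])) in uncovered
            for f1, f2 in combinations(names, 2)
        ),
    )
    rows.append(best)
    uncovered -= {((f1, best[f1]), (f2, best[f2]))
                  for f1, f2 in combinations(names, 2)}

print(f"{len(rows)} rows cover all pairwise interactions")
```
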

Human Factors

Mapping Study about Usability Requirements Elicitation

The HCI community has developed guidelines and recommendations for improving system usability that are usually applied in the last stages of the software development process. On the other hand, the SE community has developed sound methods to elicit functional requirements in the early stages, but usability has been relegated to the last stages together with other non-functional requirements. As a result, neither community offers methods for eliciting usability requirements during software development. An example of this problem arises in the Model-Driven Development paradigm, where the methods and tools used to develop software do not support usability requirements elicitation. In order to study the existing publications that deal with usability requirements from the first steps of the software development process, this work presents a mapping study. Our aim is to compare usability requirements methods and to identify the strong points of each one.

Yeshica Isela Ormeño, Jose Ignacio Panach
Programming Incentives in Information Systems

Information systems are becoming ever more reliant on different forms of social computing, employing individuals, crowds or assembled teams of professionals. With humans as first-class elements, the success of such systems depends heavily on how well we can motivate people to act in a planned fashion. Incentives are an important part of human resource management, having both selective and motivating effects. However, support for defining and executing incentives in today’s information systems is underdeveloped, often being limited to simple, per-task cash rewards. Furthermore, no systematic approach exists to program incentive functionalities for this type of platform.

In this paper we present fundamental elements of a framework for programmable incentive management in information systems. These elements form the basis necessary to support modeling, programming, and execution of various incentive mechanisms. They can be integrated with different underlying systems, promoting portability and reuse of proven incentive strategies. We carry out a functional design evaluation by illustrating modeling and composing capabilities of a prototype implementation on realistic incentive scenarios.

Ognjen Scekic, Hong-Linh Truong, Schahram Dustdar
Backmatter
Metadata
Title
Advanced Information Systems Engineering
Edited by
Camille Salinesi
Moira C. Norrie
Óscar Pastor
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-38709-8
Print ISBN
978-3-642-38708-1
DOI
https://doi.org/10.1007/978-3-642-38709-8
