With the list below I try to keep track of my publications. I'm not sure why, but experience has shown that this is a process that is actually not as perfect and error-free as I expected... So, if you want an unbiased, differently biased, maybe better picture of my publications, you can have a look at DBLP or Google Scholar.
Blockchains are distributed ledgers that enable the disintermediation of collaborative processes and, at the same time, foster trust among partners. Modern blockchains support smart contracts, i.e., software deployed on the blockchain, and guarantee their repeatable, deterministic execution. Alas, blockchains and smart contracts lack standardization. Therefore, smart contracts come with heterogeneous properties, APIs and data formats. This hinders the integration of smart contracts running on different blockchains, e.g., into enterprise business processes. This paper introduces the Smart Contract Invocation Protocol (SCIP), which unifies interaction with smart contracts across different blockchains. The protocol supports invoking smart contract functions, monitoring function executions, emitted events, and transaction finality, as well as querying a blockchain. The protocol is accompanied by a prototypical implementation of a SCIP endpoint in the form of a gateway.
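For illustration, a SCIP invocation request could look roughly as follows. This is a minimal sketch assuming a JSON-RPC-style binding; the method and parameter names are illustrative and may deviate from the normative protocol specification.

```python
import json

# Sketch of a SCIP-style invocation request (field names are illustrative):
# invoke a smart contract function and ask the gateway for a callback once
# the transaction reaches the requested degree of finality.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "Invoke",
    "params": {
        "functionIdentifier": "transfer",
        "inputs": [
            {"name": "to", "type": "address",
             "value": "0x0000000000000000000000000000000000000001"},
            {"name": "amount", "type": "uint256", "value": "1000"},
        ],
        "callbackUrl": "https://client.example.org/scip-callback",
        "degreeOfConfidence": 0.99,  # how final the transaction must be
    },
}

print(json.dumps(request, indent=2))  # payload sent to the SCIP gateway
```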
Bots, i.e., algorithmically driven entities that behave like humans in online communications, are increasingly infiltrating social conversations on the Web. If not properly prevented, this presence of bots may cause harm to the humans they interact with. This article aims to understand which types of abuse may lead to harm and whether these can be considered intentional or not. We manually review a dataset of 60 Twitter bot code repositories on GitHub, derive a set of potentially abusive actions, characterize them using a taxonomy of abstract code patterns, and assess the potential abusiveness of the patterns. The article then describes the design and implementation of a code pattern recognizer and uses it to automatically analyze a dataset of 786 Python bot code repositories. The study not only reveals the existence of 28 communication-specific code patterns – which could be used to assess the harmfulness of bot code – but also their consistent presence throughout all studied repositories.
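A code pattern recognizer of this kind can be approximated with Python's ast module. The sketch below flags a hypothetical "bulk-action" pattern (a social API call inside a loop); the pattern and the API names are illustrative choices of mine, not the patterns of the article.

```python
import ast

SUSPICIOUS_CALLS = {"follow", "retweet", "create_friendship"}  # illustrative names

def find_bulk_action_patterns(source: str):
    """Return (line, call) pairs where a suspicious API call sits inside a loop."""
    tree = ast.parse(source)
    hits = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for node in ast.walk(loop):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Attribute)
                        and node.func.attr in SUSPICIOUS_CALLS):
                    hits.append((node.lineno, node.func.attr))
    return hits

bot_code = """
for user in api.search_users("crypto"):
    api.follow(user.id)
"""
print(find_bulk_action_patterns(bot_code))  # -> [(3, 'follow')]
```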
Nowadays social networks are becoming an essential ingredient of our life, the fastest way to share ideas and to influence people. Interaction within social networks tends to take place within communities, sets of social accounts which share friendships, ideas, interests and passions; detecting digital communities is of increasing relevance, from a social and economic point of view. In this paper, we analyze the problem of community detection from a content analysis perspective: we argue that the content produced in social interaction is a very distinctive feature of a community, hence it can be effectively used for community detection. We analyze the problem from a textual perspective using only syntactic and semantic features, including high-level latent features that we denote as topics. We show that, by inspecting the content of tweets, we can achieve very efficient classifiers and predictors of account membership within a given community. We describe the features that best constitute a vocabulary, then we provide their comparative evaluation and select the best features for the task, and finally we illustrate an application of our approach to some concrete community detection scenarios, such as Italian politics and targeted advertising.
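A minimal sketch of such a content-based membership classifier, using off-the-shelf tf-idf features and logistic regression on toy data (the actual feature sets and models of the paper differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: tweets labeled with the community of their author.
tweets = [
    "great rally today #elections vote for our candidate",
    "the new budget law is a disaster for workers",
    "match day! what a goal in the derby",
    "our team needs a new striker, terrible season",
]
communities = ["politics", "politics", "football", "football"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, communities)

print(clf.predict(["who are you voting for in the elections?"]))  # expected: ['politics']
```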
This paper explores whether older adults can remotely participate in museum visits with the help of virtual environments. We design and build a system that supports shared museum co-visits between onsite visitors inside the museum and older adults from a care home. We make the experience more engaging by providing a meaningful story connecting the objects in the museum. The aim of the study is to understand whether older adults are able to use such technology and to study the mediated sense of spatial presence, the experienced social closeness, and the level of participants' engagement and enjoyment in the visit. We discuss the relationship between these aspects and factors leading to a better remote experience for older adults. The results show that older adults enjoy and engage in remote visits, and that there is a positive correlation between enjoyment, engagement and social closeness.
This paper lays the foundation for a new delivery paradigm for web-accessible content and functionality, i.e., conversational interaction. Instead of asking users to read text, click through links and type on the keyboard, the vision is to enable users to "speak to a website" and to obtain natural language, spoken feedback. The paper describes how state-of-the-art chatbot technology can enable a dialog between the user and the website, proposes a reference architecture for the automated inference of site-specific chatbots able to mediate between the user and the website, and discusses open challenges and research questions. The envisioned, bidirectional dialog paradigm advances current screen reader technology and aims to benefit both regular users in eyes-free usage scenarios and visually impaired users in everyday scenarios.
Today’s blockchains are mostly isolated, proprietary technologies, yet there are application scenarios that ask for interoperability, e.g., among blockchains themselves or with external applications. This paper proposes the Smart Contract Locator (SCL) for the unambiguous identification of smart contracts over the Internet and across blockchains, and the Smart Contract Description Language (SCDL) for the abstract description of the external interface of smart contracts. The paper derives a unified metamodel for blockchain smart contract description and equips it with a concrete, JSON-based description language for smart contract search and discovery. The goal of the proposal is to foster smart contract reuse both inside blockchains and through the integration of smart contracts inside enterprise applications. The idea is inspired by the Service-Oriented Architecture (SOA) and aims to provide a high-level, cross-blockchain interoperability layer.
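To give the flavor of such a descriptor, the sketch below shows a plausible SCDL-style description as a Python dict; the field names and the SCL format are illustrative, not the normative SCDL vocabulary.

```python
import json

# Illustrative, SCDL-style smart contract descriptor (field names are
# examples only): the SCL locates the contract, the remaining fields
# describe its external interface for search and discovery.
descriptor = {
    "scl": "blockchain://ethereum/mainnet/0x0000000000000000000000000000000000000001",
    "name": "TokenSale",
    "functions": [{
        "name": "buyTokens",
        "inputs": [{"name": "beneficiary", "type": "address"}],
        "outputs": [{"name": "amount", "type": "uint256"}],
        "hasSideEffects": True,
    }],
    "events": [{
        "name": "TokensPurchased",
        "outputs": [{"name": "buyer", "type": "address"},
                    {"name": "amount", "type": "uint256"}],
    }],
}

print(json.dumps(descriptor, indent=2))
```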
This paper discusses the problem of data quality in blockchain applications at three levels of abstraction, i.e., conceptual, logical and physical. Conceptually, it makes explicit the information needs of typical data quality metrics for their online assessment. Logically, it analyzes how the adoption of blockchain technology affects the availability of the data needed for quality assessment. Physically, it identifies a set of implementation options that take into account the information needs of the metrics and the restrictions imposed by the technology; special attention at this level is paid to Ethereum and Solidity. Two case studies put the identified patterns and abstractions into context and showcase their importance in real-world distributed applications and processes.
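As a physical-level illustration, the currency/timeliness of on-chain data can be estimated from block timestamps. A minimal sketch with web3.py follows; the endpoint URL is a placeholder and the simple age-vs-volatility metric is my own example, not the paper's definition.

```python
from web3 import Web3

# Placeholder endpoint; any Ethereum JSON-RPC provider would work here.
w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))

def currency_of(block_number: int, volatility_window_s: float) -> float:
    """Toy currency metric: age of the data relative to how quickly it
    becomes outdated (1.0 = perfectly fresh, 0.0 = stale)."""
    block = w3.eth.get_block(block_number)
    age_s = w3.eth.get_block("latest")["timestamp"] - block["timestamp"]
    return max(0.0, 1.0 - age_s / volatility_window_s)
```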
Bots, i.e., algorithmically driven entities that behave like humans in online communications, are increasingly infiltrating social conversations on the Web. If not properly prevented, this presence of bots may cause harm to the humans they interact with. This paper aims to understand which types of abuse may lead to harm and whether these can be considered intentional or not. We manually review a dataset of 60 Twitter bot code repositories on GitHub, derive a set of potentially abusive actions, characterize them using a taxonomy of abstract code patterns, and assess the potential abusiveness of the patterns. The study not only reveals the existence of 31 communication-specific code patterns – which could be used to assess the harmfulness of bot code – but also their presence throughout all studied repositories.
This paper presents a framework for the design of chatbots for data exploration. Compared with conversational virtual assistants (such as Amazon Alexa or Apple Siri), this class of chatbots exploits structured input to retrieve data from known data sources. The approach is based on a conceptual representation of the available data sources, and on a set of modeling abstractions that allow designers to characterize the role that key data elements play in the user requests to be handled. Starting from the resulting specifications, the framework then generates a conversation for exploring the content exposed by the considered data sources.
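A toy sketch of the modeling idea: annotate a data source's key elements with the conversational roles they play, and derive intents and utterance templates from the annotations. The data structures and role names below are invented for illustration, not the framework's actual abstractions.

```python
# Toy schema annotation: each attribute of a data source is tagged with the
# conversational role it plays in user requests (names are illustrative).
schema = {
    "source": "restaurants",
    "attributes": [
        {"name": "name",    "role": "result_label"},
        {"name": "city",    "role": "filter"},
        {"name": "cuisine", "role": "filter"},
    ],
}

def generate_intent(schema: dict) -> dict:
    """Derive a data-exploration intent with one slot per filter attribute."""
    slots = [a["name"] for a in schema["attributes"] if a["role"] == "filter"]
    utterance = f"find {schema['source']} " + " ".join(
        f"with {{{s}}}" for s in slots)
    return {"intent": f"explore_{schema['source']}",
            "slots": slots, "utterance_template": utterance}

print(generate_intent(schema))
# {'intent': 'explore_restaurants', 'slots': ['city', 'cuisine'],
#  'utterance_template': 'find restaurants with {city} with {cuisine}'}
```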
Bots are algorithmically driven entities that act like humans in conversations via Twitter, on Facebook, in chats or Q&A sites. This article studies how they may affect online conversations, provides a taxonomy of harms that may be caused, and discusses how to prevent harm by studying when abuses occur.
With the increase of digital interaction, social networks are becoming an essential ingredient of our life, progressively becoming the dominant media, e.g., in influencing political choices. Interaction within social networks tends to take place within communities, sets of social accounts which share friendships, ideas, interests and passions; detecting digital communities is of increasing relevance, from a social and economic point of view. In this paper, we argue that the vocabulary of terms used in social interaction is a very distinctive feature of a community, hence it can be effectively used for community detection. We show that, by inspecting the vocabulary used in tweets, we can achieve very efficient classifiers and predictors of account membership within a given community. We describe the syntactic and semantic features that best constitute a vocabulary, then we provide their comparative evaluation and select the best features for the task, and finally we illustrate several applications of our approach to concrete community detection scenarios.
This paper focuses on two of the key challenges a developer encounters when reusing smart contracts: finding actionable information about existing smart contracts (descriptors) and writing the necessary integration logic to invoke selected contracts and implement missing functions. As for the former issue, the paper proposes a smart contract description format that allows the developer to search for publicly available contracts, understand which features a contract exposes and how to invoke them, according to a service-oriented approach. For the latter, the paper implements a simple, model-driven development environment consisting of a visual programming editor that provides a set of modeling constructs encoding specific, reuse-oriented code patterns. The approach is implemented and demonstrated in the context of the blockchain platform Ethereum and its programming language Solidity. The results obtained show that the proposed approach can be beneficial in the development of composite smart contracts and generic blockchain applications.
Smart contracts turn blockchains into distributed computing platforms. This article studies whether smart contracts as implemented by state-of-the-art blockchain technology may serve as component technology for a computing paradigm like service-oriented computing (SOC) in the blockchain, in order to foster reuse and increase cost-effectiveness.
Facility location allocation is key to the success of urban design, mainly in designing transport systems and finding locations for warehouses, fire stations and so on. The problem of determining the locations of k facilities so as to provide service to n customers, also known as the p-median problem, is a well-known NP-hard problem. Several heuristics have been proposed to solve location allocation problems, each of which, besides its advantages, has limitations in terms of accuracy, time and flexibility. In this paper, we propose to solve p-median problems using crowdsourcing and gamification techniques. We present a crowdsourced game, called SolveIt, which employs the wisdom and intelligence of the crowd to solve location allocation problems. We present a data model for representing p-median problems, design and implement the game, and test it using gold standards generated with a genetic algorithm tool. The evaluations show the accuracy of the results obtained from SolveIt players. We also discuss the limitations and possible applications of the proposed approach.
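For reference, the objective of the p-median problem is standard: given a set F of candidate facility locations and distances d_ij between customer i and location j, choose k locations minimizing the total distance of the n customers to their nearest open facility:

```latex
\min_{S \subseteq F,\; |S| = k} \; \sum_{i=1}^{n} \min_{j \in S} d_{ij}
```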
This book constitutes revised papers from the eight International Workshops held at the 16th International Conference on Business Process Management, BPM 2018, in Sydney, Australia, in September 2018: BPI 2018, BPMS2 2018, PODS4H 2018, AI4BPM 2018, CCBPM 2018, BP-Meet-IoT / PQ 2018, DeHMiMoP 2018, REBM / EdForum 2018. The 45 full papers presented in this volume were carefully reviewed and selected from 90 submissions.
Idea Management (IM) has increasingly been adopted in the civic domain as a tool to engage the citizenry in processes oriented toward innovating plans, policies, and services. While Idea Management Systems (IMSs), i.e., the software systems that instrument IM, definitely help manage this practice, they require citizens to commit to a separate virtual space for which they need to register, which they must learn how to operate, and to which they must return frequently. This paper presents an approach that integrates IMSs with today's most popular digital spaces of participation, the social networking sites, enabling citizens to engage in IM processes using ordinary tools and without having to step outside their daily habits. Our goal is to reach out and pull into IM the large and demographically diverse sectors of society that are already present and participating in social networking sites. Through a real case study of IM in the public sector that mixed both qualitative and quantitative data collection methods, our proposal proves to be a promising approach to reducing the barriers to participation. We conclude with an analysis of the strengths and limitations of our proposal.
Social networks are huge continuous sources of information that can be used to analyze people’s behavior and thoughts. Our goal is to extract such information and predict the political inclinations of users. In particular, this paper investigates the importance of the syntactic features of texts written by users in this process. Our hypothesis is that people belonging to the same political party write in similar ways, thus they can be classified properly on the basis of the words that they use. We analyze tweets because Twitter is commonly used in Italy for discussing politics; moreover, it provides an official API that can be easily exploited for data extraction. Many classifiers were applied to different kinds of features and NLP vectorization methods in order to obtain the best method capable of confirming our hypothesis. To evaluate their accuracy, a set of current Italian deputies with consistent activity on Twitter was selected as ground truth, and we then predicted their political party. Using the results of our analysis, we also obtained interesting insights into current Italian politics.
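The kind of classifier-and-vectorizer comparison described can be set up in a few lines with scikit-learn. The sketch below uses invented toy tweets and party labels; the paper's actual features, classifiers and data differ.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-in for the deputies' tweets and their party labels.
tweets = [
    "we must cut taxes and support small businesses",
    "lower taxes mean growth and new jobs",
    "defend our borders and our national identity",
    "less bureaucracy, more freedom for companies",
    "families deserve tax relief now",
    "public healthcare must be funded properly",
    "we stand with workers and their unions",
    "raise the minimum wage for a fair society",
    "invest in public schools and teachers",
    "social justice and equal rights for everyone",
]
parties = ["right"] * 5 + ["left"] * 5

# Compare vectorization methods x classifiers via cross-validation.
for vec in (CountVectorizer(), TfidfVectorizer()):
    for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
        pipeline = make_pipeline(vec, clf)
        score = cross_val_score(pipeline, tweets, parties, cv=5).mean()
        print(type(vec).__name__, type(clf).__name__, round(score, 3))
```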
In this work-in-progress paper we discuss the challenges in identifying effective and scalable crowd-based strategies for designing content, conversation logic, and meaningful metrics for a reminiscence chatbot targeted at older adults. We formalize the problem and outline the main research questions that drive the research agenda in chatbot design for reminiscence and for relational agents for older adults in general.
Current scenarios for app development are characterized by resources so rich that they often overwhelm the final users, especially in mobile app usage situations. It is therefore important to define design methods that enable dynamic filtering of the pertinent resources and appropriate tailoring of the retrieved content. This paper presents a design framework based on the specification of the possible contexts deemed relevant to a given application domain and on their mapping onto an integrated schema of the resources underlying the app. The context and the integrated schema enable the instantiation at runtime of templates of app pages as a function of the context characterizing the user’s current situation of use.
Chatbots, i.e., conversational software agents able to interact with users via instant messaging channels like Messenger, WhatsApp or SMS, have the power to substantively simplify human-computer interaction thanks to their natural language paradigm. While this certainly helps to lower barriers, state-of-the-art chatbots prevalently provide access to generic, non-personalized features with relatively little usefulness. This may hinder adoption. To provide users with real value, we envision a kind of chatbot that is personal and helpful by providing services that are chosen and configured by the users themselves, for themselves. As the development of a one-size-fits-all, yet flexible and customizable bot is hard, if not impossible, we discuss requirements and design options that directly put the user into control of their own personal bot.
Knowledge in the world continuously evolves, and ontologies are largely incomplete, especially regarding data belonging to the so-called long tail. We propose a method for discovering emerging knowledge by extracting it from social content. Once initialized by domain experts, the method is capable of finding emerging entities by means of a mixed syntactic-semantic method. The method uses seeds, i.e., prototypes of emerging entities provided by experts, for generating candidates; then, it associates candidates with feature vectors built using the terms occurring in their social content and ranks the candidates by their distance from the centroid of the seeds, returning the top candidates. Our method can run continuously or with periodic iterations, using the results as new seeds. In this paper we address the following research questions: (1) How does reconstructed domain knowledge evolve if the candidates of one extraction are recursively used as seeds? (2) How does the reconstructed domain knowledge spread geographically? (3) Can the method be used to inspect the past, present, and future of knowledge? (4) Can the method be used to find emerging knowledge?
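The seed-centroid ranking step can be sketched with tf-idf vectors and cosine distances; the data below are toy examples, while the paper builds the vectors from the entities' actual social content.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Social content associated with seed entities and with candidates (toy data).
seeds = ["organic vineyard tuscany wine tasting",
         "family winery chianti harvest"]
candidates = {"CantinaRossi":  "new chianti wine release harvest tasting",
              "TechGadgetHub": "smartphone review unboxing video",
              "VignetoBlu":    "small vineyard organic wine tuscany"}

vec = TfidfVectorizer()
X = vec.fit_transform(seeds + list(candidates.values()))
centroid = np.asarray(X[:len(seeds)].mean(axis=0))  # centroid of the seeds

# Rank candidates by distance from the seed centroid; closest first.
dists = cosine_distances(centroid, X[len(seeds):])[0]
for name, d in sorted(zip(candidates, dists), key=lambda p: p[1]):
    print(f"{name}: {d:.3f}")
```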
Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and similar – all tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, this survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on hot future research directions.
Blockchain technology offers a sizable promise to rethink the way interorganizational business processes are managed because of its potential to realize execution without a central party serving as a single point of trust (and failure). To stimulate research on this promise and the limits thereof, in this article we outline the challenges and opportunities of blockchain for Business Process Management (BPM). We first reflect on how blockchains could be used in the context of the established BPM lifecycle and, second, on how they might become relevant beyond it. We conclude our discourse with a summary of seven research directions for investigating the application of blockchain technology in the context of BPM.
The authors study the practice of promoting idea campaigns in social networks via the well-known Share/Tweet button. They analyze data about 53 civic participation initiatives collected from IdeaScale, one of the leading online idea management platforms today, and unveil a considerable misconception about the effectiveness of the practice. The article highlights open challenges and suggests a set of alternative techniques to leverage the ideation capacity of social networks.
Knowledge bases like DBpedia, Yago or Google’s Knowledge Graph contain huge amounts of ontological knowledge harvested from (semi-)structured, curated data sources, such as relational databases or XML and HTML documents. Yet, the Web is full of knowledge that is not curated and/or structured and, hence, not easily indexed, for example social data. Most work so far in this context has been dedicated to the extraction of entities, i.e., people, things or concepts. This poster describes our work toward the extraction of relationships among entities. The objective is reconstructing a typed graph of entities and relationships to represent the knowledge contained in social data, without the need for a priori domain knowledge. The experiments with real datasets show promising performance across a variety of domains.
Social media response to catastrophic events, such as natural disasters or terrorist attacks, has received a lot of attention. However, social media are also extremely important in the context of planned events, such as fairs, exhibits, festivals, as they play an essential role in communicating them to fans, interest groups, and the general population. These kinds of events are geo-localized within a city or territory and are scheduled within a public calendar. We consider a specific scenario, the Milano Fashion Week (MFW), which is an important event in our city. We focus our attention on the spreading of social content in time, measuring the delay of the event propagation. We build different clusters of stakeholders (fashion brands), we characterize several features of time propagation and we correlate them with the popularity of involved actors. We show that the clusters by time and popularity are loosely correlated, and therefore the time response cannot be easily inferred. This motivates the development of a predictor through supervised learning in order to anticipate the time cluster of a new brand.
Social media response to catastrophic events, such as natural disasters or terrorist attacks, has received a lot of attention. However, social media are also extremely important in the context of planned events, such as fairs, exhibits, festivals, as they play an essential role in communicating them to fans, interest groups, and the general population. These kinds of events are geo-localized within a city or territory and are scheduled within a public calendar. We consider a specific scenario, the Milano Fashion Week (MFW), which is an important event in our city. We focus our attention on the coverage of social content in space, measuring the propagation of the event in the territory. We build different clusters of fashion brands, we characterize several features of propagation in space and we correlate them with the popularity of involved actors. We show that the clusters along the space and popularity dimensions are loosely correlated, and that domain experts are typically able to understand and identify only popularity aspects, while they are completely unaware of the spatial dynamics of social media response to the events.
This book constitutes revised selected papers from the Second International Rapid Mashup Challenge, RMC 2016, held in Lugano, Switzerland in June 2016. The 6 papers presented in this volume were carefully reviewed and selected from 8 submissions. They are post-challenge extensions of the selected short participation proposals. The 2016 edition of the Challenge is the second installment of a series of challenges that aim to engage researchers and practitioners in a competition for the best mashup approach.
Recommender systems are omnipresent today, especially on the Web, and the quality of their recommendations is crucial for user satisfaction. Unlike most works on the topic, in this article we do not focus on the algorithmic side of the problem (i.e., searching for the algorithm that better learns from the collected user feedback) and instead study the importance of the data in input to the algorithms, identifying the information that should be collected from users to build better recommendations. We study restaurant recommendations for locals and show that fine-tuned data and state-of-the-art algorithms can outperform the leading recommendation service, TripAdvisor. The findings make a case for better thought-out, purpose-tailored data collection techniques.
Idea Management (IM) has increasingly been adopted by organizations and companies to innovate their products and services. While the software systems that instrument IM definitely help in managing this practice, we have discovered that IM organizers have problems establishing, inside these platforms, communities able to support IM initiatives: either they cannot attract enough participants or the participating people are simply not the right ones. Acknowledging that most organizations today have a presence on Facebook and are striving to grow active communities inside this social network, we propose to harness the creativity of their already established Facebook communities instead of starting innovation communities inside IM platforms. Facebook can benefit IM with large and diverse pools of active communities who may be more interested in helping the persons or organizations they are tied to. This paper introduces a method that exploits Facebook’s technical affordances to facilitate the execution of IM inside Facebook communities. Two independent studies demonstrate the effectiveness of the proposal in capturing ideas that can lead to innovations. We conclude by discussing the strengths and limitations of Facebook’s technical capabilities to instrument IM.
This work-in-progress paper describes a vision, i.e., that of fast and reliable software user experience studies conducted with help from the crowd. Commonly, user studies are controlled in-lab activities that require the instruction, monitoring, interviewing and compensation of a number of participants that are typically hard to recruit. The goal of this work is to study which user study methods can instead be crowdsourced to generic audiences to enable the conduct of user studies without the need for expensive lab experiments. The challenge is understanding how to conduct crowdsourced studies without giving up too many of the guarantees in-lab settings are able to provide.
Effective matching of activities is the first step toward successful process model matching and search. The problem is nontrivial and has led to a variety of computational similarity metrics and matching approaches, all of which, however, still show low performance in terms of precision and recall. In this paper, instead, we study how to leverage human intelligence to identify matches among activities and show that the problem is not as straightforward as most computational approaches assume. We access human intelligence (i) by crowdsourcing the activity matching problem to generic workers and (ii) by eliciting ground truth matches from experts. The precision and recall we achieve and the qualitative analysis of the results testify to the huge potential of a human-based activity matching that contemplates disagreement and interpretation.
Pattern mining, that is, the automated discovery of patterns from data, is a mathematically complex and computationally demanding problem that is generally not manageable by humans. In this article, we focus on small datasets and study whether it is possible to mine patterns with the help of the crowd by means of a set of controlled experiments on a common crowdsourcing platform. We specifically concentrate on mining model patterns from a dataset of real mashup models taken from Yahoo! Pipes and cover the entire pattern mining process, including pattern identification and quality assessment. The results of our experiments show that a sensible design of crowdsourcing tasks indeed may enable the crowd to identify patterns from small datasets (40 models). The results, however, also show that designing tasks for assessing the quality of patterns (to decide which patterns to retain for further processing and use) is much harder: our experiments fail to elicit assessments from the crowd that are similar to those by an expert. The problem is relevant in general to model-driven development (e.g., UML, business processes, scientific workflows), in that reusable model patterns encode valuable modeling and domain knowledge, such as best practices, organizational conventions, or technical choices, that modelers can benefit from when designing their own models.
Quickly and dominantly, REST APIs have spread over the Web and percolated into modern software development practice, especially in the Mobile Internet where they conveniently enable offloading data and computations onto cloud services. We analyze more than 78GB of HTTP traffic collected by Italy’s biggest Mobile Internet provider over one full day and study how big the trend is in practice, how it changed the traffic that is generated by applications, and how REST APIs are implemented in practice. The analysis provides insight into the compliance of state-of-the-art APIs with theoretical Web engineering principles and guidelines, knowledge that affects how applications should be developed to be scalable and robust. The perspective is that of the Mobile Internet.
The ICWE 2016 Rapid Mashup Challenge is the second installment of a series of challenges that aim to engage researchers and practitioners in showcasing and discussing their work on assisting mashup development. This introduction provides the reader with the general context of the Challenge, its objectives and motivation, and the requirements contributions were asked to satisfy so as to be eligible for participation. A summary of the contributions that were selected for presentation in the 2016 edition anticipates the content of the remainder of this volume.
In this final chapter, we report on the outcome of the ICWE 2016 Rapid Mashup Challenge (RMC), describe the voting system used, and draw some conclusions regarding the presented works.
For centuries, science (in German “Wissenschaft”) has aimed to create (“schaffen”) new knowledge (“Wissen”) from the observation of physical phenomena, their modeling and empirical validation. With this vision paper, we propose to do so by observing not the physical, but the virtual world, namely the Web with its ever growing stream of data materialized in the form of social network chattering, content produced on demand by crowds of people, messages exchanged among interlinked devices in the Internet of Things, and similar. The knowledge we may find there can be dispersed, informal, contradicting and ephemeral today, while already tomorrow it may be commonly accepted. The challenge is capturing knowledge that is new, has not been formalized yet (e.g., in existing knowledge bases), and is buried inside a big, moving target (the stream of online data). The purpose is to provide data-driven innovation scenarios with the necessary food (up-to-date knowledge) and to do so timely.
This article makes a case for crowdsourcing approaches that are able to manage crowdsourcing processes, that is, crowdsourcing scenarios that go beyond the mere outsourcing of multiple instances of a micro-task and instead require the coordination of multiple different crowd and machine tasks. It introduces the necessary background and terminology, identifies a set of analysis dimensions, and surveys state-of-the-art tools, highlighting strong and weak aspects and promising future research and development directions.
Task execution timeliness, i.e., the completion of a task within a given time frame, is a known open issue in crowdsourcing. While running tasks on crowdsourcing platforms, a requester experiences long tails in execution caused by abandoned assignments (those left unfinished by workers), which become available to other workers only after some expiration time (e.g., 30 minutes in CrowdFlower). These abandoned assignments result in significant delays and poor predictability of the overall task execution time. In this paper, we propose an approach and an implementation called ReLauncher to identify such abandoned assignments and relaunch them for other workers. We evaluate our implementation with an experiment on CrowdFlower that provides substantive evidence for a significant execution speed improvement at an average extra cost of about 10%.
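The core relaunch logic can be sketched as a simple monitor loop. The platform client and all its methods below are hypothetical stand-ins; the real CrowdFlower API used in the paper differs.

```python
import time

EXPIRATION_S = 30 * 60  # e.g., 30 minutes, as on CrowdFlower

def relaunch_abandoned(platform, task_id, poll_s=60):
    """Monitor a task and relaunch assignments that appear abandoned.

    `platform` is a hypothetical client exposing has_open_units(),
    open_assignments(), cancel() and republish(); real APIs differ.
    """
    while platform.has_open_units(task_id):
        now = time.time()
        for a in platform.open_assignments(task_id):
            # Started long ago, never submitted: likely abandoned.
            if now - a.started_at > EXPIRATION_S and not a.submitted:
                platform.cancel(a.id)          # free the unit immediately...
                platform.republish(a.unit_id)  # ...and offer it to other workers
        time.sleep(poll_s)
```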
Web services are a consolidated reality of the modern Web with tremendous, increasing impact on everyday computing tasks. They turned the Web into the largest, most accepted, and most vivid distributed computing platform ever. Yet, the use and integration of Web services into composite services or applications, which is a delicate and conceptually non-trivial task, is still not unleashing its full power. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools is required. This framework is necessary to enable the effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.
This book constitutes the refereed proceedings of the First International Rapid Mashup Challenge, RMC 2015, held on June 23, 2015, in Rotterdam, The Netherlands, in conjunction with the 15th International Conference on Web Engineering, ICWE 2015. The 2015 edition of the challenge is the first installment of a series of challenges that aim to engage researchers and practitioners in a competition for the best mashup approach. The 6 thoroughly reviewed contributions contained in this volume are extended versions of the initial proposals with which the authors expressed their interest in participating in the challenge. The initial proposals were used by the challenge organizers to select participants based on their interestingness and maturity.
Software in general is thoroughly analyzed before it is released to its users. Business processes often are not – at least not as thoroughly as they could be – before they are released to their users, e.g., employees or software agents. This article ascribes this practice to the lack of suitable instruments for business process analysts, who design the processes, and aims to provide them with the instruments necessary to analyze their own processes. We use the spreadsheet paradigm to represent business process analysis tasks, such as writing metrics and assertions, running performance analysis and verification tasks, and reporting on the outcomes, and implement a spreadsheet-based tool for business process analysis. The results of two independent user studies demonstrate the viability of the approach.
This book constitutes the thoroughly refereed post-conference proceedings of Current Trends in Web Engineering, ICWE Workshops 2015, which were held in June 2015 in Rotterdam, The Netherlands. The 16 revised full papers were selected from 23 submissions and cover topics on natural language processing for informal text, pervasive Web technologies, trends and challenges, and mining in the social Web.
This volume collects the descriptions of the demonstrations that were presented at the 13th International Conference on Business Process Management (BPM), which was held from August 31st to September 3rd, 2015, in Innsbruck, Austria. The conference brings together researchers and practitioners alike and represents one of the most prestigious scientific events on BPM worldwide. BPM is an area of research that is strongly intertwined with practice, and the demonstration track assumes a major role in the program of the conference. It allows everybody to showcase novel software solutions related to the topics of the conference, ranging, for example, from process modeling tools for end-users to low-level process mining algorithms running hidden in a back-office or in the cloud.
The Business Process Model and Notation (BPMN) is a standard for modeling and executing business processes with human or machine tasks. The semantics of tasks is usually discrete: a task has exactly one start event and one end event; for multi-instance tasks, all instances must complete before an end event is emitted. We propose a new task type and streaming connector for crowdsourcing able to run hundreds or thousands of micro-task instances in parallel. The two constructs provide for task streaming semantics that is new to BPMN, enable the modeling and efficient enactment of complex crowdsourcing scenarios, and are applicable also beyond the special case of crowdsourcing. We implement the necessary design and runtime support on top of CrowdFlower, demonstrate the viability of the approach via a case study, and report on a set of runtime performance experiments.
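The difference between discrete multi-instance semantics and streaming semantics can be illustrated in plain Python: instead of waiting for all instances before emitting a single end event, results flow downstream as individual instances complete. This is a conceptual sketch, not the BPMN implementation of the paper.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def micro_task(i):
    time.sleep(random.uniform(0.01, 0.1))  # simulated worker latency
    return f"answer-{i}"

process_downstream = print  # stand-in for the next task in the process flow

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(micro_task, i) for i in range(100)]
    # Discrete multi-instance semantics would emit one end event only after
    # ALL instances complete; streaming semantics forwards each result as
    # soon as its instance finishes, so downstream work starts immediately.
    for done in as_completed(futures):
        process_downstream(done.result())
```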
This paper proposes to approach the problem of developing mashups by exclusively focusing on the Surface Web, that is, the data and functionality accessible through common Web pages. Typically, mashups focus on the integration of resources accessible through the Deep Web, such as data feeds, Web services and Web APIs, that do not have their own UIs – next to data extracted from Web pages. Yet, these resources can be wrapped with ad-hoc UIs, suitably instrumented, and made accessible through the Surface Web. Doing so enables a UI-oriented computing paradigm that allows developers to implement mashups interactively and in a live fashion inside their Web browser, without having to program a single line of code. The goal of this paper is to showcase UI-oriented computing in practice and to demonstrate its feasibility and potential.
The ICWE 2015 Rapid Mashup Challenge is the first installment of a series of challenges that aim to engage researchers and practitioners in a competition for the best mashup approach. This paper introduces the reader to the general context of the Challenge, its objectives and motivation. It summarizes the Challenge's structure in phases and the requirements contributions were asked to satisfy so as to be eligible for participation. A brief summary of the contributions that were selected for presentation provides an overview of the content of the remainder of this volume.
In the following we report on the outcome of the ICWE 2015 Rapid Mashup Challenge (RMC), describe the voting system used, and draw some conclusions regarding the presented works.
Crowdsourcing is the outsourcing of a unit of work to a crowd of people via an open call for contributions. Thanks to the availability of online crowdsourcing platforms, such as Amazon Mechanical Turk or CrowdFlower, the practice has experienced tremendous growth over the last few years and demonstrated its viability in a variety of different fields, such as data collection and analysis or human computation. Yet, it is also increasingly struggling with the inherent limitations of these platforms: each platform has its own logic of how to crowdsource work (e.g., marketplace or contest), there is only very little support for structured work (work that requires the coordination of multiple tasks), and it is hard to integrate crowdsourced tasks into state-of-the-art business process management (BPM) or information systems.
We attack these three shortcomings by (i) developing a flexible crowdsourcing platform
(we call it Crowd Computer, CC) that allows one to program custom crowdsourcing logics
for individual as well as structured tasks, (ii) devising a BPMN-based modeling language
that allows one to program the CC intuitively, (iii) equipping the language with a
dedicated visual editor, and (iv) implementing the CC on top of standard BPM technology
that can easily be integrated into existing software and processes. We demonstrate the
effectiveness of the approach with a case study on the crowd-based mining of mashup
model patterns.
This paper proposes a new perspective on the problem of data integration on the Web: that of the Surface Web. It introduces the concept of UI-oriented computing as a computing paradigm whose core ingredients are the user interfaces that build up the Surface Web, and shows how a sensible mapping of data integration tasks to user interface elements and user interactions is able to cope with data integration scenarios that so far have only been conceived for the Deep Web with its APIs and Web services. The described approach provides a novel conceptual and technological framework for practices, such as the integration of data APIs/services and the extraction of content from Web pages, that are common but still not adequately supported. The approach targets programmers and users alike and comes as an extensible, open-source browser extension.
The goal of this paper is to understand if older adults can participate in remote museum visits with virtual environments, and which design factors would be crucial for remote participation system success. We report on a study with 30 older adult participants, identifying strong and weak points of different designs for remote participation and identifying future design directions. Our results illustrate that different designs can change communication dynamics, exploration and navigation patterns, and we describe the design features that led to this. An interaction-free design was found to be the easiest to use, while virtual environments are perceived as aesthetically appealing. Implications for developers are discussed.
Active Ageing aims to foster a physically, mentally and socially active lifestyle as a person ages. It is a complex, multi-faceted problem that involves a variety of different actors, such as policy makers, doctors, caregivers, family members, friends and, of course, older adults. This review aims to understand the role of a new actor, which increasingly plays the role of enabler and facilitator, i.e., the technology provider. The review specifically focuses on Information Technology (IT), with a particular emphasis on software applications, and on how IT can prevent decline, compensate for lost capabilities, aid care, and enhance existing capabilities. The analysis confirms the crucial role of IT in Active Ageing, shows that Active Ageing requires a multidisciplinary approach, and identifies the need for better integration of hardware, software, the environment and the involved actors.
With this article, we give an answer to one of the open problems of mashup development that users may
face when operating a model-driven mashup tool, i.e., the lack of modeling expertise. Although
commonly considered simple applications, mashups can also be complex software artifacts,
depending on the number and types of Web resources (the components) they integrate. Mashup tools
have undoubtedly simplified mashup development, yet the problem is still generally non-trivial and
requires intimate knowledge of the components provided by the mashup tool, of its underlying mashup
paradigm, and of how to apply these to the integration of the components. This knowledge is generally
neither intuitive nor standardized across different mashup tools, and the consequent lack of
modeling expertise affects both skilled programmers and end user programmers alike.
In this article, we show how to effectively assist the users of mashup tools with contextual,
interactive recommendations of composition knowledge in the form of reusable mashup model patterns.
We design and study three different recommendation algorithms and describe a pattern weaving approach
for the one-click reuse of composition knowledge. We report on the implementation of three pattern
recommender plug-ins for different mashup tools and demonstrate via user studies that recommending
and weaving contextual mashup model patterns significantly reduces development times in all three cases.
Process mining is a domain where computers undoubtedly outperform humans. It is a mathematically complex and computationally demanding problem, and event logs are at too low a level of abstraction to be intelligible to humans at large scale. We demonstrate that if instead the data to mine from are models (not logs), datasets are small (in the order of dozens rather than thousands or millions), and the knowledge to be discovered is complex (reusable model patterns), humans outperform computers. We design, implement, run, and test a crowd-based pattern mining approach and demonstrate its viability compared to automated mining. We specifically mine mashup model patterns (we use them to provide interactive recommendations inside a mashup tool) and explain the analogies with mining business process models. The problem is relevant in that reusable model patterns encode valuable modeling and domain knowledge, such as best practices or organizational conventions, from which modelers can learn and benefit when designing their own models.
Home-based exercise programs can improve the physical functioning and health status of elderly subjects.
Yet, successful implementation of exercise interventions presents major challenges for many older people:
properly monitoring and supporting the elderly while they do their home-based exercises is essential
for training success. We developed an IT-based system, ActiveLifestyle, that offers older adults a
home-based strength-training program with incorporated motivation strategies and support features.
The objective of this article is to compare three different home-based training programs with respect to
their effect on measures of gait quality and physical performance through planned comparisons between
(i) tablet-based and brochure-based interventions, (ii) individual and social motivation strategies, and
(iii) 'active' and 'inactive' participants.
Mashups have emerged as an innovative software
trend that re-interprets existing Web building blocks and leverages the composition of individual
components in novel, value-adding ways. Additional appeal also derives from their potential to
turn non-programmers into developers.
Daniel and Matera have written the first comprehensive reference work for mashups.
They systematically cover the main concepts and techniques underlying mashup design and development,
the synergies among the models involved at different levels of abstraction, and the way models
materialize into composition paradigms and architectures of corresponding development tools. The
book deliberately takes a balanced approach, combining a scientific perspective on the topic with
an in-depth view on relevant technologies.
To this end, the first part of the book introduces the theoretical and technological
foundations for designing and developing mashups, as well as for designing tools that can aid mashup
development. The second part then focuses more specifically on various aspects of mashups. It discusses
a set of core component technologies, core approaches, and architectural patterns, with a particular
emphasis on tool-aided mashup development exploiting model-driven architectures. Development processes
for mashups are also discussed, and special attention is paid to composition paradigms for the end-user
development of mashups and quality issues.
Overall, the book is of interest to a wide range of readers. Students, lecturers,
and researchers will find a comprehensive overview of core concepts and technological foundations for
mashup implementation and composition. Even without low-level coding details, practitioners like
software architects will find guidance on key implementation concepts, architectural patterns, and
development tools and approaches. A related website provides additional teaching material which can
be used either as part of a course or for self study.
Despite the common claim by mashup platforms that they enable end users to develop their own software, in
practice end users still don't develop their own mashups, as the highly technical or nonexistent user bases
of today's mashup platforms testify. The key shortcoming of current platforms is their general-purpose
nature, which privileges expressive power over intuitiveness. In our prior work, we have demonstrated
that a domain-specific mashup approach, which privileges intuitiveness over expressive power, has much
more potential to enable end user development (EUD). The problem is that developing mashup platforms –
domain-specific or not – is complex and time-consuming. In addition, domain-specific mashup platforms,
by their very nature, target only a small user base, i.e., the experts of the target domain, which
makes their development unsustainable if it is not adequately supported and automated.
With this article, we aim to make the development of custom, domain-specific mashup platforms cost-effective.
We describe a mashup tool development kit (MDK) that is able to automatically generate a mashup platform
(comprising custom mashup and component description languages and design time and runtime environments)
from a conceptual design and to provision it as a service. We equip the kit with a dedicated development
methodology and demonstrate the applicability and viability of the approach with the help of two case studies.
Crowdsourcing platforms are changing the way people can work and earn money. The population of workers on crowdsourcing platforms already counts millions and keeps growing. Workers on these platforms face several usability challenges, which we identify in this work by running two surveys on the CrowdFlower platform. Our surveys show that the majority of workers spend more than 25% of their time searching for tasks to work on. Limitations in the current user interface of the task listing page prevent workers from focusing more on task execution. In this work we present an attempt to design and implement a specific user interface for task listing, aimed at helping workers spend less time searching for tasks and thus navigate among them more easily.
This tutorial aims to provide insight into the constantly evolving mashup ecosystem. It presents core definitions, overviews a representative set of mashup components (the resources to be integrated) and mashup models (how resources are integrated), illustrates composition paradigms supported by state-of-the-art mashup tools, and discusses the quality of the resulting mashups from a user perspective. The goal of the tutorial is to introduce the topic and show its applicability, benefits and limitations.
Over the past few years, mashup development has been made more accessible with tools such as Yahoo! Pipes that make the development task simpler through simplifying technologies. However, mashup development is still a difficult task that requires knowledge about the functionality of web APIs, parameter settings, data mappings, among other development efforts. In this work, we aim at assisting users in the mashup process by recommending development knowledge that comes in the form of reusable composition knowledge. This composition knowledge is harvested from a repository of existing mashup models by mining a set of composition patterns, which are then used for interactively providing composition recommendations while developing the mashup. When the user accepts a recommendation, it is automatically woven into the partial mashup model by applying modeling actions as if they were performed by the user. To demonstrate our approach, we have implemented Baya, a Firefox plugin for Yahoo! Pipes that shows that it is indeed possible to harvest useful composition patterns from existing mashups, and that we are able to provide complex recommendations that can be automatically woven inside Yahoo! Pipes’ web-based mashup editor.
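Conceptually, weaving a recommended pattern into a partial mashup model amounts to replaying the modeling actions the user would otherwise perform. A toy sketch with dict-based models follows; the data structures are invented for illustration and are not Baya's internal representation.

```python
# Toy mashup model: components plus data-flow connectors (illustrative).
partial_model = {
    "components": {"fetch_feed": {"type": "rss"}},
    "connectors": [],
}
pattern = {  # a recommended, reusable composition pattern
    "components": {"filter": {"type": "filter", "rule": "contains('web')"},
                   "sort": {"type": "sort", "key": "pubDate"}},
    "connectors": [("fetch_feed", "filter"), ("filter", "sort")],
}

def weave(model, pattern):
    """Apply the pattern as if the user performed the actions manually."""
    for name, comp in pattern["components"].items():
        model["components"].setdefault(name, comp)  # add missing components
    for conn in pattern["connectors"]:
        if conn not in model["connectors"]:
            model["connectors"].append(conn)        # add missing connectors
    return model

print(weave(partial_model, pattern)["connectors"])
# [('fetch_feed', 'filter'), ('filter', 'sort')]
```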
Service composition, web mashups, and business process modeling are based on the composition and reuse of
existing functionalities, user interfaces, or tasks. Composition tools typically come with their own,
purposely built composition languages, based on composition techniques like data flow or control flow,
and with only minor distinguishing features besides the different syntax. Yet, all these composition
languages are developed from scratch, without reference specifications (e.g., XML schemas), and by reasoning
in terms of low-level language constructs. That is, there is neither reuse nor design support in the development
of custom composition languages.
We propose a conceptual design technique for the construction of custom composition languages that is based
on a generic composition reference model and that fosters reuse. The approach is based on the abstraction of
common composition techniques into high-level language features, a set of reference specifications for each
feature, and the assembling of features into custom languages by guaranteeing their soundness. We specifically
focus on mashup languages.
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint
university/industry research projects, and novel IT products on the market. SOC is the computing paradigm
that uses Web services as building blocks for the engineering of composite, distributed applications out of
the reusable application logic encapsulated by Web services. Web services could be considered the best-known
and most standardized technology in use today for distributed computing over the Internet.
This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical
and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses
on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust,
Web service management, real-world case studies, and novel perspectives and future directions.
The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer,
2014). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort
that involved more than 100 authors, comprising the world’s leading experts in this field.
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint
university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that
uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable
application logic encapsulated by Web services. Web services could be considered the best-known and most
standardized technology in use today for distributed computing over the Internet.
Web Services Foundations is the first installment of a two-book collection covering the state-of-the-art of both
theoretical and practical aspects of Web services and SOC research. This book specifically focuses on the foundations
of Web services and SOC and covers - among others - Web service composition, non-functional aspects of Web services,
Web service selection and recommendation, and assisted Web service composition.
The editors collect advanced topics in the second book of the collection, Advanced Web Services (Springer, 2014).
Both books together comprise approximately 1400 pages and are the result of an enormous community effort that involved
more than 100 authors, comprising the world’s leading experts in this field.
This book constitutes the refereed proceedings of the 10th International Conference on Mobile Web Information Systems, MobiWIS 2013, held in Paphos, Cyprus, in August 2013. The 25 papers (20 full research papers, 4 demonstration papers, and one abstract of the keynote speech) were carefully reviewed and selected from various submissions. The papers cover topics related to the mobile Web and Information Systems (WISs), such as mobile Web services, location-awareness, design and development, social computing and society, development infrastructures and services, SOA and trust, UI migration and human factors, and the Web of Things and networks.
This book constitutes the proceedings of the 11th International Conference on Business Process Management, BPM 2013, held in Beijing, China, in August 2013. The 17 regular papers and 8 short papers included in this volume were carefully reviewed and selected from 118 submissions. The papers are organized in 7 topical sections: process mining; conformance checking; process data; process model matching; process architectures and collaboration; alternative perspectives; and industry papers.
Background: Reaction time, coordination, and cognition performance typically diminish in older adults, which may
lead to gait impairments, falls, and injuries. Regular strength–balance exercises are highly recommended to reduce
this problem and to improve health, well-being, and independence in old age. However, many older people face a
lack of motivation in addition to other strong barriers to exercise. We developed ActiveLifestyle, an information
technology (IT)-based system for active and healthy aging aiming at improving balance and strength. ActiveLifestyle
is a training app that runs on a tablet and assists, monitors, and motivates older people to follow personalized
training plans autonomously at home.
Objectives: The objectives were to (1) investigate which IT-mediated motivation strategies increase adherence to
physical exercise training plans in older people, (2) assess the impact of ActiveLifestyle on physical activity
behavior change, and (3) demonstrate the effectiveness of the ActiveLifestyle training to improve gait speed.
Methods: 44 older adults followed personalized, 12-week strength and balance training plans. All participants
performed the exercises autonomously at home. Questionnaires were used to assess the technological familiarity
and stage of behavior change, as well as the effectiveness of the motivation instruments adopted by ActiveLifestyle.
Adherence to the exercise plan was evaluated using performance data collected by the app and through information
given by the participants during the study. Pretests and posttests were performed to evaluate gait speed of the
participants before and after the study.
Results: Participants had a mean age of 75 years (SD 6), were predominantly female (64%), held a trade or professional diploma (54%), and had predominantly worked in seated professions (43%). Of the 44 participants who enrolled, 33 (75%) completed
the study. The app proved to assist and motivate independently living and healthy older adults to autonomously
perform strength–balance exercises (median=6 on a 7-point Likert scale). Social motivation strategies proved more
effective than individual strategies to stimulate the participants to comply with the training plan, as well as
to change their behavior permanently toward a more physically active lifestyle. The exercises were effective in improving preferred and fast gait speed.
Conclusions: ActiveLifestyle assisted and motivated independently living and healthy older people to autonomously
perform strength–balance exercises over 12 weeks and had low dropout rates. The social motivation strategies were more effective in stimulating the participants to comply with the training plan and remain in the intervention. The adoption of assistive technology devices for physical intervention tends to motivate older people and keep them exercising for longer periods of time.
The claim of this paper is that reuse on the Web – if sensibly facilitated – can be achieved in a much more intuitive and efficient fashion than today. The idea is to invert the current perspective on reuse by moving away from programmer-oriented artifacts, such as APIs, web services and data feeds, and focusing on user-oriented artifacts, i.e., graphical user interfaces (UIs). The paper defines a new kind of API, the interactive API (iAPI), which reconciles the intuitiveness of interactive UIs with the power of programmable APIs and enables (i) programmatic access to UIs and (ii) interactive, live programming. The paper discusses use cases and implementation options and lays the foundation for UI-oriented computing as a discipline.
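To make the notion of programmatic UI access concrete, here is a minimal Python sketch of the general idea (an illustration only, not the paper's iAPI implementation; the data-iapi attribute and the page structure are invented for this example): HTML elements annotated as data items are parsed and reused as if they were an API.

    # A minimal sketch of "programmatic access to a UI": elements annotated
    # with a hypothetical data-iapi attribute are treated as an interactive API.
    # The attribute name is an assumption, not the paper's actual vocabulary.
    from html.parser import HTMLParser

    class IApiExtractor(HTMLParser):
        """Collects the text of elements annotated as iAPI data items."""
        def __init__(self):
            super().__init__()
            self.items, self._in_item = [], False

        def handle_starttag(self, tag, attrs):
            if dict(attrs).get("data-iapi") == "item":
                self._in_item = True

        def handle_endtag(self, tag):
            self._in_item = False

        def handle_data(self, data):
            if self._in_item and data.strip():
                self.items.append(data.strip())

    ui = '<ul><li data-iapi="item">Paper A</li><li data-iapi="item">Paper B</li></ul>'
    parser = IApiExtractor()
    parser.feed(ui)
    print(parser.items)  # ['Paper A', 'Paper B'] -- the UI reused as a data API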
Currently, many people are trying to leverage the success of social networks by implementing social applications. However, implementing social applications is complex, due to the requirements and constraints imposed by the social networks to protect their data. In this work we present Simple Flow, a tool that simplifies the creation of social applications. Simple Flow proposes a process-based approach to the design and execution of social applications. Simple Flow targets end-users and programmers with no experience in programming for social networks, giving them the possibility to design processes by concatenating social network actions (like posting a message or commenting on a photo). To execute the designed processes, Simple Flow interconnects, at runtime, template web pages (one page per action) according to the process design defined previously. These templates abstract away the complexities of the interactions with social networks.
This book constitutes the refereed proceedings of the 13th International Conference on Web Engineering, ICWE 2013, held in Aalborg, Denmark, in July 2013. The 21 full research papers, 4 industry papers, and 11 short papers presented were carefully reviewed and selected from 92 submissions. The scientific program was completed with 7 workshops, 6 demonstrations and posters. The papers cover a wide spectrum of topics, such as, among others: web mining and knowledge extraction, semantic and linked data management, crawling and web research, model-driven web engineering, component-based web engineering, Rich Internet Applications (RIAs) and client-side programming, web services, and end-user development.
Over the last decade, we have been witnessing an increasing use of Business Intelligence (BI) solutions that allow enterprises to query, understand, and analyze business data in order to make better decisions. Traditionally, BI applications allowed business people to acquire useful knowledge from the data of their organization by means of a variety of technologies, such as data warehousing, data mining, business performance management, OLAP, periodical business reports, and the like. Yet, in recent years, a new trend has emerged: BI applications no longer limit their analysis to the data inside a company only. Increasingly, they also source their data from the outside, i.e., from the Web, and complement company-internal data with value-adding information from the Web (e.g., retail prices of products sold by competitors), in order to provide richer insights into the dynamics of today’s business.
In parallel to the move of data from the Web into BI applications, BI applications are experiencing a
trend from company-internal information systems to the cloud: BI as a service (e.g., hosted BI platforms
for small- and medium-size companies) is the target of huge investments and the focus of large research
efforts by industry and academia.
This special issue of Information Systems Frontiers on Business Intelligence and the Web targets the above
two moves: Web data feeding BI and engineering Web-enabled BI.
Despite several efforts to simplify the composition process, the learning effort required to use existing mashup editors remains high. In this paper, we describe how this barrier can be lowered by means of an assisted development approach that seamlessly integrates automatic composition and interactive pattern recommendation techniques into existing mashup platforms, supporting easy mashup development by end users. We showcase the use of such an assisted development environment in the context of the open-source mashup platform Apache Rave. Results of our user studies demonstrate the benefits of our approach for end-user mashup development.
With age, reaction time, coordination, and cognition tend to deteriorate, which may lead to gait impairments, falls, and injuries. To reduce this problem in the elderly and to improve health, well-being, and independence, regular balance and strength exercises are recommended. However, the elderly face strong barriers to exercise. We developed Active Lifestyle, an IT-based system for active and healthy ageing aimed at improving the elderly's balance and strength. Active Lifestyle is a proactive training application, running on a tablet, which assists, monitors, and motivates the elderly to follow personalized training plans autonomously at home, while integrating them socially. We ran a pilot study to investigate: i) the feasibility of assisting the autonomous, physical training of independently living elderly with the Active Lifestyle system, ii) the adherence of the participants to the training plans, and iii) the effectiveness of the motivation instruments built into the system. The study shows that the Active Lifestyle app prototype has valuable potential to support physical exercise practice at home and that it is worthwhile to develop it further into a more mature system. Furthermore, the results add to the knowledge base on mobile applications for the elderly, showing that elderly users can learn to work with mobile-based systems. The Active Lifestyle app proved viable to support and motivate independently living elderly to autonomously perform balance and strength exercises.
Wireless sensor networks (WSNs) have been a promising technology for quite some time. Their success stories are, however, restricted to environmental monitoring. In the industrial domain, their adoption has been hampered by two main factors. First, there is a lack of integration of WSNs with business process modeling languages and back-ends. Second, programming WSNs is still challenging as it is mainly performed at the operating system level. To this end, we provide the makeSense framework, a unified programming framework and a compilation chain that, from high-level business process specifications, generates code ready for deployment on WSN nodes. In this paper, we present the makeSense framework and the application scenario for our final deployment.
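As a rough illustration of the compilation idea (the specification format and the generated pseudo-instructions below are invented; the actual makeSense toolchain and its languages are considerably richer), a high-level process step can be mapped mechanically to node-level behavior:

    # A toy illustration, not the actual makeSense compiler: a high-level
    # process specification is translated step by step into node-level
    # pseudo-code. Both the spec format and the output are invented here.
    spec = [
        {"task": "sense", "sensor": "temperature", "period_s": 60},
        {"task": "aggregate", "op": "avg", "window": 10},
        {"task": "report", "threshold": 30.0},
    ]

    def compile_task(t):
        if t["task"] == "sense":
            return f'read_sensor("{t["sensor"]}") every {t["period_s"]}s'
        if t["task"] == "aggregate":
            return f'{t["op"]}(last {t["window"]} samples)'
        if t["task"] == "report":
            return f'send_to_gateway(value) if value > {t["threshold"]}'
        raise ValueError(f"unknown task {t['task']}")

    for step in spec:
        print(compile_task(step))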
Facilitating compliance management, that is, assisting a company’s management in conforming to laws, regulations, standards, contracts, and policies, is a hot but non-trivial task. The service-oriented architecture (SOA) has evolved traditional, manual business practices into modern, service-based IT practices that ease part of the problem: the systematic definition and execution of business processes. This, in turn, facilitates the online monitoring of system behaviors and the enforcement of allowed behaviors - all ingredients that can be used to assist compliance management on the fly during process execution. In this paper, instead of focusing on monitoring and runtime enforcement of rules or constraints, we strive for an alternative approach to compliance management in SOAs that aims at assessing and improving compliance. We propose two ingredients: (i) a model and tool to design compliant service-based processes and to instrument them in order to generate evidence of how they are executed and (ii) a reporting and analysis suite to create awareness of a company’s compliance state and to enable understanding why and where compliance violations have occurred. Together, these ingredients result in an approach that is close to how the real stakeholders - compliance experts and auditors - actually assess the state of compliance in practice and that is less intrusive than enforcing compliance.
The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to compose their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) don’t convey enough meaning to non-programmers. In this article, we propose a domain-specific approach to mashups that 'speaks the language of the user', i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup tool means, which role the mashup meta-model and the domain model play and how these can be merged into a domain-specific mashup meta-model. We exemplify the approach by implementing a mashup tool for a specific scenario (research evaluation) and describe the respective user study. The results of a first user study confirm that domain-specific mashup tools indeed lower the entry barrier to mashup development.
Wireless Sensor and Actuator Networks (WSNs) are distributed sensor and actuator networks that monitor and control real-world phenomena, enabling the integration of the physical with the virtual world. They are used in domains like building automation, control systems, remote healthcare, etc., which are all highly process-driven. Today, tools and insights of Business Process Modeling (BPM) are not used to model WSN logic, as BPM focuses mostly on the coordination of people and IT systems and neglects the integration of embedded IT. WSN development still requires significant special-purpose, low-level, and manual coding of process logic. By exploiting similarities between WSN applications and business processes, this work aims to create a holistic system enabling the modeling and execution of executable processes that integrate, coordinate, and control WSNs. Concretely, we present a WSN-specific extension for Business Process Modeling Notation (BPMN) and a compiler that transforms the extended BPMN models into WSN-specific code to distribute process execution over both a WSN and a standard business process engine. The developed tool-chain allows modeling of an independent control loop for the WSN.
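The following Python sketch illustrates the splitting idea under stated assumptions: the wsn:* extension attribute is a made-up stand-in for the paper's actual BPMN extension vocabulary. Tasks carrying the extension would be compiled for the sensor nodes, while the others are handed to a standard process engine.

    # A minimal sketch of partitioning an extended BPMN model between a WSN
    # and a standard process engine. The wsn:* names are illustrative only.
    import xml.etree.ElementTree as ET

    BPMN = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"
    WSN = "{http://example.org/bpmn/wsn}"  # hypothetical extension namespace

    model = """
    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 xmlns:wsn="http://example.org/bpmn/wsn">
      <process id="climate-control">
        <task id="t1" wsn:sense="temperature"/>
        <task id="t2" name="Notify facility manager"/>
      </process>
    </definitions>
    """

    root = ET.fromstring(model)
    for task in root.iter(f"{BPMN}task"):
        if f"{WSN}sense" in task.attrib:      # runs on the sensor nodes
            print("WSN node code for:", task.get("id"), task.get(f"{WSN}sense"))
        else:                                 # runs on the process engine
            print("Engine task:", task.get("id"), task.get("name"))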
This demo introduces Eventifier, a tool that helps reconstruct an event log from the operational databases upon which process instances have been executed. The purpose of reconstructing such an event log is to discover process models from it; hence, the tool targets researchers and practitioners interested in process mining. The aim of this demo is to convey to the participants both the conceptual and practical implications of identifying and extracting process execution events from such databases in order to reconstruct ready-to-use event logs for process discovery.
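A toy example of the underlying idea (table, column, and event names are invented; a real operational database first requires identifying which rows constitute events and what the case identifier is): rows from different tables are turned into timestamped events and grouped into traces.

    # A simplified sketch of event-log reconstruction: rows touching the same
    # case become timestamped events, sorted into one trace per case.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE orders(order_id, created_at);
    CREATE TABLE shipments(order_id, shipped_at);
    INSERT INTO orders VALUES (1, '2012-01-03'), (2, '2012-01-04');
    INSERT INTO shipments VALUES (1, '2012-01-05');
    """)

    rows = db.execute("""
    SELECT order_id, 'order created' AS event, created_at AS ts FROM orders
    UNION ALL
    SELECT order_id, 'order shipped', shipped_at FROM shipments
    ORDER BY order_id, ts
    """).fetchall()

    for case_id, event, ts in rows:   # one trace per case, ready for discovery
        print(case_id, ts, event)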
Web mashups are a new generation of applications based on the composition of ready-to-use, heterogeneous components. In different contexts, ranging from the consumer Web to Enterprise systems, the potential of this new technology is to make users evolve from passive receivers of applications to actors actively involved in the creation of their artifacts, thus accommodating the inherent variability of the users' needs. Current advances in mashup technologies are good candidates to satisfy this requirement. However, some issues are still largely unexplored. In particular, quality issues specific for this class of applications, and the way they can guide the users in the identification of adequate components and composition patterns, are neglected. This paper discusses quality dimensions that can capture the intrinsic quality of mashup components, as well as the components' capacity to maximize the quality and the user-perceived value of the overall composition. It also proposes an assisted composition process in which quality becomes the driver for recommending to the users how to complete mashups, based on the integration of quality assessment and recommendation techniques within a tool for mashup development.
Social networks and crowdsourcing platforms provide powerful means to execute tasks that require human intelligence instead of just machine computation power. Crowdsourcing especially has demonstrated its applicability in many fields, and a variety of platforms have been created for delegating small tasks to human solvers on the Web. However, creating applications that are structured, i.e., applications that combine more than a single task, is a complex and typically manual endeavor that requires many different interactions with crowdsourcing platforms. In this paper, we introduce the idea of a crowd computer, discuss its properties, and propose a programming paradigm for the development of crowdsourcing applications. In particular, we argue in favor of business processes as the formalism with which to program the crowd computer and show how they enable the reuse of intricate crowdsourcing practices.
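A conceptual sketch of what "programming the crowd computer" as a process could look like (post_task and collect_answers are hypothetical platform calls, stubbed here; they do not correspond to a real crowdsourcing API):

    # A two-step "program" over human computation: post a task, collect
    # answers, aggregate by majority vote. All platform calls are stubs.
    from collections import Counter

    def post_task(question):
        print("posting to crowd:", question)
        return "task-42"

    def collect_answers(task_id):
        return ["cat", "cat", "dog", "cat", "dog"]  # stubbed worker answers

    def majority_vote(answers):
        return Counter(answers).most_common(1)[0][0]

    task_id = post_task("What animal is in this picture?")
    result = majority_vote(collect_answers(task_id))
    print("crowd result:", result)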
The 2nd International Workshop on Business intelligencE and the WEB (BEWEB) was co-located with the EDBT/ICDT 2011 Joint Conference in Uppsala (Sweden) on March 25, 2011. BEWEB intends to be an international forum for researchers and practitioners to exchange ideas on how to leverage the huge amount of data that is available on the Web in BI applications and on how to apply Web engineering methods and techniques to the design of BI applications. This report summarizes the 2011 edition of BEWEB.
The recent emergence of mashup tools has refueled research on end user development, i.e., on enabling end-users without programming skills to compose their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. In this paper, we propose a domain-specific approach to mashups that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup tool means, which role the mashup meta-model and the domain model play and how these can be merged into a domain-specific mashup meta-model. We exemplify the approach by implementing a mashup tool for a specific domain (research evaluation) and describe the respective user study. The results of the user study confirm that domain-specific mashup tools indeed lower the entry barrier to mashup development.
The industrial adoption of wireless sensor networks (WSNs) is hampered by two main factors. First, there is a lack of integration of WSNs with business process modeling languages and back-ends. Second, programming WSNs is still challenging as it is mainly performed at the operating system level. To this end, we provide makeSense: a unified programming framework and a compilation chain that, from high-level business process specifications, generates code ready for deployment on WSN nodes.
Age typically brings motor control impairments and loss of lower body muscle strength, which can lead to falls, injuries and, in the worst case, death. It is well known that the practice of simple daily physical exercises can reduce the likelihood of falls; however, it is also known that it is far from easy to motivate elders to exercise, especially autonomously at home. To address this challenge, we have designed an app that not only makes it feasible and easier to follow a training plan of physical exercises, but that also introduces individual and social motivational instruments to increase adherence to the plan – everything inside a virtual community composed of training partners, healthcare experts, and family members.
Given their increasing popularity and novel requirements and characteristics, telco mashups deserve an analysis that goes beyond what's available for mashups in general. Here, the authors cluster telco services into different types, analyze their features, derive a telco mashup reference architecture, and survey how well existing mashup tools can respond to these mashups' novel needs.
Despite the emergence of mashup tools like Yahoo! Pipes or JackBe Presto Wires, developing mashups is still non-trivial and requires intimate knowledge of the functionality of web APIs and services, their interfaces, parameter settings, data mappings, and so on. We aim to assist the mashup process and to turn it into an interactive co-creation process, in which one part of the solution comes from the developer and the other part from reusable composition knowledge that has proven successful in the past. We harvest composition knowledge from a repository of existing mashup models by mining a set of reusable composition patterns, which we then use to interactively provide composition recommendations to developers while they model their own mashup. Upon acceptance of a recommendation, the purposeful design of the respective pattern types allows us to automatically weave the chosen pattern into the partial mashup model, in practice performing a set of modeling actions on behalf of the developer. The experimental evaluation of our prototype implementation demonstrates that it is indeed possible to harvest meaningful, reusable knowledge from existing mashups, and that even complex recommendations can be efficiently queried and woven inside the client browser.
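The weaving step can be pictured with a minimal sketch (the model representation below is invented for illustration; real mashup models also carry parameter settings and data mappings): the components and connectors of the recommended pattern that are missing from the partial mashup are added on the developer's behalf.

    # A minimal sketch of "weaving" a recommended composition pattern into a
    # partial mashup model; component names are illustrative.
    partial = {"components": ["rss-feed"], "connectors": []}
    pattern = {"components": ["rss-feed", "filter", "map-view"],
               "connectors": [("rss-feed", "filter"), ("filter", "map-view")]}

    def weave(model, pattern):
        for c in pattern["components"]:
            if c not in model["components"]:
                model["components"].append(c)      # add missing components
        for edge in pattern["connectors"]:
            if edge not in model["connectors"]:
                model["connectors"].append(edge)   # add missing connectors
        return model

    print(weave(partial, pattern))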
The recent emergence of mashup tools has refueled research on end user development, i.e., on enabling end users without programming skills to compose their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) don’t convey enough meaning to non-programmers. We propose a domain-specific approach to mashups that “speaks the language of the user”, i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup tool means, which role the mashup meta-model and the domain model play, and how these can be merged into a domain-specific mashup meta-model. We apply the approach by implementing a mashup tool for the research evaluation domain. Our user study confirms that domain-specific mashup tools indeed lower the entry barrier to mashup development.
In this demonstration, we present ResEval Mash, a mashup platform for research evaluation, i.e., for the assessment of the productivity or quality of researchers, teams, institutions, journals, and the like - a topic most of us are acquainted with. The platform is specifically tailored to the need to source data about scientific publications and researchers from the Web, aggregate them, compute metrics (also complex and ad-hoc ones), and visualize them. ResEval Mash is a hosted mashup platform with a client-side editor and runtime engine, both running inside a common web browser. It also supports the processing of large amounts of data, a feature that is achieved via the sensible distribution of the respective computation steps over client and server. Our preliminary user study shows that ResEval Mash indeed has the power to enable domain experts to develop their own mashups (research evaluation metrics); other mashup platforms rather support skilled developers. The reason for this success is ResEval Mash's domain specificity.
In this demonstration, we describe Baya, an extension of Yahoo! Pipes that guides and speeds up development by interactively recommending composition knowledge harvested from a repository of existing pipes. Composition knowledge is delivered in the form of reusable mashup patterns, which are retrieved and ranked on the fly while the developer models his own pipe (the mashup) and which are automatically woven into his pipe model upon selection. Baya mines candidate patterns from pipe models available online and thereby leverages the knowledge of the crowd, i.e., of other developers. Baya is an extension for the Firefox browser that seamlessly integrates with Pipes. It enhances Pipes with a powerful new feature for both expert developers and beginners, speeding up the former and enabling the latter. The discovery of composition knowledge is provided as a service and can easily be extended toward other modeling environments.
With the success of Web 2.0 we are witnessing a growing number of services and APIs exposed by Telecom, IT and content providers. Targeting the Web community and, in particular, Web application developers, service providers expose capabilities of their infrastructures and applications in order to open new markets and to reach new customer groups. However, due to the complexity of the underlying technologies, the last step, i.e., the consumption and integration of the offered services, is a non-trivial and time-consuming task that is still a prerogative of expert developers. Although many approaches to lower the entry barriers for end users exist, little success has been achieved so far. In this paper, we introduce the OMELETTE project and show how it addresses end-user-oriented telco mashup development. We present the goals of the project, describe its contributions, summarize current results, and describe current and future work.
The industrial adoption of wireless sensor networks (WSNs) is hampered by two main factors. First, there is a lack of integration of WSNs with business process back-ends. Second, programming WSNs is still challenging as it is mainly performed at the operating system level. To this end, we provide makeSense - a unified programming framework and a compilation chain that, from high-level business process specifications, generates code ready for deployment on WSN nodes.
LNBIP 99 and LNBIP 100 together constitute the thoroughly refereed post-proceedings of twelve international workshops held in Clermont-Ferrand, France, in conjunction with the 9th International Conference on Business Process Management, BPM 2011, in August 2011. The twelve workshops focused on Business Process Design (BPD 2011), Business Process Intelligence (BPI 2011), Business Process Management and Social Software (BPMS2 2011), Cross-Enterprise Collaboration (CEC 2011), Empirical Research in Business Process Management (ER-BPM 2011), Event-Driven Business Process Management (edBPM 2011), Process Model Collections (PMC 2011), Process-Aware Logistics Systems (PALS 2011), Process-Oriented Systems in Healthcare (ProHealth 2011), Reuse in Business Process Management (rBPM 2011), Traceability and Compliance of Semi-Structured Processes (TC4SP 2011), and Workflow Security Audit and Certification (WfSAC 2011). In addition, the proceedings also include the Process Mining Manifesto (as an Open Access Paper), which has been jointly developed by more than 70 scientists, consultants, software vendors, and end-users.
LNBIP 99 and LNBIP 100 together constitute the thoroughly refereed post-proceedings of twelve international workshops held in Clermont-Ferrand, France, in conjunction with the 9th International Conference on Business Process Management, BPM 2011, in August 2011. The twelve workshops focused on Business Process Design (BPD 2011), Business Process Intelligence (BPI 2011), Business Process Management and Social Software (BPMS2 2011), Cross-Enterprise Collaboration (CEC 2011), Empirical Research in Business Process Management (ER-BPM 2011), Event-Driven Business Process Management (edBPM 2011), Process Model Collections (PMC 2011), Process-Aware Logistics Systems (PALS 2011), Process-Oriented Systems in Healthcare (ProHealth 2011), Reuse in Business Process Management (rBPM 2011), Traceability and Compliance of Semi-Structured Processes (TC4SP 2011), and Workflow Security Audit and Certification (WfSAC 2011). In addition, the proceedings also include the Process Mining Manifesto (as an Open Access Paper), which has been jointly developed by more than 70 scientists, consultants, software vendors, and end-users.
Assessing whether a company's business practices conform to laws and regulations and follow standards and SLAs, i.e., compliance management, is a complex and costly task. Few software tools aiding compliance management exist; yet, they typically do not address the needs of those who are actually in charge of assessing and understanding compliance. We advocate the use of a compliance governance dashboard and suitable root cause analysis techniques that are specifically tailored to the needs of compliance experts and auditors. The design and implementation of these instruments are challenging for at least three reasons: (1) it is fundamental to identify the right level of abstraction for the information to be shown; (2) it is not trivial to visualize different analysis perspectives; and (3) it is difficult to manage and analyze the large amount of concepts, instruments, and data involved. This chapter shows how to address these issues, which concepts and models underlie the problem, and, eventually, how IT can effectively support compliance analysis in Service-Oriented Architectures (SOAs).
Workflow management systems focus on the coordination of people and work items, service composition approaches on the coordination of service invocations, and, recently, web mashups have started focusing on the integration and coordination of pieces of user interfaces (UIs), e.g., a Google map, inside simple web pages. While these three approaches have evolved in a rather isolated fashion - although they can be seen as an evolution of the componentization and coordination idea from people to services to UIs - in this paper we describe a component-based development paradigm that reconciles the core strengths of these three approaches in a single model and language. We call this new paradigm distributed UI orchestration, so as to reflect the mashup-like and process-based nature of our target applications. In order to aid developers in implementing UI orchestrations, we equip the described model and language with suitable design, deployment, and runtime instruments, covering the whole life cycle of distributed UI orchestrations.
In this paper, we approach the problem of interactively querying and recommending composition knowledge in the form of reusable composition patterns. The goal is to aid developers in their composition task. We specifically focus on mashups and browser-based modeling tools, a domain that increasingly also targets people without profound programming experience. The problem is generally complex, in that we may need to match possibly complex patterns on the fly and in an approximate fashion. We describe an architecture and a pattern knowledge base that are distributed over client and server, and a set of client-side search algorithms for the retrieval of step-by-step recommendations. The performance evaluation of our prototype implementation demonstrates that - if sensibly structured - even complex recommendations can be efficiently computed inside the client browser.
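A much-simplified sketch of the retrieval step (the overlap-based score is chosen for illustration and is not the paper's actual ranking function): candidate patterns are ranked by how well they overlap with, and extend, the partial mashup being modeled.

    # Rank candidate patterns against a partial mashup: patterns must share
    # at least one component, and extending the model is rewarded.
    def score(partial_components, pattern):
        overlap = len(partial_components & pattern)
        new = len(pattern - partial_components)
        return overlap + 0.5 * new if overlap else 0.0

    partial = {"rss-feed", "filter"}
    knowledge_base = [
        {"filter", "map-view"},
        {"rss-feed", "filter", "translate"},
        {"weather", "map-view"},
    ]
    for p in sorted(knowledge_base, key=lambda p: -score(partial, p)):
        print(round(score(partial, p), 1), sorted(p))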
Over the last decade, we have witnessed an increasing use of Business Intelligence (BI) solutions that allow business people to query, understand, and analyze their business data in order to make better decisions. Traditionally, BI applications allow management and decision-makers to acquire useful knowledge about the performance and problems of business from the data of their organization by means of a variety of technologies, such as data warehousing, data mining, business performance management, OLAP, and periodical business reports. Research in these areas has produced consolidated solutions, techniques, and methodologies, and there are a variety of commercial products available that are based on these results.
Business Intelligence Applications and the Web: Models, Systems and Technologies summarizes current research advances in BI and the Web, emphasizing research solutions, techniques, and methodologies that combine both areas in the interest of building better BI solutions. This comprehensive collection aims to emphasize the interconnections that exist between the two research areas and to highlight the benefits of the combined use of BI and Web practices, which so far have acted rather independently, often in cases where their joint application would have been sensible.
Workflow Management Systems (WfMSs) are complex software systems whose performance requires proper assessment. We propose here an approach to obtaining performance measurements for WfMSs (in order to compare them) by adopting a black-box approach - an aspect not yet adequately studied in the literature - and report some preliminary results: this allows us to evaluate at run-time the overall performance of a WfMS, comprising all of its constituent elements. We set up two reference processes and four different experiments to simulate realistic load conditions, ranging from one process instance to several process instances entering the system either gradually or simultaneously. We identify some key performance indicators (CPU, main memory and disk workloads, and completion time) for the tests. We choose five WfMSs (some publicly available, some commercial) and install them in their respective default configurations on five separate virtual machines (VMware). For every WfMS and every experiment, we perform measurements and specifically focus on completion time. The results enable us to measure how efficient the WfMSs are in general and how well they react to an increase in workload.
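The black-box measurement idea can be illustrated with a toy harness (run_instance stands in for a call to a real WfMS API; the timings are illustrative): instances enter either gradually or simultaneously, and the completion-time KPI is recorded.

    # A toy version of the black-box benchmark: measure overall completion
    # time as process instances arrive gradually or all at once.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_instance(i):
        time.sleep(0.01)  # stand-in for one process instance executing

    def experiment(n, gradual):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n) as pool:
            for i in range(n):
                if gradual:
                    time.sleep(0.005)  # instances arrive one after another
                pool.submit(run_instance, i)
        return time.perf_counter() - start  # completion-time KPI

    print("gradual     :", round(experiment(20, True), 3), "s")
    print("simultaneous:", round(experiment(20, False), 3), "s")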
Mashups have relatively simple, component-based development paradigms, yet few end users develop their own applications. To help turn end users into developers and innovators, the authors present two mashup platforms for lightweight Web development practices and discuss open challenges.
The potential of mashups in web applications has not yet been fully exploited in practice. In fact, many of the challenges introduced by some of the most advanced types of mashups are not yet solved and require new mechanisms that allow their proper specification and execution. Among the different types of identified mashups, in this work we focus on process mashups, a type of mashup where the integration and coordination of people, tasks, services, and UIs is required. Specifically, from the set of characteristics found in this type of mashup, we focus on the collaborative aspect of process tasks. To deal with it, we provide a modeling solution that extends a business process modeling language, namely BPMN, to represent this characteristic. The solution has been defined within the context of OOWS4BP, a model-driven engineering approach for the development of business process-driven web applications.
Recently, mashup tools have emerged as popular end-user development platforms. Composition languages used in mashup tools provide ways (e.g., drag-and-drop-based visual metaphors for programming) to integrate data from multiple data sources in order to develop situational applications. However, this integration task often requires substantial technical expertise from developers to use the basic composition blocks properly in their composition logic. Reusing existing composition knowledge is one possible way to ease the mashup development process. Such reusable composition knowledge can be harvested from composition patterns that occur frequently in previously developed mashups. In order to understand composition patterns in mashups, particularly in data-flow-based mashups, in this paper we analyze the composition language used by one of the most popular data-flow-based mashup tools, Yahoo! Pipes. Based on our analysis, we identify six composition patterns, which represent the most commonly used composition steps during mashup application development. To demonstrate the generality of the identified patterns across data-flow-based mashup composition languages, we further show the applicability of our composition patterns in several other popular data-flow-based mashup tools.
Excellence or quality is often regarded as the holy grail of science, and it is the main goal driving scientists to pursue research that will influence the direction of their fields. This excellence, however, has no standard definition and varies across disciplines, and even from person to person, making it difficult to evaluate research and the reputation of researchers. In this work, we introduce the general problem of studying the nature of reputation in the context of computer science, providing preliminary results on its relation to bibliometric indicators and hints for future experiments that will foster a better understanding of reputation in the scientific domain.
A personal health and lifestyle platform helps individuals maintain a personal health and lifestyle record and obtain personalized, lifestyle-related advice to improve their health by changing their daily habits. The platform leverages the data stored in a typical personal health record, augmenting it with environmental and sensor data and enabling the monitoring and analysis of an individual's habits. Sharing habits and advice with doctors and friends empowers individuals to become wellness coproducers and leads to a personal health and lifestyle record that is much more useful to the individual maintaining it.
In this paper, we summarize our experience and first results achieved in the context of advanced research evaluation. Striving for research metrics that effectively allow us to predict real opinions about researchers in a variety of scenarios, we conducted two experiments to understand the respective suitability of common indicators, such as the h-index. We concluded that realistic research evaluation is more complex than assumed by those indicators and, hence, may require the specification of even complex evaluation algorithms. While the reconstruction (or reverse engineering) of those algorithms from publicly available data is one of our research goals, in this paper we show how we can enable users to develop their own algorithms with ResEval, our mashup-based research evaluation platform, and how doing so requires dealing with a variety of data management issues that are specific to the domain of research evaluation. Therefore, we also present the main concepts and model of our data access and management solution, the Scientific Resource Space (SRS).
Despite several years of mashup practice and research, it is still hard to find high-quality, useful mashups on the Web. While this can be partly ascribed to the low quality of the components used in the mashups or simply to the lack of suitable components, in this paper we argue that this is partly also due to the lack of suitable quality models for mashups themselves, helping developers to focus on the key aspects that affect mashup quality. Although apparently easy, we show that - if taken seriously - mashup development can be non-trivial and that it deserves an investigation that specializes current web quality assessment techniques, which are not able to cater for the specifics of mashups. In fact, we believe a mashup-specific quality model is needed.
One of the key innovations introduced by web mashups into the integration landscape (which basically focuses on data and application integration) is integration at the UI layer. Yet, despite several years of mashup research, no commonly agreed-upon component technology for UIs has emerged so far. We believe W3C’s widgets are a good starting point for componentizing UIs and a good candidate for reaching such an agreement. Recognizing, however, their shortcomings in terms of inter-widget communication - a crucial ingredient in the development of interactive mashups - in this paper we (i) first discuss the nature of UI mashups and then (ii) propose an extension of the widget model that aims at supporting a variety of inter-widget communication patterns.
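As an illustration of the kind of communication pattern such an extension targets, here is a minimal topic-based publish/subscribe sketch (in Python for readability, although real inter-widget messaging runs in the browser; widget and topic names are invented):

    # A minimal topic-based event bus: one widget publishes, others react.
    class EventBus:
        def __init__(self):
            self.subscribers = {}

        def subscribe(self, topic, handler):
            self.subscribers.setdefault(topic, []).append(handler)

        def publish(self, topic, payload):
            for handler in self.subscribers.get(topic, []):
                handler(payload)

    bus = EventBus()
    # A map widget reacts to selections made in a list widget.
    bus.subscribe("item-selected", lambda item: print("map centers on:", item))
    bus.publish("item-selected", {"city": "Trento"})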
This paper presents a requirements elicitation study for an EUD tool for composing service-based applications. WIRE aims at enabling EUD by harvesting and recommending community composition knowledge (the wisdom), thus facilitating knowledge transfer from developers to end-users. The idea was evaluated through 10 contextual interviews with accountants, eliciting a rich set of information that can lead to requirements for Wisdom-Aware EUD.
The development of modern Web 2.0 applications is increasingly characterized by the involvement of end users with typically limited programming skills. In particular, an emerging practice is the development of web mashups, i.e., applications based on the composition of contents and functions that are accessible via the Web. In this article, we try to explain the ingredients that are needed for end users to become mashup developers, namely adequate mashup tools and lightweight development processes, leveraging the users' capability to innovate. We also describe our own solution, the DashMash platform, an example of an end-user-oriented mashup platform that tries to fill the gaps that typically prevent end users from fully exploiting the mashup potential as an innovation instrument. DashMash offers an intelligible, easy-to-use composition paradigm that enables even inexperienced users to compose their own mashups. As confirmed by a user-centric experiment, its paradigm is effective and increases the satisfaction of end users.
WSNs are expected to play a critical role in the next computing revolution, as depicted in the visions of Cooperating Objects and the Internet of Things. However, designing and developing WSN software is currently very difficult. This may prevent WSNs from reaching large-scale adoption, especially in industry. The makeSense project aims at enabling an easier integration of WSNs in business processes, by allowing business process experts and WSN developers to express the high-level functionality required, while leaving low-level details to the compiler and run-time system. We envision the results of makeSense to be not only a landmark for WSN software development, but also a new way to look at WSN programming that increases productivity and business value, enabling a far-reaching adoption in key industrial domains.
We propose to enable and facilitate the development of service-based applications by exploiting community composition knowledge, i.e., knowledge that can be harvested from existing, successful mashups or service compositions defined by other, possibly more skilled developers (the community or crowd) in the same domain. Such knowledge can be used to assist less skilled developers in defining the composition they need, allowing them to go beyond their individual capabilities. The assistance comes in the form of interactive advice, as we aim at supporting developers while they are defining their composition logic, and it adjusts to the skill level of the developer. In this paper we specifically focus on the case of process-oriented, mashup-like applications, yet the proposed concepts and approach can be generalized and also applied to generic algorithms and procedures.
Over the last few years, the mashup community has grown significantly, and mashup development has matured substantially compared to the initial hacking practices. Mashups as applications have specialized into data mashups, service mashups, or user interface mashups - although these terms lack a common agreement on definitions - while other types of mashups can still be identified. In fact, recently the term process mashup emerged, yet, again, its meaning is everything but clear.
Intrigued by this latter idea, in this paper we try to understand what process mashups are. We identify three dimensions that distinguish process mashups from most of the current types of mashups, and we show that exploring them leads to a set of new types of mashups, which are the actual basis for the development of process mashups. For each of these new types of mashups, we provide a characterization, discuss suitable application scenarios, and show tool support, so as to highlight challenges and open issues.
The concept of application composition at the presentation layer, i.e., the development of web applications with user interfaces (UI) starting from stand-alone, reusable components, is a relatively new research area. The recent advent of web mashups and component-based web applications has produced promising results, but we argue that there is still a lot of space for improvement. By looking at three advanced approaches in this area, we investigate the current solution space and consequently unveil challenges and problems still to be solved in order to turn presentation composition into common practice.
As the web continues to change the way we produce and disseminate scientific knowledge, traditional digital libraries are confronted with the challenge of transcending their boundaries to remain compatible with a world where the whole Web in itself is the source of scientific knowledge. This paper discusses a resource-oriented approach to the management of and interaction with scientific services as a way to face this challenge. Our approach consists of building a general-purpose, extensible layer for accessing any resource that has a URI and is accessible on the Web, along with appropriate extensions specific to the scientific domain. We name the class of systems that provide this functionality Scientific Resource Space Management Systems (sRSMS), since they are the resource analogue of the data space management systems known in the literature.
In this extended abstract we describe our ideas, preliminary results, and ongoing work on a mashup platform for research evaluation. The evaluation of researchers and research artifacts (e.g., a scientific paper) is important for a variety of reasons, ranging from hiring and promotion (for people) to selection of contributions (for journals or conferences) and to searching for interesting content within an ocean of scientific knowledge. However, there is little consensus today on how research evaluation should be done, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory.
It is important for companies to ensure that their businesses conform to relevant policies, laws, and regulations, as the consequences of infringement can be serious. Unfortunately, the divergence and frequent changes of different compliance sources make it hard to systematically and quickly accommodate new compliance requirements, due to the lack of an adequate methodology for system and compliance engineering. In addition, the differences in perception and expertise of the multiple stakeholders involved in system and compliance engineering further complicate the analysis, implementation, and assessment of compliance. For these reasons, in many cases, business compliance today is reached on a per-case basis by using ad hoc, hand-crafted solutions for the specific rules with which they must comply. In the long run, this leads to problems regarding the complexity, understandability, and maintainability of compliance concerns in an SOA. To address the aforementioned challenges, we present in this invited paper a comprehensive SOA business compliance software framework that enables a business to express, implement, monitor, and govern compliance concerns.
Traditionally, workflow management systems aim at alleviating people's burden of coordinating repetitive business procedures, i.e., they coordinate people. Web service orchestration approaches, instead, coordinate pieces of software (the web services), hiding the human aspects that are intrinsically present in any business process behind the services. The recent emergence of technologies like BPEL4People and WS-HumanTask, which introduce human actors into service compositions, shows, however, that taking into account the people involved in business processes is important. Yet, none of these approaches allows one to also develop the user interfaces (UIs) users need to concretely participate in a business process.
With this paper, we want to go one step beyond state-of-the-art workflow management and service composition and propose an original model, language and running system for the composition of distributed UIs, an approach that allows us to bring together UIs, web services and people in a single orchestration logic and tool. To demonstrate the effectiveness of the idea, we apply the approach to a real-world home assistance scenario.
This demo introduces the idea of distributed orchestration of user interfaces (UIs), an application development approach that allows us to easily bring together UIs, web services and people in a single orchestration logic, language, and tool. The tool is called MarcoFlow, and it covers three main phases of the software development lifecycle: design (by means of a dedicated, visual editor), deployment (by means of a set of code generators), and execution (by means of a distributed runtime environment for UI orchestrations). We showcase the benefits of MarcoFlow in each of the phases by developing and running a practical and expressive application for the management of home assistance and by explaining, for each phase, what the challenges, intuitions, and solutions are. The demo targets the development of mashup-like applications that require (distributed) process support and, hence, targets researchers and practitioners interested in mashups, lightweight process design, web services, and innovative (and free) ways of providing process support.
Business process management, service-oriented architectures, and software back-engineering heavily rely on the mining of processes and web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, with no a priori information on the target model. This paper presents an approach for correlating messages and extracting the business protocol of a web service in the realistic scenario in which correlation information is entirely absent from interaction and activity logs. Correlation is achieved through deterministic computations that result in an extremely efficient method whose extensive experimental evaluation has shown solid reliability, robustness when dealing with complex structures, and very high performance and scalability. The approach and the underlying algorithms extend what is currently possible in the web service business protocol mining domain using incomplete and noisy data logs, and open new horizons in the back-engineering of web services. The theoretical and experimental results clearly show the leap forward achieved herein.
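A highly simplified sketch of the correlation idea (the log schema is invented, and the candidate correlation attribute is fixed by hand here, whereas the actual approach discovers and validates such attributes): messages sharing a value on a candidate attribute are grouped into conversations.

    # Group messages into conversations via a shared attribute value when no
    # explicit correlation ID exists; the attribute choice is hard-coded here.
    from collections import defaultdict

    log = [
        {"op": "createOrder", "orderRef": "A1", "ts": 1},
        {"op": "createOrder", "orderRef": "B7", "ts": 2},
        {"op": "payOrder",    "orderRef": "A1", "ts": 3},
        {"op": "shipOrder",   "orderRef": "A1", "ts": 4},
    ]

    conversations = defaultdict(list)
    for msg in sorted(log, key=lambda m: m["ts"]):
        conversations[msg["orderRef"]].append(msg["op"])

    for ref, ops in conversations.items():  # each conversation = one protocol path
        print(ref, "->", " -> ".join(ops))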
This book constitutes the thoroughly refereed post-conference proceedings of the workshops held at the 10th International Conference on Web Engineering, ICWE 2010, in Vienna, Austria, in July 2010. The 60 revised full papers presented were carefully reviewed and selected from over 100 submissions made to 9 international workshops held in cooperation with the ICWE 2010 main conference. Those 9 workshops were selected from 16 proposals and encompassed: MDWE 2010, the 6th model-driven Web engineering workshop; QWE 2010, the first international workshop on quality in Web engineering; SWIM 2010, the second international workshop on semantic Web information management; SWEng 2010, the first international workshop on service Web engineering; ESW 2010, the first workshop on engineering SOA and the Web; ComposableWeb 2010, the second international workshop on lightweight composition on the Web; EC 2010, the first international workshop on enterprise crowdsourcing; TouchTheWeb 2010, the first international workshop on Web-enabled objects; and WEBTOUR 2010, the first international workshop on Web engineering and tourism.
Automatically monitoring and enforcing compliance of service-based business processes with laws, regulations, standards, contracts, or policies is a hot issue in both industry and research. Little attention has however been paid to the problem of understanding non-compliance and improving business practices to prevent non-compliance in the future, a task that typically still requires human interpretation and intervention. Building upon work on the automated detection of non-compliant situations, in this paper we propose a technique for the root-cause analysis of encountered problems and for the prediction of likely compliance states of running processes that leverages (i) event-based service infrastructures, in order to collect execution evidence, and (ii) the concept of key compliance indicators, in order to focus the analysis on the right data. We validate our ideas and algorithms on real data from an internal process of a hospital.
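A small sketch of what a key compliance indicator could look like when computed from execution evidence (the event schema and the compliance rule below are invented for illustration):

    # A toy key compliance indicator (KCI): the fraction of process instances
    # satisfying an invented rule ("report filed within 2 days of discharge").
    instances = [
        {"id": 1, "discharge_day": 3, "report_day": 4},
        {"id": 2, "discharge_day": 5, "report_day": 9},
        {"id": 3, "discharge_day": 6, "report_day": 7},
    ]

    def compliant(inst, max_delay=2):
        return inst["report_day"] - inst["discharge_day"] <= max_delay

    kci = sum(compliant(i) for i in instances) / len(instances)
    print(f"KCI (timely reporting): {kci:.0%}")  # 67% -> drill down on instance 2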
Modern Web 2.0 applications are characterized by a high user involvement: users are supported in the creation of content and annotations, but also in the "composition" of applications starting from content and functions that are provided by third parties. This last phenomenon is known as web mashups and is gaining popularity even with users having little programming skills, raising a set of peculiar information quality issues. Assessing the quality of a mashup, especially of the information it provides to its users, requires understanding how the mashup has been developed, what its components look like, and how quality propagates from basic components to the final mashup application – three aspects we investigate in this article.
Decision support systems and, in particular, business intelligence techniques are widely used by enterprises for monitoring and analyzing operations to understand in which aspects the business is not performing well and even how to improve it. These tools provide valuable results in the context of single departments and business processes, while they are often not suitable in scenarios driven by web-enabled intercompany cooperation and IT outsourcing. In such contexts, the adoption of service-oriented company IT architectures and the use of external web services may prevent the comprehensive view over a distributed business process and raise doubts about the reliability of computed outputs. We analyze how these scenarios impact on information quality in business intelligence applications and lead to non-trivial research challenges. We propose the notions of uncertain events and uncertain key indicators, a model to express and store uncertainty, and a tool to compute with and visualize uncertainty.
An increasingly common practice in the Web 2.0 scenario is the development of mashups, Web applications based on the composition of open content and services. In this article we illustrate how, by means of lightweight development tools and processes, even less experienced users can take on the role of mashup developers. We highlight the capability of mashups to leverage the innovation potential of Web users and the resulting need for renewed development methods. We then outline the major issues that must be addressed for such new methods to become applicable.
As service-based Internet systems get increasingly complex, they become harder to manage at design time as well as at runtime. Nowadays, many systems are described in terms of precisely specified models, e.g., in the context of model-driven development. By making the information in these models accessible at runtime, we provide better means for analyzing and monitoring service-based systems. We propose a model-aware repository and service environment (MORSE) to support model access and evolution at both design time and runtime. MORSE focuses on enabling the monitoring, interpretation, and analysis of this information. In an industrial case study, we demonstrate how compliance monitoring can benefit from MORSE to monitor violations at runtime and how MORSE can ease the root cause analysis of such violations. Performance and scalability evaluations show the applicability of our approach for the intended use cases and that models can be retrieved during execution at low cost.
While mashups may integrate data, application logic, and user interfaces sourced from the Web into a new web application – a highly intricate and complex task – they typically come in the form of simple applications (e.g., composed of only one web page) for individual users. In this chapter, we introduce the idea of distributed user interface orchestration, a mashup-like development paradigm that, in addition to the above features, also provides support for the coordination of multiple users inside one shared application or process. We describe the concepts and models underlying the approach and introduce the MarcoFlow system, a platform for the assisted development of distributed user interface orchestrations. As a concrete development example, we show how the system can be profitably used for the development of an advanced, collaborative search application.
The paper discusses some trends in end user programming (EUP) and takes inspiration from the discussions in a panel and in a vertical session on research evaluation within the second Search Computing workshop. We discuss the controversial successes and failures in this field and elaborate on which facilities could foster the adoption of end user programming. We discuss various dimensions of end user programming, including vertical versus horizontal language definition and declarative versus imperative approaches. We exemplify our discussion in the realistic scenario of research evaluation by comparing the Search Computing and ResEval approaches.
This book describes a methodological, model-driven approach to context-awareness and adaptivity in the field of Web Engineering and is based on the dissertation "Model-Driven Design of Context-Aware Web Applications" by the author. The described modeling approach is one of the first attempts to enlarge the applicability of adaptive application features in the Web from so-called "adaptive hypermedia systems" to "context-aware Web applications". While the former typically are based on a user model that is dynamically updated based on the observation of the user's navigation actions, the latter are based on a more complex context model and active, context-triggered application features. The research described in this book is applied in the context of WebML (the Web Modeling Language), yet its contribution is general in nature and can be ported to other modeling languages as well.
Active applications are characterized by the need for expressing, evaluating, and maintaining a set of rules that implement the application's active behavior. Typically, rules follow the Event-Condition-Action (ECA) paradigm, yet oftentimes their actual implementation is buried in the application code, as their enactment requires a tight integration with the concepts and modules of the application. This chapter proposes a rule management system that allows developers to easily expand its rule processing logic with such concepts and modules and, hence, to decouple the management of their active rules from the application code. This system derives from an exception manager that has previously been developed in the context of an industry-scale workflow management system and effectively allows developers to separate active and non-active design concerns.
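To make the ECA paradigm concrete, here is a minimal sketch in Python of such an extensible rule engine; the names (Rule, RuleEngine, on_event) and the escalation example are illustrative assumptions of mine, not the actual interface of the system described above.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal Event-Condition-Action (ECA) rule: the engine stays generic,
# while applications plug in their own event types, conditions, and actions.
@dataclass
class Rule:
    event_type: str                    # Event: what triggers the rule
    condition: Callable[[dict], bool]  # Condition: guard over the event data
    action: Callable[[dict], None]     # Action: reaction to enact

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def on_event(self, event_type: str, payload: dict) -> None:
        # Evaluate every rule subscribed to this event type.
        for rule in self.rules:
            if rule.event_type == event_type and rule.condition(payload):
                rule.action(payload)

# Hypothetical exception-handling rule kept out of the application code.
engine = RuleEngine()
engine.register(Rule(
    event_type="task_overdue",
    condition=lambda e: e["delay_hours"] > 24,
    action=lambda e: print(f"Escalating task {e['task_id']} to supervisor"),
))
engine.on_event("task_overdue", {"task_id": "T-17", "delay_hours": 36})
```

The point of the design is the decoupling: the application only emits events, while the active behavior lives entirely in the registered rules.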
In this demonstration we introduce Gelee, our online platform for the hosted specification and cooperative execution of lifecycles of artifacts of any kind. With Gelee we aim at filling two gaps we identify in current cooperative software systems when it comes to unstructured, artifact-based work (e.g., the writing of a project deliverable): the lack of state and the complete lack of automated actions. Lifecycles allow us to model the state of any object, and if we focus on online resources (e.g., a Google Doc) then we can also automate some lifecycle actions. If we apply Gelee to composite artifacts, e.g., a set of web services, lifecycles provide for the human-driven orchestration of services.
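As a rough illustration of the lifecycle idea, the following Python sketch models an artifact's states, transitions, and automated actions; the deliverable lifecycle and its actions are hypothetical examples and do not reproduce Gelee's actual model.

```python
# A minimal artifact lifecycle: named states, allowed transitions, and
# optional automated actions fired when a transition is taken.
class Lifecycle:
    def __init__(self, initial: str, transitions: dict):
        self.state = initial
        self.transitions = transitions  # {(from_state, event): (to_state, action)}

    def fire(self, event: str) -> None:
        to_state, action = self.transitions[(self.state, event)]
        self.state = to_state
        if action:
            action()  # automated action attached to the transition

# Hypothetical lifecycle of a project deliverable.
deliverable = Lifecycle("drafting", {
    ("drafting", "submit"): ("under_review", lambda: print("notify reviewers")),
    ("under_review", "approve"): ("final", lambda: print("archive document")),
})
deliverable.fire("submit")   # state becomes under_review, reviewers notified
deliverable.fire("approve")  # state becomes final, document archived
```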
Traditional integration practices like Enterprise Application Integration and Enterprise Information Integration approaches typically focus on the application layer and the data layer in software systems, i.e., on limited and specific development aspects. Current web mashup practices, instead, show that there is also a concrete need for (i) integration at the presentation layer and (ii) integration approaches that reconcile all three layers together. In this demonstration, we show how our mashArt approach addresses these challenges and provides skilled web users with universal integration in a hosted fashion.
Assessing whether a company's business practices conform to laws and regulations and follow standards and best practices, i.e., compliance governance, is a complex and costly task. Few software tools aiding compliance governance exist; however, they typically do not really address the needs of those who are actually in charge of assessing and controlling compliance, that is, compliance experts and auditors. We advocate the use of compliance governance dashboards, whose design and implementation are however challenging for at least three reasons: (i) it is fundamental to identify the right level of abstraction for the information to be shown; (ii) it is not trivial to visualize different analysis perspectives; and (iii) it is difficult to manage the large amount of involved concepts, instruments, and data. This paper shows how to address these issues, which concepts and models underlie the problem, and, eventually, how IT can effectively support compliance analysis in Service-Oriented Architectures.
Information integration, application integration and component-based software development have been among the most important research areas for decades. The last years have been characterized by a particular focus on web services, the very recent years by the advent of web mashups, a new and user-centric form of integration on the Web. However, while service composition approaches lack support for user interfaces, web mashups still lack well engineered development approaches and mature technological foundations.
In this paper, we aim to overcome both these shortcomings and propose what we call a universal composition approach that naturally brings together data and application services with user interfaces. We propose a unified component model and a universal, event-based composition model, both able to abstract from low-level implementation details and technology specifics. Via the mashArt platform, we then provide universal composition as a service in the form of an easy-to-use graphical development tool equipped with an execution environment for fast deployment and execution of composite Web applications.
Key indicators, such as key performance indicators or key compliance indicators, are at the heart of modern business intelligence applications. Key indicators are metrics, i.e., numbers, that help an organization measure and assess how successful it is in reaching predefined goals (e.g., lowering process execution times or increasing compliance with regulations), and typically the people looking at them simply trust the values they see when taking decisions. However, it is important to recognize that in real business environments we cannot always rely on fully trusted or certain data, yet indicators still have to be computed.
In this paper, we tackle the problem of computing uncertain indicators from uncertain data, we characterize the problem in a modern business scenario (combining techniques from uncertain and probabilistic data management), and we describe how we addressed and implemented the problem in a European research project.
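To give a flavor of the problem, here is a toy Python sketch under a simplifying assumption of our own: each uncertain input is a small discrete probability distribution, so an indicator computed from such data is itself a distribution rather than a single trusted number. The data and the "average duration" indicator are invented for illustration.

```python
import itertools

# Each uncertain event duration is a discrete distribution: {value: probability}.
# The indicator "average process execution time" then becomes a distribution too.
durations = [
    {10: 0.8, 15: 0.2},  # step A: probably 10h, possibly 15h
    {5: 0.5, 8: 0.5},    # step B: equally likely 5h or 8h
]

indicator = {}  # possible indicator values with their probabilities
for combo in itertools.product(*(d.items() for d in durations)):
    values, probs = zip(*combo)
    avg = sum(values) / len(values)
    p = 1.0
    for pr in probs:
        p *= pr
    indicator[avg] = indicator.get(avg, 0.0) + p

for value, prob in sorted(indicator.items()):
    print(f"avg duration {value:.1f}h with probability {prob:.2f}")
```

Instead of a single "average is 7.5h", the analyst sees that 7.5h holds only with probability 0.4, which is exactly the kind of uncertainty information the paper argues should be surfaced.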
Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance.
The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral model. Important problem areas inherent to the Web, like localization, personalization, accessibility, and usage analysis, are dealt with in detail, and a final chapter provides both a description of and an outlook on recent Semantic Web and Web 2.0 developments.
Overall, their book delivers a comprehensive presentation of the state-of-the-art in Web application development and thus forms an ideal basis for academic or industrial courses in this or related areas. It is equally suitable for self-study by researchers or advanced professionals who require an overview on how to use up-to-date Web technologies.
Sometimes it looks like development for Web 2.0 is completely detached from the "traditional" world of web engineering. It is true that Web 2.0 introduced new and powerful instruments such as tags, microformats, RESTful services, and light-weight programming models, which ease web development. However, it is also true that they didn't really replace conventional practices such as component-based development and conceptual modeling.
Traditional web engineering is still needed, especially when it comes to developing components for mashups, i.e., components such as web services or UI components that are meant to be combined, possibly by web users who are not skilled programmers. We argue that mashup components do not substantially differ from common web applications and that, hence, they might benefit from traditional web engineering methods and instruments. As a bridge toward Web 2.0, in this paper we show how, thanks to the adoption of suitable models and abstractions, generic web applications can comfortably be turned into mashup components.
Through web mashups, web designers with even little programming skills have the opportunity to develop advanced applications by leveraging components accessible over the Web and offered by a multitude of service providers. So far, however, component selection has been based on functional requirements only, without considering the quality of the components and that of the final mashup. The quality in this context results from different factors, such as the software API, the contents, and the user interface.
In the literature, quality criteria for the different aspects have been proposed and analyzed, but the adaptability and dynamicity that characterize the mashup ecosystem require a separate and focused analysis. In this paper, we analyze the quality properties of mashup components (APIs), the building blocks of any mashup application, and define a quality model, which we claim represents a valuable instrument in the hands of both component developers and mashup composers.
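The kind of assessment such a quality model supports can be sketched as a weighted aggregation of per-dimension scores; the dimensions, scores, and weights below are invented for illustration and are not the model proposed in the paper.

```python
# Hypothetical per-dimension quality scores for a component (API), in [0, 1].
component_scores = {"api_design": 0.9, "content_accuracy": 0.7, "ui_usability": 0.6}

# Weights reflect how much each dimension matters to the mashup composer.
weights = {"api_design": 0.5, "content_accuracy": 0.3, "ui_usability": 0.2}

# Overall quality as the weighted sum over the dimensions.
quality = sum(component_scores[d] * weights[d] for d in weights)
print(f"overall component quality: {quality:.2f}")  # 0.78
```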
Mashups, i.e., web applications that are developed by integrating data, application logic, and user interfaces sourced from the Web, represent one of the innovations that characterize Web 2.0. Novel content wrapping technologies, the availability of so-called web APIs (e.g., web services), and the increasing sophistication of mashup tools allow also the less skilled programmer (or even the average web user) to compose personal applications on the Web. In many cases, such applications also feature search capabilities, achieved by explicitly integrating search services, such as Google or Yahoo!, into the overall logic of the composite application.
In this chapter, we first overview the state of the art in mashup development by looking at which technologies a mashup developer should master and which instruments exist that facilitate the overall development process. Then we specifically focus on our own mashup platform, mashArt, and discuss its approach to what we call universal integration, i.e., integration at the data, application, and user interface layer inside one and the same mashup environment. To better explain the novel ideas of the platform and its value in the context of search computing, we discuss an example inspired by the idea of search computing.
Governing business compliance with regulations, laws, best practices, contracts, and the like is not an easy task, and so far there are only limited software products available that help a company express compliance rules and analyze its compliance state. We argue that today's SOA-based way of implementing and conducting business (e.g., using web services and business process engines) lends itself very well to the development of a comprehensive compliance governance solution that effectively aids companies in being compliant. In this paper, we contextualize the compliance problem in SOA-based businesses, we highlight the most salient research challenges that need to be addressed, and we describe our approach to compliance governance, spanning design, execution, and evaluation concerns.
While business process definition is becoming more and more popular as an instrument for describing human activities, there is a growing need for software tools supporting business process abstractions to help users organize and monitor their desktop work. Tools are most effective when they embed some knowledge about the process, e.g. in terms of the typical activities required by the process, so that users can execute the activities without having to define them. Tools must be light-weight and flexible, so as to enable users to create or change the process as soon as there is a new need.
In this article, we first describe an application-independent approach to flexible process support by discussing the abstractions required for modeling, creating, enacting, and modifying flexible processes. Then, we show our approach at work in the context of project-centered learning. In this application, learners are challenged to perform concrete tasks in order to master specific subjects; in doing so, they have to conduct significant projects and cope with realistic (or even real-life) working conditions and scenarios. Often, students are geographically dispersed or under severe timing constraints, because these activities intertwine with their normal university activity. As a result, they need communication technology in order to interact and workflow technology in order to organize their work. The developed platform provides a comprehensible, e-learning-specific set of activities and process templates, which can be combined through a simple Web interface into project-centered collaboration processes. We discuss how the general paradigm of flexible processes was adapted to the learning concept, implemented, and experienced by students.
Adaptivity (the runtime adaptation to user profile data) and context-awareness (the runtime adaptation to generic context data) have been gaining momentum in the field of Web engineering over the last years, especially in response to the ever growing demand for highly personalized services and applications coming from end users. Developing context-aware and adaptive Web applications requires addressing a few design concerns that are specific to this kind of application and independent of the chosen modeling paradigm or programming language. In this chapter we characterize the design of context-aware Web applications, we describe a conceptual, model-driven development approach, and we show how the peculiarities of context-awareness require augmenting the expressive power of conceptual models in order to be able to express adaptive application behaviors.
Business Intelligence (BI) solutions commonly aim at assisting decision-making processes by providing a comprehensive view over a company's core business data and suitable abstractions thereof. Decision-making based on BI solutions therefore builds on the assumption that providing users with targeted, problem-specific fact data enables them to make informed and, hence, better decisions in their everyday businesses. In order to really provide users with all the necessary details to make informed decisions, we however believe that - in addition to conventional reports - it is essential to also provide users with information about the quality, i.e. with quality metadata, regarding the data from which reports are generated. Identifying a lack of support for quality metadata management in conventional BI solutions, in this paper we propose the idea of quality-aware reports and a possible architecture for quality-aware BI able to involve the users themselves into the quality metadata management process by soliciting and exploiting user feedback.
Introduction to special issue on Adaptation and Evolution in Web Systems Engineering
Mashup languages, such as Yahoo! Pipes, are offering new graphical interfaces for service composition; as their popularity increases, a growing number of Internet users acquire the ability to use these new languages and paradigms. Normally, composition is limited to simple services, such as RSS or Atom feeds, but visual mashup languages have the potential to be used for more complex service compositions, with typed parameters and well-defined input-output interfaces.
We specifically distinguish between exact services and search services: exact services return a given number of equally valuable answers, while search services return a variable number of ranked answers. Thus, the composition of search services introduces new issues, such as determining the optimal sequence of search invocations, or composing separately ranked entries into a globally ranked result. We argue that enabling end users to mash up services through suitable abstractions and tools is a viable option for improving service-based computations.
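One of the issues named above, composing separately ranked entries into a globally ranked result, can be sketched as follows; the product-based scoring and the sample data are illustrative assumptions, not the paper's actual ranking scheme.

```python
# Two search services return ranked entries with local scores in [0, 1].
hotels = [("H1", 0.9), ("H2", 0.6)]   # ranked hotel results
flights = [("F1", 0.8), ("F2", 0.7)]  # ranked flight results

# Combine every pair and rank globally by a simple aggregate score
# (here the product of local scores; a weighted sum would also work).
combined = [
    ((h, f), h_score * f_score)
    for h, h_score in hotels
    for f, f_score in flights
]
for (h, f), score in sorted(combined, key=lambda x: -x[1]):
    print(h, f, round(score, 2))
```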
Web mashups, i.e., web applications that are developed starting from contents and services available online, have fascinated a rapidly growing number of developers over the last two years. Still, comprehensive development tools or frameworks are lacking, and in most cases mashing up a new application implies a significant manual programming effort. In this paper, we provide an overview of current tools, frameworks, and trends that aim at facilitating mashup development. We identify a set of characteristic dimensions, which we then use to highlight the strengths and weaknesses of a few representative approaches, thereby providing some insights into the nature of mashups.
Event monitoring and active behaviors are important aspects in many software systems and application domains, not only in database management systems. In this paper, we propose an Event-Condition-Action (ECA) approach that spans from application data to application components and behaviors. Starting from an exception manager we previously developed in the context of a workflow management system, we derived an autonomous active component capable of handling a variety of events and of enacting actions in response to detected events. The ECA server runs as an autonomous engine and can be seamlessly integrated with existing systems, thus enhancing the system's functionality and maintainability by separating active and non-active design concerns.
Context-awareness and adaptivity in web applications have been gaining momentum in web engineering over the last years, and it is nowadays recognized that, more than a mere technology aspect, they represent a first-class design concern. This acknowledgment has led to a revision of existing design methods and languages, finally resulting in runtime adaptation being considered a cross-cutting aspect throughout the whole development process. In this paper, we propose a radically new view on context-awareness and show how well-done component-based development may allow the fast mashup of context-aware and adaptive web applications. The proposed approach comes with an intuitive graphical development environment, which will finally enable even end users themselves to mash up their adaptive applications.
Modern Web development is more and more moving towards the production of full-fledged, complex Web applications, possibly equipped with active and/or adaptive behaviors. On the one side, this evolution implies higher development costs and times; on the other side, such implications are contrasted by the dynamics of the modern Web, which demands even faster application development and evolution cycles.
In this paper we focus on the above problem, considering adaptive Web applications. We defined an Event-Condition-Action (ECA) rule-based approach aimed at facilitating the management and evolution of adaptive application features and we developed an engine, namely Bellerofonte, to process ECA rules. In our approach, we decouple the active logic (i.e. the adaptivity rules) from the execution of the actual application by means of a decoupled rule engine capturing events and autonomously enacting the adaptivity actions.
Web 2.0 is a technical term describing the trend in the use of World Wide Web technology and web design. The aims of Web 2.0 are to enhance creativity, information sharing and collaboration among users, rather than just retrieving information. For example, one of the most promising areas that has recently emerged in Web 2.0 is the development and evolution of Web-based communities and hosted services, such as social networking sites, wikis, blogs and folksonomies. They can build on the interactive facilities of “Web 1.0” to provide “Network as platform” computing, allowing users to run software or applications entirely through a browser. Users can own the data on a Web 2.0 site and exercise control over that data.
Where can I attend an interesting database workshop close to a sunny beach? Who are the strongest experts on service computing based upon their recent publication record and accepted European projects? Can I spend an April weekend in a city served by a low-cost direct flight from Milano offering a Mahler symphony? We regard the above queries as multi-domain queries, i.e., queries that can be answered by combining knowledge from two or more domains (such as: seaside locations, flights, publications, accepted projects, conference offerings, and so on). This information is available on the Web, but no general-purpose software system can accept the above queries, let alone compute the answer. At most, dedicated systems support specific multi-domain compositions (e.g., Google-local locates information such as restaurants and hotels upon geographic maps).
This paper presents an overall framework for multi-domain queries on the Web. We address the following problems: (a) expressing multi-domain queries with an abstract formalism; (b) separating the treatment of "search" services within the model, by highlighting their differences from "exact" Web services; (c) explaining how the same query can be mapped to multiple "query plans", i.e., well-defined schedulings of service invocations, possibly in parallel, which comply with their access limitations and preserve the ranking order in which search services return results; (d) introducing cross-domain joins as a first-class operation within plans; (e) evaluating the query plans against several cost metrics so as to choose the most promising one for execution. This framework adapts to a variety of application contexts, ranging from end-user-oriented mash-up scenarios up to complex application integration scenarios.
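As a toy rendering of such a query plan, the sketch below schedules one service after another because of an access limitation (a required bound input) and then performs a cross-domain join; the service names, binding pattern, and data are invented for illustration.

```python
# Toy "query plan": conferences() is freely invocable, while weather(city)
# has an access limitation (the city must be bound), so it is scheduled
# after the conference search and fed its output.
def conferences():  # search-like service: yields ranked results
    yield {"name": "DB Workshop", "city": "Nice"}
    yield {"name": "Web Conf", "city": "Oslo"}

def weather(city):  # exact service: one answer per bound input
    return {"Nice": "sunny", "Oslo": "rainy"}.get(city)

# Cross-domain join as a first-class operation within the plan.
for conf in conferences():
    w = weather(conf["city"])
    if w == "sunny":
        print(conf["name"], "in", conf["city"], "-", w)
```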
Business Intelligence (BI) solutions commonly aim at assisting decision-making processes by providing a comprehensive view over a company's core business data and suitable abstractions thereof. Decision-making based on BI solutions therefore builds on the assumption that providing users with targeted, problem-specific fact data enables them to make informed and, hence, better decisions in their everyday businesses. In order to really provide users with all the necessary details to make informed decisions, we however believe that, in addition to conventional reports, it is essential to also provide users with information about the quality, i.e. with quality metadata, regarding the data from which reports are generated. Identifying a lack of support for quality metadata management in conventional BI solutions, in this paper we propose the idea of quality-aware reports and a possible architecture for quality-aware BI, able to involve the users themselves into the quality metadata management process, by explicitly soliciting and exploiting user feedback.
In this paper we discuss the problem of engineering privacy requirements for business intelligence applications, i.e., of eliciting, modeling, testing, and auditing privacy requirements imposed by the source data owner on the business intelligence applications that use these data to compute reports for analysts. We describe the peculiar challenges of this problem, propose and evaluate different solutions for eliciting and modeling such requirements, and make the case in particular for what we experienced as being the most promising and realistic approach: eliciting and modeling privacy requirements on the reports themselves, rather than on the source or as part of the data warehouse.
The Service-Oriented Architecture (SOA) has become today's reference architecture for modern distributed systems. As SOA concepts and technologies become more and more widespread and the number of services in operation within enterprises increases, the problem of managing these services becomes manifest. One of the most pressing needs we hear from customers is the ability to "discover", within a maze of services each offering functionality to (and in turn using functionality offered by) other services, the actual dependencies between such services. Understanding dependencies is essential to performing two functions: impact analysis (understanding which other services are affected when a service becomes unavailable) and service-level root-cause analysis (the opposite problem: understanding the causes of a service failure by looking at the other services it relies on). Discovering dependencies is essential, as the hope that the enterprise maintains documentation describing these dependencies (on top of a complex maze of evolving implementations) is vain. Hence, we have to look for dependencies by observing and analyzing the interactions among services.
In this paper we identify the importance of the problem of discovering dynamic dependencies among Web services and we propose a solution for the automatic identification of traces of dependent messages, based on the correlation of messages exchanged among services. We also discuss our lessons learned and results from applying the techniques to data related to HP processes and services.
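A minimal sketch of the correlation idea follows: messages sharing an identifier value are grouped into a trace, and consecutive services within a trace suggest a dependency edge. The log format and the correlation key are assumptions of ours for illustration, not the actual technique's details.

```python
from collections import defaultdict

# Observed messages: (timestamp, service, correlation_value).
# We assume a shared identifier (e.g., an order ID) correlates messages.
log = [
    (1, "OrderService", "o42"),
    (2, "PaymentService", "o42"),
    (3, "OrderService", "o43"),
    (4, "ShippingService", "o42"),
    (5, "PaymentService", "o43"),
]

# Group messages into traces by correlation value, ordered by time.
traces = defaultdict(list)
for ts, service, key in sorted(log):
    traces[key].append(service)

# Consecutive services within a trace hint at a dependency edge.
dependencies = set()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dependencies.add((a, b))
print(dependencies)
# {('OrderService', 'PaymentService'), ('PaymentService', 'ShippingService')}
```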
This paper presents NGS, a framework providing fully automated support for cross-domain queries. In particular, NGS (a) integrates different kinds of services (search engines, web services, and wrapped web pages) into a global ontology, i.e., a unified view of the concepts supported by the available services, (b) covers query formulation aspects over the global ontology, and query rewriting in terms of the actual services, and (c) offers several optimization opportunities leveraging the characteristics of the different services at hand, based on several different cost metrics.
If we consider a query involving multiple domains, such as find all database conferences held within six months in locations whose seasonal average temperature is 28°C and for which a cheap travel solution exists, we note that (i) general-purpose search engines fail to answer multi-domain queries, and (ii) specific search services may cover one such domain, but no general integration framework is readily available. Currently, the only way to treat such cases is to separately query dedicated services and feed the result of one search as input to another, or to pairwise compare them by hand.
This paper presents NGS, a framework providing fully automated support for cross-domain queries. In particular, NGS (a) integrates different kinds of services (search engines, web services, and wrapped web pages) into a global ontology, i.e., a unified view of the concepts supported by the available services, (b) covers query formulation aspects over the global ontology, and query rewriting in terms of the actual services, and (c) offers several optimization opportunities leveraging the characteristics of the different services at hand, based on several different cost metrics.
The effort invested by the Workflow Management Coalition in the interchangeability of process definitions has led to the definition of the XPDL language, a commonly acknowledged XML format for process definition. While XPDL effectively enables the cross-product portability of process definitions, the language has not been designed to also capture undesired behaviors that may arise during process execution, i.e. exceptions. Nonetheless, exceptions - especially those that are predictable at process definition time - do have semantics that are not negligible. Our investigation of exception handling mechanisms in workflow management products has shown that a commonly accepted approach does not exist, and that, hence, a proposal for an exception-specific XPDL extension would probably not succeed. In this chapter, we describe our resulting idea of leveraging the products' very own extension mechanisms to enable cross-product exception definitions. The proposed approach operates at the conceptual level, formalizes exceptions in a fully XPDL-compliant fashion, and abstracts from product-specific details until process execution. Along with the description of our approach, we also discuss our experience with its effective portability across a few XPDL-compliant commercial and open-source workflow management products.
More and more Web users ask for contents and services highly tailored to their particular contexts of use. Especially due to the increasing affordability of new and powerful mobile communication devices, they also appreciate the availability of ubiquitous access, independent from the device actually in use. Due to such premises, traditional software design methods need to be extended, and new issues and requirements need to be addressed for supporting context-aware access to services and applications. In this paper we propose a model-driven approach towards adaptive, context-aware Web applications, accompanied by a general-purpose execution framework enabling active context-awareness. Whereas conventional adaptive hypermedia systems address the problem of adapting HTML pages in response to user-generated requests, in this work we especially stress the importance of user-independent, context-triggered adaptivity actions. This finally leads us to interpret the context as an active actor, operating independently from users during their navigations.
The Web service domain is a fast growing and fast changing environment. From a business perspective, the trend over the last few years in the Web services area firmly points toward seamless business logic integration and inter-enterprise collaboration. However, in order to accomplish such goals, both technological and conceptual advances are required. Some have already proved their viability, others still have to be made. Among them, Web service orchestration and choreography are of crucial importance, but still lack a widely agreed-upon development framework comprising both technological and conceptual aspects. In this chapter, we try to provide a critical snapshot of current standards for Web service development, focusing particularly on Web service orchestration and choreography. We discuss problems and solutions from a conceptual point of view, exemplify the illustrated ideas by means of real-world technologies and standards, and highlight the mutual dependencies that exist between orchestration and choreography of Web services.
This paper proposes a model-driven, extensible platform, delivered on the Web, which is able to support the long-distance collaboration of student teams working on complex projects. The main merit of this proposal is the ability to support end users in self-organizing processes, by using a simple Web interface and a library of activities that cover most of the needs arising in collaborative environments. In this way, students can organize processes in a flexible way, and at the same time their work is well organized and well understood by all team members. The underlying paradigm for the management of dynamic processes is very general and can be applied to application contexts other than e-learning, after understanding and modelling the relevant collaboration activities.
In this paper we identify the importance of the problem of discovering dynamic dependencies among Web services. The approach we take is to automate the identification of traces of dependent messages, based on the correlation of messages exchanged among services. We infer service dependencies based on the correlated message traces.
In this paper we present a development and runtime environment for creating composite applications by reusing existing presentation components. The granularity of components is that of stand-alone modules encapsulating reusable functionalities. The goal is to allow developers to easily create composite applications by combining the components' individual user interfaces.
The trend over the last years clearly shows that modern Web development is evolving from traditional, HTML-based Web sites to full-fledged, complex Web applications, also equipped with active and/or adaptive application features. While this evolution unavoidably implies higher development costs and times, such implications are contrasted by the dynamics of the modern Web, which demands even faster application development and evolution cycles.
In this paper we address the above problem by focusing on the case of adaptive Web applications. We illustrate an ECA rule-based approach, intended to facilitate the management and evolution of adaptive application features. For this purpose, we stress the importance of decoupling the active logic (i.e. the adaptivity rules) from the execution of the actual application by means of a decoupled rule engine that is able to capture events and to autonomously enact adaptivity actions.
The development of user interfaces (UIs) is one of the most time-consuming aspects in software development. In this context, the lack of proper reuse mechanisms for UIs is increasingly becoming manifest, especially as software development is more and more moving toward composite applications. In this paper we propose a framework for the integration of stand-alone modules or applications, where integration occurs at the presentation layer. Hence, the final goal is to reduce the effort required for UI development by maximizing reuse.
The design of the framework is inspired by lessons learned from application integration, appropriately modified to account for the specificity of the UI integration problem. We provide an abstract component model to specify characteristics and behaviors of presentation components and propose an event-based composition model to specify the composition logic. Components and composition are described by means of a simple XML-based language, which is interpreted by a runtime middleware for the execution of the resulting composite application. A proof-of-concept prototype allows us to show that the proposed component model can also easily be applied to existing presentation components, built with different languages and/or component technologies.
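The flavor of the event-based composition model can be conveyed with a tiny publish-subscribe broker that wires one component's event to another component's operation; the component, event, and operation names below are illustrative only, not the framework's actual language.

```python
# A tiny event broker: the composition logic is just a set of listeners
# wiring one component's events to another component's operations.
class EventBroker:
    def __init__(self):
        self.listeners = {}  # event name -> list of operations

    def subscribe(self, event, operation):
        self.listeners.setdefault(event, []).append(operation)

    def publish(self, event, data):
        for operation in self.listeners.get(event, []):
            operation(data)

broker = EventBroker()

def map_show_location(data):  # operation of a hypothetical map component
    print("Map centers on", data["city"])

# Composition: when the list component selects an item, the map reacts.
broker.subscribe("list.itemSelected", map_show_location)
broker.publish("list.itemSelected", {"city": "Trento"})
```

In the framework described above, such wirings are declared in the XML-based composition language and enacted by the runtime middleware rather than written by hand.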
The evolution of information technology in recent years has seen the World Wide Web transform from a read-only hypertext medium into a full-fledged, multi-channel and multi-service application delivery platform. As a consequence, there has been an evolution from simple, static Web sites to complex, data-intensive Web applications. As for the development of such Web applications, the described evolution demands appropriate development methods, able to cope with the growing complexity and the specific peculiarities of such new generations of Web applications. It is the field of Web Engineering that addresses this demand and that aims to develop systematic methodologies and solutions for an efficient development process for modern Web applications.
Also, with the advent of new and powerful mobile devices, the Web is addressing a continuously growing number of users and is more and more pervading our everyday life. In this regard, the need to improve the user's browsing experience, e.g., by adapting the application to user preferences and device characteristics, has become manifest. Personalization and adaptation to preferences and devices have already proved their benefits for both application providers and content or service consumers.
Similarly, context-awareness and more flexible adaptation mechanisms are increasingly becoming key factors in enhancing both the effectiveness and the efficiency of the Web applications of today and especially of tomorrow. "Context-awareness" is intended as the capability to take into account whatever properties or information characterize the interaction with the application, i.e. the context, and to react to changes that such properties or information may undergo during the use of the application. Reactions, i.e. application adaptations, are therefore no longer based solely on user preferences and device characteristics, but more generally on any property that characterizes the context of the interaction. Typical adaptations in Web applications are, for example, the adaptation of contents or hyperlinks, the execution of operations or services, or the adaptation of presentation or style properties.
In line with these considerations, this dissertation puts its focus on the development of context-aware and adaptive Web applications. In answer to the challenge faced by the Web Engineering field, this dissertation proposes a conceptual, model-driven method for the design of context-awareness and adaptivity in Web applications. The proposed method is achieved by extending an already established conceptual modeling language for Web application design, i.e. the Web Modeling Language (WebML), also providing for the automatic generation of the application code. The proposed design model reflects a conceptualization of problems and solutions deriving from the use of context-aware and/or adaptive features in the domain of the Web, thus representing a comprehensive instrument covering the main requirements in the design of context-aware Web applications.
This dissertation provides one of the first methodological approaches to context-awareness and adaptivity in the field of Web Engineering. More precisely, this dissertation is one of the first attempts to enlarge the applicability of adaptive application features in the Web from "adaptive hypermedia systems" to "context-aware Web applications". While the former typically are based on a user model that is dynamically updated based on the observation of the user's navigation actions, the latter may be based on a more complex context model and active, context-triggered application features. Although the research described in this dissertation is applied to the WebML method, its general nature also contributes to the advancement of the Web Engineering field in general.
The dissertation starts with an introduction to the background knowledge required for the comprehension of the outlined ideas, i.e. context-awareness, adaptability, adaptivity, and the Web Modeling Language (WebML). Inspired by a comparison of solutions provided by other Web modeling methods, the dissertation then gradually introduces the new concepts related to adaptivity and context-awareness into WebML, describes the implementation of the extended model in the context of the WebRatio CASE tool for the design of WebML applications, and discusses a modeling case study. Then, the dissertation shows how the achieved results have been exploited in different contexts and how we envision the work will evolve. The concluding chapter summarizes the presented work, discusses benefits and limitations, and outlines ongoing and future work.
Creating composite applications from reusable components is an important technique in software engineering and data management. A large body of research and development exists for integrations at the data or application level. However, little work has been done to facilitate integration at the presentation level. In this paper we discuss existing UI frameworks and component technologies used in presentation integration, illustrate their strengths and weaknesses, and present some opportunities for future work in the area.
Context-aware, multi-channel Web applications are more and more gaining consensus among both content providers and consumers, but very few proposals exist for their conceptual modeling. This article illustrates a conceptual framework that provides modeling facilities for context-aware, multi-channel Web applications; it also shows how high-level modeling constructs can drive the application development process through automatic code generation. Our work stresses the importance of user-independent, context-triggered adaptation actions, in which the context plays the role of a first-class actor, operating independently of users on the same hypertext the users navigate. Modeling concepts are based on WebML (Web Modeling Language), an already established conceptual model for data-intensive Web applications, which is also accompanied by a development method and a CASE tool. However, given their general validity, the concepts of this article shape up a complete framework that can be adopted independently of the chosen model, method, and tool.
Although the efforts of the Workflow Management Coalition (WfMC) led to the definition of a standard process definition language (XPDL), there is still no standard for the definition of expected exceptions in workflows. Yet the very few workflow management systems (WfMSs) capable of managing exceptions provide a proprietary exception handling unit, preventing workflow exception definitions from being portable from one system to another.
In this paper, we show how generic process definitions based on XPDL can be seamlessly enriched with standard-conform exception handling constructs, starting from a high-level event-condition-action language. We further introduce a suitable rule compiler, which yields portable process and exception definitions in a fully automated way.
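The compilation step can be pictured as mapping a high-level ECA exception rule onto the standard extension hooks of a process definition; both the rule syntax and the ExtendedAttribute target below are heavily simplified stand-ins of ours, not the actual compiler's output.

```python
# A high-level exception rule and a (heavily simplified) "compilation"
# into vendor-neutral extended attributes of an XPDL process definition.
rule = {
    "event": "deadline_expired",
    "condition": "activity == 'ApproveOrder'",
    "action": "notify_manager",
}

def compile_rule(rule: dict) -> str:
    # XPDL allows ExtendedAttributes, so the compiled exception definition
    # stays standard-conform and portable across XPDL-compliant products.
    return (
        '<ExtendedAttribute Name="ExceptionRule" '
        f'Value="ON {rule["event"]} IF {rule["condition"]} DO {rule["action"]}"/>'
    )

print(compile_rule(rule))
```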
Many current research efforts address the problem of personalizing the Web experience for each user with respect to the user's identity and/or context. In this paper we propose a new high-level model for the specification of Web applications that takes into account the manner in which users interact with the application for supplying appropriate contents or gathering profile data. We therefore consider entire behaviors (rather than single properties) as the smallest information units, allowing for automatic restructuring of application components. For this purpose, a high-level Event-Condition-Action (ECA) paradigm is proposed, which enables capturing arbitrary (and timed) clicking behaviors. Also, the architecture and components of a first prototype implementation are discussed.
In this paper we propose a framework for the design and development of adaptive Web applications. The framework leverages the integration of two well established approaches: a conceptual model, complemented with a CASE tool for automatic code generation, and a language for expressing ECA rules, supported by an engine for rule execution. Such an integration leads to a versatile and flexible adaptivity environment, whose advantage is twofold: on the one hand, conceptual modeling and automatic code generation support an efficient development process; on the other hand, a detached rule engine allows us to widen the set of adaptivity requirements that can be handled and to overcome some limitations of current modeling approaches.
Many current research efforts address the problem of personalizing the Web experience for each user with respect to the user's identity and/or context. In this paper we propose a new high-level model for the specification of Web applications that takes into account the manner in which users interact with the application for supplying appropriate contents or gathering profile data. We therefore consider entire behaviors (rather than single properties) as the smallest information units, allowing for automatic restructuring of application components. For this purpose, a high-level Event-Condition-Action (ECA) paradigm is proposed, which enables capturing arbitrary (and timed) clicking behaviors. Also, the architecture and components of a first prototype implementation are discussed.
Current Web applications are volatile in nature and require (frequent) changes of content, functionality, semantics, structure, navigation, presentation or implementation. Often, the application is somehow engineered at design time to be able to adapt to, for example, the user, context, or device: this is reflected by the concept of adaptation. However, changes may also occur unexpectedly over time, for example in the case of a re-design of the site or of updates due to changing external sources: in this case, we talk about evolution as a natural and intrinsic part of the application's life cycle.
The First International Workshop on Adaptation and Evolution in Web Systems Engineering studies all forms of (support for) adaptation and evolution during the Web engineering process, and in particular the synergy between the two.
Finding a previously visited page during a Web navigation is a very common and important kind of interaction. Most commercial browsers incorporate history mechanisms, which typically are simple indexes of visited pages, sorted according to the time dimension. Such mechanisms are not very effective and are quite far from giving users the impression of a semantically aware, long-term memory, as it is available to the human brain. In particular they lack associative, semantic-based mechanisms that are essential for supporting information retrieval. This paper introduces xMem (eXtended Memory Navigation), a new method to access users' navigation histories, based upon semantic and associative access. Its aim is to emulate some of the features of the human memory, so as to give users a better understanding of the context of their searches, by exploiting semantic cues characterizing the contents of visited pages.
As the amount of information on the World Wide Web continues to grow, efficient hypertext navigation mechanisms are becoming crucial. Among them, effective history mechanisms play an important role. We therefore decided to provide a new method to access users' navigation histories, called xMem (Extended Memory Navigation), building on semantic-based and associative accesses, so as to imitate some of the features of the human memory. Such a memory may give users a better understanding of the context of their searches, intermixing semantic aspects with the temporal dimension.
The paper presents the experimental study conducted on the xMem approach to revisiting the Web interaction history. Two controlled experiments have been performed with the aim of evaluating the effectiveness of the xMem history mechanism with respect to traditional Web browser histories. The results from the first experiment show a clear advantage, in terms of the time needed to complete a retrieval task, for the subjects who used the xMem prototype. Accordingly, users found retrieving previously visited pages with xMem more satisfying than using Web interaction histories sorted by time alone. The results from the second experiment show the relevance, in the process of information retrieval, of clusters and keywords semantically related to the context of the search.
The trend over the last few years in the Web service area firmly points toward seamless business logic integration and inter-enterprise collaboration. In order to reach these business goals, both technological and conceptual advances are required: some have already proven their viability, others still have to be made. Among them, Web service orchestration and choreography are of crucial importance, but still lack a widely agreed-upon development framework comprising both technological and conceptual aspects.
As the Web service domain is a fast growing and equally rapidly changing environment, this paper provides a critical snapshot of currently available standards focusing on Web services orchestration and choreography. An introduction into conceptual problems and solutions regarding orchestration and choreography of Web services is given, and mutual dependencies between the two are highlighted.
Finding a previously visited page during a Web navigation is a very common and important kind of interaction. Although most commercial browsers incorporate history mechanisms, such mechanisms are typically very simple indexes of visited pages, sorted according to the time dimension. They are not very effective and are quite far from giving users the impression of a semantically aware, long-term memory, as it is available to the human brain. In particular they lack associative, semantic-based mechanisms that are essential for supporting information retrieval.
This paper introduces xMem (Extended Memory Navigation), a new method to access users' navigation histories, based upon semantic-based and associative accesses. Its aim is to imitate some of the features of the human memory, so as to give users a better understanding of the context of their searches, by exploiting time-based ordering and semantic cues.
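The associative access could, for instance, be approximated by indexing visited pages by shared keywords, as in the following toy Python sketch; this is our own simplification for illustration, not xMem's actual algorithm.

```python
from collections import defaultdict

# Visited pages with extracted keywords (in xMem, semantic cues of content).
history = [
    ("db-conf.org", {"database", "conference"}),
    ("beach-hotels.com", {"travel", "hotel"}),
    ("vldb.org", {"database", "conference"}),
]

# Associative access: retrieve past pages sharing a keyword with the query,
# instead of scanning a purely time-ordered list of visited pages.
index = defaultdict(set)
for url, keywords in history:
    for kw in keywords:
        index[kw].add(url)

print(index["database"])  # {'db-conf.org', 'vldb.org'}
```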
The Adaptive Web is a new research area addressing the personalization of the Web experience for each user. In this paper we propose a new high-level model for the specification of Web applications that takes into account the manner in which users interact with the application for supplying appropriate contents or gathering profile data. We therefore consider entire processes (rather than single properties) as the smallest information units, allowing for automatic restructuring of application components. For this purpose, a high-level Event-Condition-Action (ECA) paradigm is proposed, which enables capturing arbitrary (and timed) clicking behaviors. Also, a possible architecture as well as a first prototype implementation are discussed.
Service-oriented computing is becoming the standard paradigm to support the creation of applications composed of e-services selected from a registry. Nowadays, we are witnessing the proliferation of standardized approaches to describe such services, but there is general agreement on distinguishing the general characteristics of services from the characteristics linked to service invocation. In many cases, the selection of services is static and based on matching techniques to retrieve the most appropriate service.
The paper presents the MAIS architecture to provide highly adaptive e-services in a mobile and interactive environment. It focuses on service selection and invocation, context-aware orchestration, and mechanisms for managing user interaction in a service-oriented architecture. We propose adaptivity at different levels: at the process level, but also at the level of the selection of a concrete service given an abstract description. Selection is based on suitable ontologies and can consider the actual context and user characteristics to retrieve the most suitable services. The paper describes the main components of the architecture and exemplifies them on a simple process for a shipping company.
This paper focuses on some issues related to the conceptual modeling of multi-channel, context-aware Web applications, and in particular it proposes some solutions conceived within the WebML method. WebML is a conceptual model for data-intensive Web applications, which already offers some constructs for one-to-one personalization and multi-channel delivery. In this paper we introduce some new extensions that allow representing a context model at the data level and exploiting it at the hypertext level for offering customized services and contents, accessible through multiple channels.