Publications

List of Publications

Business Informatics Group, TU Wien

UML 2.0 in der Praxis

Martin Schwaiger, Manuel Wimmer, Gerti Kappel

Handle: 20.500.12708/14656; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Diploma Thesis;

Keywords: UML, UML 2.0, Unified Modeling Language, modeling
Abstract: In 2005 the Object Management Group (OMG) adopted UML 2.0, a new version of the Unified Modeling Language which is said to be more compact, understandable, and coherent.
The main criticism of UML 1.x is its extensive and complex format, which inhibits learners from quickly utilizing the functions it offers. This thesis dealt with the question of whether UML 2.0 has become more effective and efficient. This question can be addressed through the creation of a software project in which all UML 2.0 diagram types are used.
The theoretical part contains a historical description of the UML as well as the diagram types. Ultimately, the objective is to create diagrams which enable the reader to quickly comprehend the workings of UML 2.0. The description of the software project is also included in the theoretical part. The software project concerns combining an enterprise's individual software systems into one, so that the enterprise can work with a single system instead of several.
Within the practical part a set of criteria was prepared which critically questioned the different aspects of the diagrams.

Schwaiger, M. (2009). UML 2.0 in der Praxis [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-24694

B2B Services: Worksheet-Driven Development of Modeling Artifacts and Code

Christian Huemer, P. Liegl, R. Schuster, M. Zapletal

Handle: 20.500.12708/165384; Year: 2009; Issued On: 2009-01-01; Type: Publication; Subtype: Article; Peer Reviewed:

Keywords:
Abstract: In the development process of a B2B system, it is crucial that the business experts are able to express and evaluate agreements and commitments between the partners, and that the software engineers get all necessary information to bind the private process interfaces to the public ones. UN/CEFACT's modeling methodology (UMM) is a Unified Modeling Language (UML) profile for developing B2B processes. The formalisms introduced by UMM's stereotypes facilitate the communication with the software engineers. However, business experts, who usually have a very limited understanding of UML, prefer expressing their thoughts and evaluating the results in plain-text descriptions. In this paper, we describe an approach that presents an equivalent of the UMM stereotypes and tagged values in text-based templates called worksheets. This strong alignment allows an integration into a UMM modeling tool and ensures consistency. We show how a specially designed XML-based worksheet definition language allows customization to the special needs of certain business domains. Furthermore, we demonstrate how information kept in worksheets may be used for the semi-automatic generation of pattern-based UMM artifacts which are later transformed to Web Services Description Language and Business Process Execution Language code.

Huemer, C., Liegl, P., Schuster, R., & Zapletal, M. (2009). B2B Services: Worksheet-Driven Development of Modeling Artifacts and Code. The Computer Journal, 52(8), 1006–1026. https://doi.org/10.1093/comjnl/bxn076

SmartMatcher: Improving Automatically Generated Transformations

Horst Kargl, Manuel Wimmer, Martina Seidl, Gerti Kappel

Handle: 20.500.12708/165566; Year: 2009; Issued On: 2009-01-01; Type: Publication; Subtype: Article;

Keywords:
Abstract: Model integration is one of the core components for the realization of model-driven engineering. In particular, the seamless exchange of models among different modeling tools is of special importance. This exchange is achieved by means of model transformations. However, the manual definition of model transformations is an error-prone and cumbersome task. Therefore, matching techniques originally intended for database schema integration have been reused. The results are unsatisfactory, as current matching approaches typically produce only one-to-one alignments, which are inappropriate for many integration problems. As a consequence, a detailed review and a manual post-processing step are often necessary. To tackle these problems, we propose the self-tuning framework SmartMatcher for improving automatically generated transformations. Our approach combines the power of an executable mapping language for bridging structural heterogeneities with the strength of an instance-based quality evaluation model. In an iterative, feedback-driven process a mapping between two schemas is constructed and repeatedly enhanced.

Kargl, H., Wimmer, M., Seidl, M., & Kappel, G. (2009). SmartMatcher: Improving Automatically Generated Transformations. Datenbank-Spektrum: Zeitschrift Für Datenbanktechnologien Und Information Retrieval, 9(29), 42–52. http://hdl.handle.net/20.500.12708/165566

SmartMatching in der Praxis : Evaluierung und Erweiterung eines Forschungsprototyps

Dominik Karall, Manuel Wimmer, Gerti Kappel

Handle: 20.500.12708/177887; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Diploma Thesis;

Keywords: schema matching, information integration, schema mapping, smartmatcher
Abstract: In software projects most of the data is saved in a structured way to simplify its use. These data structures are persisted in relational or XML (Extensible Markup Language) databases. For new software releases these data structures have to be modified to store new or adapted data. The adoption of a new technology often implies changes in the data structures as well. If changes are made to the data structure, in most cases the old data must be migrated to the new data structure to avoid data loss. This process, called information integration, is a time-intensive job and must be done by experts, who create the mapping rules manually and have to take care of the data structure limitations. With schema matching this process can be handled more efficiently: schema matching tools automatically build the mapping rules which can be used to transform the data into the new data structure.
SmartMatcher is a schema matching tool prototype which has been developed at the Vienna University of Technology. This prototype generates mappings out of a source and a target schema with their corresponding training instances. The latest release of the SmartMatcher contains a new internal data structure, which should allow more complex mapping operations in the future. The effect of this new data structure regarding the quality of mappings has been evaluated in this work.
Furthermore, a new feature has been integrated which allows importing existing mappings. With this feature the SmartMatcher will be able to use results from other matching tools and improve them. In the past the SmartMatcher was limited to one training instance per schema. Thus, a further feature was implemented, called Multiple Samples. This allows more training instances to be used by the SmartMatcher and improves the user experience by presenting the training instances more clearly.

Karall, D. (2009). SmartMatching in der Praxis : Evaluierung und Erweiterung eines Forschungsprototyps [Diploma Thesis, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/177887

Wirtschaftliche Prozessautomatisierung: Eine Methode zur Auswahl geeigneter Geschäftsprozesse für eine gewinnbringende Automatisierung mittels Workflow Management Systeme

Horst Gruber, A Min Tjoa, Christian Huemer

Handle: 20.500.12708/184247; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Doctoral Thesis;

Keywords: business process, automation, economic process automation, framework for selecting business processes, economic automation, Workflow Management System
Abstract: Workflow technology promises an increase in efficiency in the execution of business processes. The technology is widely accepted, but often the high costs exceed the promised benefits. Most companies perform an evaluation of workflow management system (WFMS) tools before selecting their tool of choice. However, they usually do not carefully select the business processes for automation by WFMS. Accordingly, they choose processes that have an inappropriate structure for automation and/or that are of less strategic and operational relevance. Even if appropriate processes are selected, it is of great importance to justify the planned investments into WFMS support by a profitability analysis before starting the implementation. This is rarely performed by any company. Furthermore, no adequate procedure for selecting the right business processes exists.
In this thesis a framework for the planning phase of the support of business processes by WFMS is presented. This framework covers three main modules: the selection of appropriate business processes (module 1), the selection of an appropriate WFMS (module 2), and the calculation of the profitability of the planned investment (module 3). The first module, the selection of appropriate business processes, is based on a multi-factor analysis of technical, economic and organizational criteria, leading to a process-automation portfolio that enables the selection of appropriate business processes. The WFMS selection in the second module is based on a criteria catalogue covering aspects of functionality, architecture, integration, robustness, and secured prospects. The profitability analysis in the third module is based on the selected business processes of module 1 and the chosen WFMS of module 2. This analysis is predicated on simple, measurable performance indicators that consider the tangible calculation of costs as well as the quantitative and qualitative benefits. Long-term practical experience in implementing and operating workflow management supported the design of the framework. The presented framework has been successfully used in the IT company of a banking corporation.

Gruber, H. (2009). Wirtschaftliche Prozessautomatisierung: Eine Methode zur Auswahl geeigneter Geschäftsprozesse für eine gewinnbringende Automatisierung mittels Workflow Management Systeme [Dissertation, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/184247

Web scraping : a tool evaluation

Andreas Mehlführer, Christian Huemer

Handle: 20.500.12708/184830; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Diploma Thesis;

Keywords: Web Scraper, Crawler, Evaluation
Abstract: The WWW has grown into one of the most important communication and information media. Companies can use the Internet for various tasks; one possible application area is information acquisition. As the Internet provides such a huge amount of information, it is necessary to distinguish between relevant and irrelevant data. Web scrapers can be used to gather defined content from the Internet. A lot of scrapers and crawlers are available. Hence we decided to carry out a case study with a company and analyze available programs which are applicable in this domain.
The company is the leading provider of online gaming entertainment. They offer sports betting, poker, casino games, soft games and skill games.
For the sports betting platform they use data about events (fixtures/results) which are partly supplied by external feed providers. Accordingly, there is a dependency on providers. Another problem is that smaller sports and minor leagues are not covered by the providers. This approach requires cross checks, i.e., manual comparisons with websites whenever provider data differs, and the data input users have to define a primary source (the source used when providers deliver conflicting data). Data about fixtures and results should instead be delivered by a company-owned application.
This application should be a domain-specific web crawler. It gathers information from well-defined sources. This means bwin will not depend on other providers and can cover more leagues. The coverage of the data feed integration tool will increase. Furthermore, it eliminates the cross checks between different sources. The first aim of this master thesis is to compose a functional specification for the usage of robots to gather event data via the Internet and integrate the gathered information into the existing systems. Eight selected web scrapers will be evaluated and checked against a benchmark catalogue which is created according to the functional specification. The catalogue and the selection are conceived to be reused for further projects of the company. The evaluation should result in a recommendation of which web scraper best fits the requirements of the given domain.

Mehlführer, A. (2009). Web scraping : a tool evaluation [Diploma Thesis, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/184830

Ein Ansatz zur Auflösung von Konflikten bei der Versionierung von Modellen

Simon Tragatschnig, Manuel Wimmer, Martina Seidl, Gerti Kappel

Handle: 20.500.12708/185677; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Diploma Thesis;

Keywords: versioning, conflict resolution, model versioning
Abstract: In modern software development, models are used for documentation and particularly for source code generation. Thus, syntax errors and inconsistencies within a model directly affect the generated source code.
If only one developer is working on a model, it is easier to ensure the consistency of the model because of the tools provided by the development environment. If several developers are working collaboratively on one model, which is the more realistic case, a version control system (VCS) has to be used. Unfortunately, conventional text-based approaches for versioning are not sufficient for versioning models.
Concerning conflict resolution, two major drawbacks exist when using conventional VCSs.
The first drawback concerns conflicts which arise between two versions of a model when several developers concurrently change the same model elements. These conflicts must be resolved manually by the developer, because there is currently no tool support for merging conflicting models. Errors caused by the manual merge of the model versions have a negative impact on the quality of the generated source code. The second drawback stems from information which is generated during conflict resolution and which may be of use in future conflicting situations. However, this information is currently not saved in the VCS, so it gets lost. Therefore, prior decisions of developers for conflict resolution cannot be reused, and the usage of design guidelines for developing future models cannot be verified.
The approach introduced in this thesis facilitates the description of resolution strategies for syntactic and static semantic conflicts, which allows a semi-automatic resolution of conflicts. To describe syntactic conflicts and their resolutions, a model-based, descriptive language is introduced. To resolve static semantic conflicts, graph transformations and patterns expressed in the Object Constraint Language (OCL) are employed.
Because of the model-based approach for describing resolution strategies, simple and flexible adaptations of the strategies are possible. The common structure of the resolution patterns facilitates the recording of the developer's decisions. Analysis of the recorded decisions provides further information which may support the developer in finding the appropriate resolution strategy for a conflict and in verifying the usage of design guidelines. Finally, the increased quality of the merged model versions reduces errors within the generated source code.

Tragatschnig, S. (2009). Ein Ansatz zur Auflösung von Konflikten bei der Versionierung von Modellen [Diploma Thesis, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/185677

From legacy web applications to WebML models : a framework-based reverse engineering process

Max Rieder, Manuel Wimmer, Gerti Kappel

Handle: 20.500.12708/186602; Year: 2009; Issued On: 2009-01-01; Type: Thesis; Subtype: Diploma Thesis;

Keywords: webml, legacy, web application, reverse engineering
Abstract: In the last decade the adoption of web applications instead of desktop applications has grown rapidly. The patterns and technologies for developing and running web applications have also changed a lot over time. The World Wide Web has evolved from a collection of linked static documents to a space of countless dynamic, data-centric applications. One of the oldest and most popular languages for developing dynamic web applications is PHP. Although nowadays there are proven techniques for developing web applications in PHP, many older PHP web applications were written without applying well-defined design patterns. Those web applications are hard to understand, maintain, and extend, as well as hard to migrate to new web platforms.
Nowadays many web applications are developed using Model-Driven Engineering (MDE) techniques, where software systems are described as models and code artifacts are generated out of these models. But often the requirement is not to develop a completely new web application but to capture the functionality of an existing legacy application. As it usually takes a lot of time for humans to understand the source code, it can be helpful to have a tool that analyzes the source artifacts and transforms them into a model on a higher level of abstraction. This process is called reverse engineering.
The requirement for such a tool to work is the existence of well-known patterns in the source code, as typically found in Model-View-Controller (MVC) web applications.
In this thesis a reverse engineering process from a legacy PHP web shop application into a model of the Web Modeling Language (WebML), based on static code analysis, is presented. First, the requirements on the source code for applying an automatic reverse engineering process are analyzed. The source application is refactored to fulfill these requirements, which leads to an MVC version of the example application. The refactored application is the source for the next step, a code-to-model transformation into an intermediate model of the MVC web application.
The last step is a model-to-model transformation from the MVC model into a WebML model.
The result is a WebML model that shows the most important structural and behavioral aspects of the example application. The benefit of such a model is that it provides a realistic documentation of the current state of the application. Whenever the application changes, the process can be repeated so the documentation never gets outdated. It helps humans to understand the connections between different parts of the application and can be used to support refactoring activities or the migration to another platform.

Rieder, M. (2009). From legacy web applications to WebML models : a framework-based reverse engineering process [Diploma Thesis, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/186602

Service-Oriented Enterprise Modeling and Analysis

Christian Huemer, Philipp Liegl, Rainer Schuster, Marco Zapletal, Birgit Hofreiter

Handle: 20.500.12708/26495; Year: 2009; Issued On: 2009-01-01; Type: Publication; Subtype: Book Contribution; Peer Reviewed:

Keywords:
Abstract: This chapter concentrates on the modeling and analysis of enterprises that collaborate in a service-oriented world. According to the idea of model-driven development, modeling of service-oriented enterprises collaborating in a networked configuration must address three different layers. The first layer is concerned with business models that describe the exchange of economic values among the business partners. An appropriate methodology on this level of abstraction is e3-value [1, 2]. The second layer addresses the inter-organizational business processes among business partners. The third layer addresses the business processes executed at each partner's side, i.e., what each partner implements locally to contribute to the business collaboration.

Huemer, C., Liegl, P., Schuster, R., Zapletal, M., & Hofreiter, B. (2009). Service-Oriented Enterprise Modeling and Analysis. In Handbook of Enterprise Integration (pp. 307–322). Auerbach Publications. http://hdl.handle.net/20.500.12708/26495

Lost in Translation? Transformation Nets to the Rescue!

Manuel Wimmer, Angelika Kusel, Thomas Reiter, Werner Retschitzegger, Wieland Schwinger, Gerti Kappel, Jianhua Yang, Athula Ginige, Heinrich C. Mayr, Ralf-Detlef Kutsche

Handle: 20.500.12708/52679; Year: 2009; Issued On: 2009-01-01; Type: Publication; Subtype: Inproceedings; Peer Reviewed:

Keywords:
Abstract: The vision of Model-Driven Engineering places models as first-class artifacts throughout the software lifecycle. An essential prerequisite is the availability of proper transformation languages allowing not only code generation but also augmentation, migration, or translation of models themselves. Current approaches, however, lack convenient facilities for debugging and for ensuring an understanding of the transformation process. To tackle these problems, we propose a novel formalism for the development of model transformations which is based on Colored Petri Nets. This allows, first, for an explicit, process-oriented execution model of a transformation, thereby overcoming the impedance mismatch between the specification and execution of model transformations, which is the prerequisite for convenient debugging. Second, by providing a homogeneous representation of all artifacts involved in a transformation, including metamodels, models, and the actual transformation logic itself, the understandability of model transformations is enhanced.

Wimmer, M., Kusel, A., Reiter, T., Retschitzegger, W., Schwinger, W., & Kappel, G. (2009). Lost in Translation? Transformation Nets to the Rescue! In J. Yang, A. Ginige, H. C. Mayr, & R.-D. Kutsche (Eds.), Information Systems: Modeling, Development, and Integration (pp. 315–327). Springer. https://doi.org/10.1007/978-3-642-01112-2_33