Testing

Reviewed: 2025-10-10

Dr. Emil Alegroth

Questions regarding the research presented on this page? Contact Dr. Emil Alegroth.

SERL advances software testing that is practical, robust, and tightly coupled to everyday delivery. A major thread tackles GUI-based testing: reducing manual effort with "augmented testing," and elevating the review of test artifacts, e.g. for GUI testing, into a first-class engineering activity with concrete guidelines. Robustness is addressed head-on as teams automate and collaborate, supported by similarity- and vision-aware web element localization that reduces flaky tests as systems evolve. Experience reports document where such workflows and guidelines work (or break), so teams can anticipate adoption pitfalls rather than rediscover them.
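The core idea behind similarity-based web element localization can be illustrated with a minimal sketch: instead of relying on a single brittle locator (such as an XPath), a recorded target element is matched against candidates in the evolved page by a weighted similarity over several attributes. The attributes, weights, and scoring function below are illustrative assumptions, not the actual approach from SERL's publications.

```python
# Hypothetical sketch: re-locate a recorded GUI element in an evolved page
# by scoring candidates on weighted attribute similarity, rather than
# depending on one exact locator that breaks when the page changes.

def attribute_similarity(a, b):
    """1.0 if the values are equal, else the token-overlap (Jaccard) ratio."""
    if a == b:
        return 1.0
    ta, tb = set(str(a).lower().split()), set(str(b).lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

# Weights are invented for illustration; a real tool would calibrate them
# empirically against observed page evolutions.
WEIGHTS = {"tag": 1.0, "id": 3.0, "text": 2.0, "class": 1.5}

def locate(target, candidates):
    """Return the candidate element most similar to the recorded target."""
    def score(candidate):
        return sum(w * attribute_similarity(target.get(k, ""), candidate.get(k, ""))
                   for k, w in WEIGHTS.items())
    return max(candidates, key=score)
```

A vision-aware variant would add a term comparing rendered screenshots of the elements; the weighted-sum structure stays the same.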

Decision support is another focus. Checklists help practitioners decide what to rerun during regression, while a goal–question–metric framing aligns what engineers measure with what organizations value. Complementary work structures what "quality" means for test artifacts and catalogues testing metrics for dashboards, so that dashboards show indicators that matter in practice. Risk-based testing is made more actionable with a taxonomy that links standards to the tailoring choices engineers face.
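The goal–question–metric framing can be made concrete as a small tree: a measurement goal is refined into questions, and each question into metrics a dashboard must collect. The goal, questions, and metrics below are invented examples to show the structure, not a catalogue drawn from SERL's publications.

```python
# Illustrative Goal-Question-Metric (GQM) breakdown for a regression-testing
# dashboard. The content is hypothetical; only the three-level structure
# (goal -> questions -> metrics) is the point.

gqm = {
    "goal": "Increase confidence in nightly regression runs",
    "questions": {
        "How stable are the GUI tests?": [
            "flaky-test rate per run",
            "median re-runs needed to pass",
        ],
        "How well does the suite cover recent changes?": [
            "share of changed files touched by at least one test",
        ],
    },
}

def metrics(gqm_tree):
    """Flatten a GQM tree into the list of metrics the dashboard must collect."""
    return [m for ms in gqm_tree["questions"].values() for m in ms]
```

Deriving the metric list from the goal, rather than the other way round, is what keeps the dashboard aligned with what the organization actually values.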

Bridging requirements and testing, SERL shows how to generate acceptance tests from well-formed conditionals and how to verify performance requirements in suitable test environments, shortening the path from intent to executable checks. Looking ahead, the group explores using LLMs to automatically annotate recorded GUI tests and to coordinate multi-agent solutions for GUI test generation, while keeping a clear-eyed view of the industry constraints illuminated in broad challenge mappings.
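A well-formed conditional requirement ("If X, then Y") carries enough structure to derive an acceptance-test skeleton mechanically. The pattern and Given/When/Then template below are a minimal illustrative sketch, not the generation approach described in SERL's publications.

```python
import re

# Hedged sketch: map a conditional requirement to a Given/When/Then
# acceptance-test skeleton. Pattern and template are illustrative.
CONDITIONAL = re.compile(
    r"^If (?P<condition>.+?), (?:then )?(?P<expected>.+?)\.?$",
    re.IGNORECASE,
)

def to_acceptance_test(requirement):
    """Derive a Given/When/Then skeleton from a well-formed conditional."""
    match = CONDITIONAL.match(requirement.strip())
    if not match:
        raise ValueError("requirement is not a well-formed conditional")
    return (
        "Given the system is in its initial state\n"
        f"When {match['condition']}\n"
        f"Then {match['expected']}"
    )
```

The payoff of restricting requirements to well-formed conditionals is exactly this: the condition and expected outcome can be lifted into executable checks without manual interpretation.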

Current and Future Work

Expect human-in-the-loop toolchains: generative-AI-driven augmented testing supported by review guidelines for reliable GUI regression; multi-modal locator strategies; lean checklists tied to risk and product goals; and quality models for test artifacts that inform lightweight dashboards. LLMs will support labeling and multi-agent test generation, grounded in realistic industrial constraints.

Together, this work helps teams make testing faster to do and safer to trust - connecting intent, automation, and decision-making without losing sight of real-world limits.

Important context

This text was generated by AI and edited by humans. It is based on SERL's research publications between January 2020 and September 2025. For technical questions, please contact Dr. Michael Unterkalmsteiner.