Research Group Synthesis
Reviewed: 2026-04-01
AI for Software Engineering
SERL treats natural-language artefacts - requirements, review comments, test narratives - as analyzable, actionable data. Multi-label classification organizes large requirements taxonomies; causality and conditional analysis reduce ambiguity and generate acceptance tests. In reviews and triage, NLP guides attention to likely defects and recurring issues. Testing benefits from LLM-assisted labeling of recorded GUI tests and from studies of which multi-agent setups generate the most effective GUI checks. The common outcome is shorter feedback loops and clearer links from intent to verification.
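To make the multi-label classification concrete, here is a minimal sketch in Python, assuming a scikit-learn one-vs-rest classifier over TF-IDF features; the requirement texts and label taxonomy are invented for illustration and do not reproduce SERL's published pipeline.

    # Hypothetical multi-label requirements classifier (not SERL's pipeline).
    # Assumes scikit-learn; labels and requirement texts are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    requirements = [
        "The system shall encrypt stored credentials.",
        "Search results shall render within 200 ms.",
        "Operators shall be able to export audit logs.",
    ]
    labels = [["security"], ["performance"], ["security", "usability"]]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(labels)  # binary indicator matrix, one column per label

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    clf.fit(requirements, y)

    pred = clf.predict(["Response times shall stay under one second."])
    print(mlb.inverse_transform(pred))  # e.g. [('performance',)]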
Software Engineering for AI
When products embed ML, reliability depends on the entire socio-technical system. SERL catalogues “data smells” to expose quality pitfalls, formalizes MLOps as a foundation for continuous AI delivery, and investigates threat modeling and adversarial risk in industry. Assurance becomes pipeline-native: compliance as code and enriched SBOMs integrate security, privacy, and audit into CI/CD so evidence is produced continuously.
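As an illustration of what detecting "data smells" can look like in practice, the sketch below runs a few common heuristic checks (duplicate rows, constant columns, mixed types, mostly-missing columns) over a pandas DataFrame; the specific heuristics and thresholds are assumptions for this example, and SERL's catalogue is broader.

    # Illustrative checks for a few common "data smells"; the heuristics and
    # thresholds here are assumptions, not SERL's published catalogue.
    import pandas as pd

    def find_data_smells(df: pd.DataFrame) -> list[str]:
        smells = []
        if df.duplicated().any():
            smells.append(f"{df.duplicated().sum()} duplicate rows")
        for col in df.columns:
            if df[col].nunique(dropna=False) == 1:
                smells.append(f"constant column: {col}")
            if df[col].dtype == object and df[col].dropna().map(type).nunique() > 1:
                smells.append(f"mixed types in column: {col}")
            if df[col].isna().mean() > 0.5:
                smells.append(f"mostly missing column: {col}")
        return smells

    df = pd.DataFrame({"id": [1, 1, 2], "unit": ["kg", "kg", "kg"], "value": [3, 3, None]})
    print(find_data_smells(df))  # ['1 duplicate rows', 'constant column: unit']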
Continuous Delivery, Reuse, and Quality
SERL’s work on continuous software engineering makes readiness and cost-benefit visible, helping leaders calibrate investments in automation, feedback, and governance. Strategic reuse is reframed as an ecosystem - InnerSource components, service catalogues, automated tests - sustained by decision models and contextual evidence. Quality research pinpoints metrics that track maintainability and reliability, studies how review knowledge diffuses, and connects internal quality to customer value and product half-life.
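As one concrete example of a maintainability metric (a classic one, not necessarily among those SERL has pinpointed), the Maintainability Index of Oman and Hagemeister combines Halstead volume, cyclomatic complexity, and lines of code:

    # Classic Maintainability Index; a well-known example of a maintainability
    # metric, offered here for illustration only.
    import math

    def maintainability_index(halstead_volume: float,
                              cyclomatic_complexity: float,
                              lines_of_code: int) -> float:
        """Higher is more maintainable; values below ~65 are often flagged."""
        return (171
                - 5.2 * math.log(halstead_volume)
                - 0.23 * cyclomatic_complexity
                - 16.2 * math.log(lines_of_code))

    print(round(maintainability_index(1000.0, 12, 300), 1))  # 39.9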
Requirements, Testing, and Traceability
Requirements engineering advances an ontology of quality and fitness-for-purpose, enabling lean dashboards and Bayesian reasoning over evidence. Taxonomy-driven traceability aligns regulatory obligations with artefacts early, while executable conditionals and performance-requirement verification shrink the gap from text to tests. Testing research hardens GUI automation against evolution, elevates test-artefact review, and provides risk-aligned regression guidance.
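A minimal sketch of what Bayesian reasoning over evidence can mean here, assuming a Beta-Binomial model in which each pass/fail verdict from review or verification updates the belief that an artefact meets the quality bar; the prior and the counts are invented for illustration.

    # Beta-Binomial update over review evidence; prior and counts are invented.
    def update_beta(alpha: float, beta: float, passes: int, fails: int):
        return alpha + passes, beta + fails

    alpha, beta = 1.0, 1.0          # uniform prior: no opinion yet
    alpha, beta = update_beta(alpha, beta, passes=8, fails=2)
    mean = alpha / (alpha + beta)   # posterior mean quality estimate
    print(f"P(meets quality bar) = {mean:.2f}")  # 0.75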
Security, Privacy, and Operations
Security and privacy are treated as everyday engineering work. CI-embedded compliance and capability metrics give leaders and teams shared visibility of progress and risk. In microservices and cloud-native settings, log-driven diagnostics, integration taxonomies, and actionable refactoring guidance reduce mean time to remediate without sacrificing autonomy.
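A toy example of log-driven diagnostics: flag services whose error rate in a log window crosses a threshold. The log format, service names, and threshold are all assumptions for the sketch, not a description of SERL's tooling.

    # Toy log-driven diagnostic: flag services with elevated error rates.
    # Log format, services, and the 25% threshold are assumed for the sketch.
    from collections import Counter

    logs = [
        ("checkout", "ERROR"), ("checkout", "INFO"), ("checkout", "ERROR"),
        ("catalog", "INFO"), ("catalog", "INFO"), ("checkout", "ERROR"),
    ]

    totals, errors = Counter(), Counter()
    for service, level in logs:
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1

    for service in totals:
        rate = errors[service] / totals[service]
        if rate > 0.25:                       # alert threshold (assumed)
            print(f"{service}: error rate {rate:.0%}, investigate")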
Ways of Working at Scale
Evidence on hybrid work separates rhetoric from results, showing how performance and psychological safety are maintained with clear on-site value propositions and explicit teaming habits. At organizational scale, decentralised decision-making, communities of practice, ownership and clone governance, and socio-technical alignment are the scaffolds that keep flow and quality healthy.
What’s Next
Across the portfolio, three trends stand out. First, task-specific LLM assistants will become part of routine work in requirements, testing, and triage, selected and prompted based on evidence. Second, compliance and quality evaluation will be exercised continuously in CI, with telemetry that ties internal indicators to user-level outcomes. Third, reuse ecosystems will spread hardened, secure services and test assets across products, accelerating delivery while lowering risk. The destination is engineering confidence: software that evolves quickly, proves its safety as it moves, and channels human effort where it matters most.