From f82b48efb7c0cb671a444ad90f0b3fa6de96bc8a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dominik=20Fuch=C3=9F?=
Date: Thu, 5 Feb 2026 14:57:38 +0100
Subject: [PATCH] =?UTF-8?q?refactor:=20Rename=20inconsistency=20concepts?=
 =?UTF-8?q?=20-=20MME=E2=86=92TEAM,=20UME=E2=86=92MEAT?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Rename Missing Model Elements (MME) to Text Entity Absent from Model
(TEAM) and Undocumented Model Elements (UME) to Model Entity Absent
from Text (MEAT) to use clearer, more descriptive terminology.

This commit includes:
- Renamed Java classes for both inconsistency types and their agents
- Updated REST API to use new type identifiers
- Updated TypeScript/React frontend (traceview-v2) components and data models
- Updated documentation and website content
- Cleaned up duplicate test files
- Added @SuppressWarnings annotations to test methods with null parameters

BREAKING CHANGE: REST API inconsistency type identifiers changed from
"MissingModelInstance"/"MissingTextForModelElement" to
"TextEntityAbsentFromModel"/"ModelEntityAbsentFromText"
---
 _approaches/inconsistency-detection.md | 15 ++++++++++-----
 _conferences/icsa23.md                 |  2 ++
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/_approaches/inconsistency-detection.md b/_approaches/inconsistency-detection.md
index c26ab5a..458d526 100644
--- a/_approaches/inconsistency-detection.md
+++ b/_approaches/inconsistency-detection.md
@@ -11,15 +11,20 @@ repositories:
   url: https://github.com/ardoco/DetectingInconsistenciesInSoftwareArchitectureDocumentationUsingTraceabilityLinkRecovery
 ---
 
+> **Note:** As of ARDoCo V2, the terminology for inconsistency types has been updated for clarity:
+>
+> - **Text Entity Absent from Model (TEAM)** - formerly "Missing Model Elements (MME)"
+> - **Model Entity Absent from Text (MEAT)** - formerly "Unmentioned/Undocumented Model Elements (UME)"
+
 ![Approach Overview](/assets/img/approaches/icsa23-inconsistency.svg){:width="100%" style="background-color: white; border-radius: 8px; padding: 10px; display: block; margin: 0 auto;"}
 
 The ArDoCo inconsistency detection approach uses trace link recovery to detect inconsistencies between natural-language architecture documentation and formal models. It identifies two kinds of issues:
 
-(a) Unmentioned Model Elements (UMEs): components or interfaces that appear in the model but are never described in the documentation;
-(b) Missing Model Elements (MMEs): elements mentioned in the text that do not exist in the model.
+(a) **Model Entity Absent from Text (MEAT)**: components or interfaces that appear in the model but are never described in the documentation;
+(b) **Text Entity Absent from Model (TEAM)**: elements mentioned in the text that do not exist in the model.
 
-The method runs a TLR procedure (namely SWATTR) and then flags any model element with no corresponding text link (a UME) or any sentence that refers to a non-modeled item (an MME).
+The method runs a TLR procedure (namely SWATTR) and then flags any model element with no corresponding text link (a MEAT) or any sentence that refers to a non-modeled item (a TEAM).
 
-- Detection strategy: Use the TLR results as a bridge. After linking as many sentences to model elements as possible, any "orphan" model nodes or text mentions indicate a consistency gap. For example, if the model has a "Cache" component with no sentence linked, that is a UME; if the doc talks about "Common" but the model lacks it, that is an MME.
-- Results: The approach achieved an excellent F1 (0.81) for the underlying trace recovery. For inconsistency detection, it attained ~93% accuracy in identifying UMEs and ~75% for MMEs, significantly better than naive baselines. These results suggest that using trace links is a promising way to find documentation-model mismatches.
+- Detection strategy: Use the TLR results as a bridge. After linking as many sentences to model elements as possible, any "orphan" model nodes or text mentions indicate a consistency gap. For example, if the model has a "Cache" component with no sentence linked, that is a MEAT; if the doc talks about "Common" but the model lacks it, that is a TEAM.
+- Results: The approach achieved an excellent F1 (0.81) for the underlying trace recovery. For inconsistency detection, it attained ~93% accuracy in identifying MEATs and ~75% for TEAMs, significantly better than naive baselines. These results suggest that using trace links is a promising way to find documentation-model mismatches.
 
diff --git a/_conferences/icsa23.md b/_conferences/icsa23.md
index 7a11ef3..6f2aaae 100644
--- a/_conferences/icsa23.md
+++ b/_conferences/icsa23.md
@@ -32,6 +32,8 @@ additional_presentations:
 
 ![Inconsistency Detection Overview](/assets/img/approaches/icsa23-inconsistency.svg){:width="100%" style="background-color: white; border-radius: 8px; padding: 10px; display: block; margin: 0 auto;"}
 
+> **Terminology Update (V2):** This paper uses the original terminology "Missing Model Elements (MME)" and "Unmentioned Model Elements (UME)". As of ARDoCo V2, these have been renamed to **Text Entity Absent from Model (TEAM)** and **Model Entity Absent from Text (MEAT)** respectively for improved clarity.
+
 ## Abstract
 
 Documenting software architecture is important for a system’s success. Software architecture documentation (SAD) makes information about the system available and eases comprehensibility. There are different forms of SADs like natural language texts and formal models with different benefits and different purposes. However, there can be inconsistent information in different SADs for the same system. Inconsistent documentation then can cause flaws in development and maintenance. To tackle this, we present an approach for inconsistency detection in natural language SAD and formal architecture models. We make use of traceability link recovery (TLR) and extend an existing approach. We utilize the results from TLR to detect unmentioned (i.e., model elements without natural language documentation) and missing model elements (i.e., described but not modeled elements). In our evaluation, we measure how the adaptations on TLR affected its performance. Moreover, we evaluate the inconsistency detection. We use a benchmark with multiple open source projects and compare the results with existing and baseline approaches. For TLR, we achieve an excellent F1-score of 0.81, significantly outperforming the other approaches by at least 0.24. Our approach also achieves excellent results (accuracy: 0.93) for detecting unmentioned model elements and good results for detecting missing model elements (accuracy: 0.75). These results also significantly outperform competing baselines. Although we see room for improvements, the results show that detecting inconsistencies using TLR is promising.
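
The BREAKING CHANGE in this commit renames the REST API's inconsistency type identifiers, so API consumers holding stored results with the old strings need a translation step. The sketch below shows one way to absorb that, in TypeScript since the commit touches the TypeScript/React frontend (traceview-v2); only the four identifier strings come from the commit itself, while the constant and function names are illustrative and not part of the actual ARDoCo API.

```typescript
// Maps pre-V2 REST inconsistency type identifiers to their V2 replacements.
// Identifier strings are taken from the BREAKING CHANGE note above;
// LEGACY_TO_V2 and migrateInconsistencyType are hypothetical names.
const LEGACY_TO_V2: Readonly<Record<string, string>> = {
  MissingModelInstance: "TextEntityAbsentFromModel",       // MME -> TEAM
  MissingTextForModelElement: "ModelEntityAbsentFromText", // UME -> MEAT
};

// Returns the V2 identifier for a legacy type; V2 (or unknown)
// identifiers pass through unchanged.
function migrateInconsistencyType(type: string): string {
  return LEGACY_TO_V2[type] ?? type;
}
```

Running stored legacy records through such a shim at deserialization time would let pre-V2 exports keep working against clients that expect the new identifiers.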