
RDK-60291 [telemetry] RDK Coverity Defect Resolution for Device Management #264

Open
madhubabutt wants to merge 1 commit into develop from feature/RDKE-60291

Conversation

@madhubabutt
Contributor

No description provided.

@madhubabutt requested a review from a team as a code owner on February 14, 2026 07:06
Copilot AI review requested due to automatic review settings February 14, 2026 07:06
Contributor

Copilot AI left a comment


Pull request overview

This pull request addresses Coverity static analysis defects across multiple files in the RDK telemetry component. The changes focus on improving error handling, preventing integer underflows, fixing memory leaks, and enhancing security for temporary file creation.

Changes:

  • Fixed potential integer underflow issues in queue operations and hash map count handling
  • Enhanced file operation error checking with proper validation of stat, fread, and file sizes
  • Corrected memory management to prevent leaks from failed realloc calls
  • Improved security by setting restrictive umask before temporary file creation

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Summary per file:

  • source/utils/t2collection.c: prevents integer underflow when checking queue indices by storing the count and validating it before subtraction
  • source/utils/persistence.c: adds comprehensive error checking for stat, file-size validation, and fread operations
  • source/reportgen/reportgen.c: fixes memory leaks by using temporary pointers for realloc, with proper cleanup on allocation failure
  • source/protocol/rbusMethod/rbusmethodinterface.c: adds a clarifying comment about the mutex synchronization pattern
  • source/dcautil/dca.c: implements secure temporary file creation by setting umask(0077) before mkstemp
  • source/commonlib/telemetry_busmessage_sender.c: fixes a null pointer dereference by checking strvalue before use and initializing the ret variable
  • source/ccspinterface/rbusInterface.c: adds a null check for param before strdup so a null pointer is never passed
  • source/bulkdata/reportprofiles.c: prevents integer underflow by validating the hash_map_count return value before subtraction
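Two of the files above guard a subtraction on an unsigned count. The hazard being fixed can be sketched with a hypothetical helper (not code from the PR):

```c
#include <stddef.h>

/* On unsigned types, 0 - 1 wraps to SIZE_MAX, so the count must be
 * validated before subtracting. safe_decrement is an illustrative
 * stand-in for the hash_map_count()/queue-index checks above. */
static size_t safe_decrement(size_t count)
{
    if (count == 0)
        return 0;        /* nothing to remove; avoid wraparound */
    return count - 1;
}
```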


}
if (filestat.st_size < 0 || (size_t)filestat.st_size >= sizeof(data))
{
T2Error("File size %lld exceeds buffer capacity %zu for file : %s\n", (long long)filestat.st_size, sizeof(data) - 1, filePath);

Copilot AI Feb 14, 2026


The buffer size check correctly prevents buffer overflow by requiring filestat.st_size to be strictly less than sizeof(data). This reserves space for the null terminator already present from initialization. However, the error message is misleading: it says "exceeds buffer capacity %zu" but shows sizeof(data) - 1, which is 19, when the actual capacity is 20 bytes. The error message should either say "exceeds buffer capacity %zu" with sizeof(data) as the value, or "exceeds available data capacity %zu" with sizeof(data) - 1 to clarify that one byte is reserved for null termination.

Suggested change
T2Error("File size %lld exceeds buffer capacity %zu for file : %s\n", (long long)filestat.st_size, sizeof(data) - 1, filePath);
T2Error("File size %lld exceeds available data capacity %zu for file : %s\n", (long long)filestat.st_size, sizeof(data) - 1, filePath);

@madhubabutt force-pushed the feature/RDKE-60291 branch 2 times, most recently from f0b34db to e143c2f on February 15, 2026 05:53
Copilot AI review requested due to automatic review settings February 15, 2026 05:53
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 17 out of 17 changed files in this pull request and generated 8 comments.

Comments suppressed due to low confidence (1)

source/bulkdata/profile.c:645

  • In the maxUploadLatency HTTP path, the error branch destroys reportcond and exits via goto reportThreadEnd but does not destroy reportMutex. Since reportMutex is initialized each cycle when maxUploadLatency > 0, this can leak resources and/or hit undefined behavior on the next pthread_mutex_init. Ensure reportMutex is destroyed on this error path too.
                            {
                                T2Error("Profile : %s pthread_cond_timedwait ERROR!!!\n", profile->name);
                                pthread_mutex_unlock(&profile->reportMutex);
                                pthread_cond_destroy(&profile->reportcond);
                                if(httpUrl)


Comment on lines +675 to +679
bool is_signal_executing = true;
while(is_signal_executing && !is_activation_time_out)
{
if(pthread_mutex_lock(&tProfile->tMutex) != 0)
{

Copilot AI Feb 15, 2026


signalrecived_and_executing is a single global flag but it’s being synchronized using tProfile->tMutex. Since each profile has a different tMutex, this does not actually synchronize accesses across multiple TimeoutThreads/profiles, so the data-race/atomicity risk remains. Consider making this state per-profile (field on SchedulerProfile) or protecting the global with a single shared mutex (e.g., scMutex) for all reads/writes.
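The shared-mutex alternative suggested here can be sketched as follows. This is a minimal sketch under the review comment's assumptions; the identifier names mirror the comment and are not the real symbols in the scheduler code:

```c
#include <pthread.h>
#include <stdbool.h>

/* One mutex shared by every thread that touches the global flag,
 * instead of each profile's per-profile tMutex. */
static pthread_mutex_t scMutex = PTHREAD_MUTEX_INITIALIZER;
static bool signalReceivedAndExecuting = false;

static void set_signal_executing(bool value)
{
    pthread_mutex_lock(&scMutex);
    signalReceivedAndExecuting = value;
    pthread_mutex_unlock(&scMutex);
}

static bool is_signal_executing(void)
{
    pthread_mutex_lock(&scMutex);
    bool value = signalReceivedAndExecuting;   /* read under the same lock */
    pthread_mutex_unlock(&scMutex);
    return value;
}
```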

Comment on lines +919 to 923
// Copy threadExists while holding reportInProgressMutex to avoid holding both locks
bool threadStillExists = profile->threadExists;
pthread_mutex_unlock(&profile->reportInProgressMutex);

/* To avoid previous report thread to go into zombie state, mark it detached. */

Copilot AI Feb 15, 2026


threadExists is written while holding reuseThreadMutex in CollectAndReport(), but it’s read here while holding reportInProgressMutex. Because these are different mutexes, this read/write isn’t synchronized and can still be a data race. Consider consistently guarding threadExists with the same mutex everywhere (or make it atomic if it must be read without reuseThreadMutex).
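The atomic option mentioned at the end of this comment would look roughly like the sketch below (C11 atomics; the accessor names are illustrative, and in the real code the flag lives on the Profile struct rather than as a file-scope variable):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* An atomic flag can be read race-free without holding either
 * reuseThreadMutex or reportInProgressMutex. */
static atomic_bool threadExists;   /* zero-initialized to false */

static void mark_thread_exists(bool value)
{
    atomic_store(&threadExists, value);   /* writer side */
}

static bool thread_exists(void)
{
    return atomic_load(&threadExists);    /* reader side, no mutex needed */
}
```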

{
T2Error("Profile : %s pthread_cond_timedwait ERROR!!!\n", profile->name);
pthread_mutex_unlock(&profile->reportMutex);
pthread_cond_destroy(&profile->reportcond);

Copilot AI Feb 15, 2026


In the RBUS_METHOD maxUploadLatency error path, the code destroys reportcond and exits via goto reportThreadEnd but does not destroy reportMutex. Since reportMutex is initialized when maxUploadLatency > 0, this can leak resources and can lead to undefined behavior if it’s later re-initialized. Ensure pthread_mutex_destroy(&profile->reportMutex) is paired with the init on this error path too.

Suggested change
pthread_cond_destroy(&profile->reportcond);
pthread_cond_destroy(&profile->reportcond);
pthread_mutex_destroy(&profile->reportMutex);

}
pthread_mutex_unlock(&xcThreadMutex);
}
while(isXconfInit); //End of do while loop

Copilot AI Feb 15, 2026


isXconfInit is used as the do { ... } while(isXconfInit) loop condition without holding xcThreadMutex, but it is written under xcThreadMutex elsewhere. This is a data race (undefined behavior). Capture isXconfInit under xcThreadMutex into a local variable (or make it atomic) and use that for the loop condition.

Comment on lines +627 to +630
while (profile->enable && n != ETIMEDOUT)
{
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);
}

Copilot AI Feb 15, 2026


The maxUploadLatency wait loop uses while (profile->enable && n != ETIMEDOUT) and re-calls pthread_cond_timedwait even after a successful signal (n == 0). This effectively ignores the “signaled before timeout” case and can make the later else if (n == 0) unreachable. Consider waiting once, or looping only for EINTR/spurious wakeups using an explicit predicate and breaking on n == 0.

Suggested change
while (profile->enable && n != ETIMEDOUT)
{
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);
}
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);

Comment on lines +694 to +696
while (profile->enable && n != ETIMEDOUT)
{
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);

Copilot AI Feb 15, 2026


Same issue in the RBUS_METHOD maxUploadLatency path: while (profile->enable && n != ETIMEDOUT) will keep waiting after a successful signal (n == 0), so the “signaled before timeout” branch won’t behave as intended. Use a single timed wait or a predicate-based loop that breaks on n == 0 (and only retries on EINTR/spurious wakeups).

Suggested change
while (profile->enable && n != ETIMEDOUT)
{
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);
while (profile->enable)
{
n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);
if (n == 0 || n == ETIMEDOUT)
break;
if (n == EINTR)
continue;
/* For any other error, break and let the error path handle it */
break;

Comment on lines +610 to +614
else
{
if(pthread_mutex_unlock(&sTDMutex) != 0)
{
T2Error("%s pthread_mutex_unlock for sTDMutex failed\n", __FUNCTION__);

Copilot AI Feb 15, 2026


When stopDispatchThread is already true, T2ER_Uninit() just unlocks sTDMutex and skips destroying erMutex/sTDMutex/erCond. This can leak resources and makes re-initialization unsafe (re-calling pthread_*_init on already-initialized objects). Consider destroying the mutexes/cond unconditionally during uninit once you’ve ensured the dispatch thread is not running.

}
pthread_mutex_unlock(&profile->reportMutex);
pthread_cond_destroy(&profile->reportcond);
pthread_mutex_destroy(&profile->reportMutex);

Copilot AI Feb 15, 2026


reportMutex is destroyed in the HTTP maxUploadLatency path (pthread_mutex_destroy(&profile->reportMutex);), but the RBUS_METHOD maxUploadLatency path later only destroys reportcond and leaves reportMutex initialized. This asymmetry can leak the mutex and may break subsequent iterations if reportMutex is re-initialized. Consider ensuring both protocol branches destroy reportMutex consistently after use.

Suggested change
pthread_mutex_destroy(&profile->reportMutex);

Copilot AI review requested due to automatic review settings February 15, 2026 06:35
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 17 out of 17 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (2)

source/bulkdata/profile.c:738

  • In the RBUS_METHOD + maxUploadLatency path, reportMutex is initialized (pthread_mutex_init) but never destroyed after pthread_cond_destroy(&profile->reportcond). This can leak OS resources and makes subsequent pthread_mutex_init calls on the same mutex undefined behavior. Ensure reportMutex is destroyed on all exit paths where it is initialized (including the error path that currently only destroys reportcond).
                            pthread_mutex_lock(&profile->reportMutex);
                            while (profile->enable && n != ETIMEDOUT)
                            {
                                n = pthread_cond_timedwait(&profile->reportcond, &profile->reportMutex, &timeout);
                            }
                            if(n == ETIMEDOUT)
                            {
                                T2Info("TIMEOUT for maxUploadLatency of profile %s\n", profile->name);
                                ret = sendReportsOverRBUSMethod(profile->t2RBUSDest->rbusMethodName, profile->t2RBUSDest->rbusMethodParamList, jsonReport);
                            }
                            else if(n == 0)
                            {
                                T2Info("Profile : %s signaled before timeout, skipping report upload\n", profile->name);
                            }
                            else
                            {
                                T2Error("Profile : %s pthread_cond_timedwait ERROR!!!\n", profile->name);
                                pthread_mutex_unlock(&profile->reportMutex);
                                pthread_cond_destroy(&profile->reportcond);
                                pthread_mutex_lock(&profile->reuseThreadMutex);
                                pthread_mutex_lock(&profile->reportInProgressMutex);
                                profile->reportInProgress = false;
                                pthread_mutex_unlock(&profile->reportInProgressMutex);
                                pthread_mutex_unlock(&profile->reuseThreadMutex);
                                if(profile->triggerReportOnCondition)
                                {
                                    T2Info(" Unlock trigger condition mutex and set report on condition to false \n");
                                    profile->triggerReportOnCondition = false ;
                                    pthread_mutex_unlock(&profile->triggerCondMutex);

                                    if(profile->callBackOnReportGenerationComplete)
                                    {
                                        T2Debug("Calling callback function profile->callBackOnReportGenerationComplete \n");
                                        profile->callBackOnReportGenerationComplete(profile->name);
                                    }
                                }
                                else
                                {
                                    T2Debug(" profile->triggerReportOnCondition is not set \n");
                                }
                                //return NULL;
                                goto reportThreadEnd;
                            }
                            pthread_mutex_unlock(&profile->reportMutex);
                            pthread_cond_destroy(&profile->reportcond);
                        }

source/xconf-client/xconfclient.c:882

  • In getRemoteConfigURL retry loop, xcThreadMutex is held while taking xcMutex and waiting on xcCond. That blocks startXConfClient()/stopXConfClient()/uninitXConfClient() from acquiring xcThreadMutex to change stopFetchRemoteConfiguration, so restarts/shutdown can be delayed until RFC_RETRY_TIMEOUT (or longer if retries continue). Consider releasing xcThreadMutex before locking xcMutex / calling pthread_cond_timedwait, and only re-lock it to re-check stopFetchRemoteConfiguration (or use an atomic flag).
        pthread_mutex_lock(&xcThreadMutex);
        while(!stopFetchRemoteConfiguration)
        {
            pthread_mutex_unlock(&xcThreadMutex);
            urlFetchStatus = getRemoteConfigURL(&configURL);
            pthread_mutex_lock(&xcThreadMutex);
            if(urlFetchStatus == T2ERROR_SUCCESS)
            {
                break;
            }
            if (urlFetchStatus == T2ERROR_INVALID_RESPONSE)
            {
                T2Info("Config URL is not set to valid value. Xconfclient shall not proceed for T1.0 settings fetch attempts \n");
                break;
            }

            pthread_mutex_lock(&xcMutex);
            memset(&_ts, 0, sizeof(struct timespec));


T2Error("%s pthread_mutex_lock for sTDMutex failed\n", __FUNCTION__);
return T2ERROR_FAILURE;
}
ret = pthread_detach(erThread);

Copilot AI Feb 15, 2026


T2ER_StopDispatchThread detaches erThread, but this module supports stop/start cycles (ReportProfiles calls Stop then Start). Detaching removes the ability to join and guarantee the old dispatch thread has fully exited before a new one is created, which can lead to two dispatch threads running concurrently and racing on eQueue/markers. Prefer pthread_join(erThread, ...) (or another explicit shutdown handshake) instead of pthread_detach, and only allow restart after the thread is confirmed stopped.

Suggested change
ret = pthread_detach(erThread);
ret = pthread_join(erThread, NULL);
if(ret != 0)
{
T2Error("%s pthread_join failed with error code %d\n", __FUNCTION__, ret);
pthread_mutex_unlock(&sTDMutex);
return T2ERROR_FAILURE;
}

Comment on lines 100 to +116
pthread_mutex_lock(&tmpRpMutex);
pthread_mutex_lock(&rpMutex);
shouldContinue = !stopProcessing;
pthread_mutex_unlock(&rpMutex);
if(!shouldContinue)
{
pthread_mutex_unlock(&tmpRpMutex);
break;
}
T2Info("%s: Waiting for event from tr-181 \n", __FUNCTION__);
pthread_cond_wait(&tmpRpCond, &tmpRpMutex);
while(t2_queue_count(tmpRpQueue) == 0 && shouldContinue)
{
pthread_cond_wait(&tmpRpCond, &tmpRpMutex);
pthread_mutex_lock(&rpMutex);
shouldContinue = !stopProcessing;
pthread_mutex_unlock(&rpMutex);
}

Copilot AI Feb 15, 2026


process_tmprp_thread protects tmpRpQueue with tmpRpMutex, but datamodel_processProfile pushes to tmpRpQueue and signals tmpRpCond while holding rpMutex (not tmpRpMutex). Since t2_queue_* is not thread-safe, this inconsistent locking can cause data races and missed wakeups. Consider using tmpRpMutex consistently for tmpRpQueue operations + tmpRpCond signaling (and signal tmpRpCond during shutdown) so the consumer can reliably wake and the queue is properly synchronized.
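The consistent-locking pattern the comment proposes is the classic condition-variable queue: producer, consumer, and shutdown path all use the same mutex and condition variable. A minimal sketch, where the names mirror the review comment and a plain counter stands in for the t2_queue_* calls:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t tmpRpMutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  tmpRpCond  = PTHREAD_COND_INITIALIZER;
static int  queueCount = 0;        /* stand-in for t2_queue_count() */
static bool stopProcessing = false;

static void producer_push(void)
{
    pthread_mutex_lock(&tmpRpMutex);   /* same mutex as the consumer */
    queueCount++;                      /* t2_queue_push(...) */
    pthread_cond_signal(&tmpRpCond);
    pthread_mutex_unlock(&tmpRpMutex);
}

/* Returns 1 if an item was consumed, 0 if woken for shutdown. */
static int consumer_wait_and_pop(void)
{
    pthread_mutex_lock(&tmpRpMutex);
    while (queueCount == 0 && !stopProcessing)
        pthread_cond_wait(&tmpRpCond, &tmpRpMutex);
    int got = 0;
    if (queueCount > 0) {
        queueCount--;                  /* t2_queue_pop(...) */
        got = 1;
    }
    pthread_mutex_unlock(&tmpRpMutex);
    return got;
}

static void request_shutdown(void)
{
    pthread_mutex_lock(&tmpRpMutex);
    stopProcessing = true;
    pthread_cond_signal(&tmpRpCond);   /* wake the consumer on shutdown */
    pthread_mutex_unlock(&tmpRpMutex);
}
```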

Comment on lines 1381 to +1386
pthread_mutex_lock(&profile->reportInProgressMutex);
while (profile->reportInProgress && !profile->threadExists)
{
pthread_cond_wait(&profile->reportInProgressCond, &profile->reportInProgressMutex);
}
bool threadStillExists = profile->threadExists;

Copilot AI Feb 15, 2026


deleteProfile waits on reportInProgressCond with condition while (profile->reportInProgress && !profile->threadExists), but reportInProgressCond is only signaled when CollectAndReport sets reportInProgress=true (not when it becomes false). As written, this wait is unlikely to provide the intended “wait until report completes” behavior, and the predicate itself appears inverted (it waits when thread does not exist). Consider reworking this synchronization to wait for reportInProgress to become false (with a corresponding signal/broadcast when it is cleared), and use a predicate that matches the state transitions you need to observe.
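The rework the comment describes, waiting for reportInProgress to become false and broadcasting when it is cleared, can be sketched as below. report_state_t is a hypothetical container; in the real code these fields live on the Profile struct:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    bool            reportInProgress;
} report_state_t;

/* Worker side: clear the flag and broadcast so waiters observe
 * the true -> false transition. */
static void report_done(report_state_t *s)
{
    pthread_mutex_lock(&s->mtx);
    s->reportInProgress = false;
    pthread_cond_broadcast(&s->cond);
    pthread_mutex_unlock(&s->mtx);
}

/* deleteProfile side: wait while a report is still running. */
static void wait_until_report_done(report_state_t *s)
{
    pthread_mutex_lock(&s->mtx);
    while (s->reportInProgress)        /* predicate matches the transition */
        pthread_cond_wait(&s->cond, &s->mtx);
    pthread_mutex_unlock(&s->mtx);
}
```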

Comment on lines +1260 to 1267
// Free url_params to avoid memory leak before continuing
if(url_params)
{
free(url_params);
url_params = NULL;
params_len = 0; // Reset params_len to match freed state
}
continue;

Copilot AI Feb 15, 2026


On realloc failure for url_params, the code frees url_params and continues building the query string. That can silently drop already-appended parameters and produce an incomplete URL, which is likely worse than failing the request. Prefer aborting URL preparation on allocation failure (clean up curl/httpUrl/url_params and return NULL) so callers can handle the error deterministically.

Suggested change
// Free url_params to avoid memory leak before continuing
if(url_params)
{
free(url_params);
url_params = NULL;
params_len = 0; // Reset params_len to match freed state
}
continue;
if (url_params)
{
free(url_params);
url_params = NULL;
}
if (httpUrl)
{
free(httpUrl);
httpUrl = NULL;
}
curl_easy_cleanup(curl);
return NULL;
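The temporary-pointer realloc pattern behind both this comment and the reportgen.c fix can be isolated into a small helper. grow_buffer is hypothetical, not a function from the PR; the point is that realloc's result never overwrites the only copy of the old pointer:

```c
#include <stdlib.h>

/* Grow a buffer safely. On allocation failure the old block is freed
 * and NULL is returned so the caller can abort deterministically. */
static char *grow_buffer(char *buf, size_t *cap, size_t needed)
{
    if (needed <= *cap)
        return buf;
    char *tmp = realloc(buf, needed);  /* never `buf = realloc(buf, ...)` */
    if (tmp == NULL) {
        free(buf);                     /* old block is still valid here */
        *cap = 0;
        return NULL;                   /* caller treats this as fatal */
    }
    *cap = needed;
    return tmp;
}
```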

@madhubabutt force-pushed the feature/RDKE-60291 branch 2 times, most recently from 9471951 to 51053e3 on February 16, 2026 08:48
Copilot AI review requested due to automatic review settings February 16, 2026 08:48
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 17 out of 17 changed files in this pull request and generated 6 comments.



Comment on lines +473 to 485
if (filestat.st_size < 0 || (size_t)filestat.st_size >= sizeof(data))
{
T2Error("File size %lld exceeds buffer capacity %zu for file : %s\n", (long long)filestat.st_size, sizeof(data) - 1, filePath);
fclose(fp);
return T2ERROR_FAILURE;
}
if (fread(data, sizeof(char), filestat.st_size, fp) != (size_t)filestat.st_size)
{
T2Error("Failed to read complete data from file : %s\n", filePath);
fclose(fp);
return T2ERROR_FAILURE;
}
*privMode = strdup(data);

Copilot AI Feb 16, 2026


The buffer size check at line 475 uses sizeof(data) - 1 in the error message, suggesting the intent is to reserve space for a null terminator. However, the actual check at line 473 uses sizeof(data) which is 20. After reading filestat.st_size bytes, the data buffer is not null-terminated before being passed to strdup at line 485. If the file contains exactly 19 or 20 bytes without a null terminator, strdup may read beyond the buffer. Consider either null-terminating the buffer after fread or adjusting the size check to filestat.st_size >= sizeof(data) - 1 to ensure space for null termination.
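A version with both the size check and the explicit null termination the comment asks for could look like this. read_small_file is an illustrative helper, not the function from persistence.c:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Read a small file into a fixed buffer, always null-terminating.
 * Returns 0 on success, -1 on any error. The size check reserves
 * one byte of `cap` for the terminator. */
static int read_small_file(const char *path, char *data, size_t cap)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    if (st.st_size < 0 || (size_t)st.st_size >= cap)
        return -1;                     /* leaves room for the NUL */
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;
    size_t n = fread(data, 1, (size_t)st.st_size, fp);
    fclose(fp);
    if (n != (size_t)st.st_size)
        return -1;
    data[n] = '\0';                    /* safe: n < cap was verified above */
    return 0;
}
```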

Comment on lines +884 to +891
pthread_mutex_unlock(&profile->reuseThreadMutex);
T2Info("%s --out Exiting collect and report Thread\n", __FUNCTION__);
pthread_mutex_lock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reportInProgressMutex);
profile->reportInProgress = false;
pthread_mutex_unlock(&profile->reportInProgressMutex);
pthread_mutex_unlock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reuseThreadMutex);

Copilot AI Feb 16, 2026


Redundant mutex lock/unlock sequence. At line 884, reuseThreadMutex is unlocked after the do-while loop ends. Then immediately at line 886, the same mutex is locked again. This appears unnecessary - the mutex could remain locked continuously from line 884 through the operations at lines 887-893, or the unlock at line 884 could be removed if the mutex needs to stay locked for the subsequent operations. Consider simplifying this lock/unlock pattern.

Suggested change
pthread_mutex_unlock(&profile->reuseThreadMutex);
T2Info("%s --out Exiting collect and report Thread\n", __FUNCTION__);
pthread_mutex_lock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reportInProgressMutex);
profile->reportInProgress = false;
pthread_mutex_unlock(&profile->reportInProgressMutex);
pthread_mutex_unlock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reuseThreadMutex);
T2Info("%s --out Exiting collect and report Thread\n", __FUNCTION__);
pthread_mutex_lock(&profile->reportInProgressMutex);
profile->reportInProgress = false;
pthread_mutex_unlock(&profile->reportInProgressMutex);

Comment on lines +1063 to 1065
pthread_mutex_unlock(&xcThreadMutex);
}
while(isXconfInit); //End of do while loop

Copilot AI Feb 16, 2026


Potential race condition accessing isXconfInit. At line 1065, the do-while loop condition checks isXconfInit without holding xcThreadMutex, but isXconfInit is modified under xcThreadMutex protection at lines 1094 and 1174. This could lead to reading stale values due to compiler optimizations or CPU caching. The loop condition should either use a local cached value set while holding the mutex, or the check should be performed inside a mutex-protected region.

Suggested change
pthread_mutex_unlock(&xcThreadMutex);
}
while(isXconfInit); //End of do while loop
/* Cache isXconfInit under lock to use in the loop condition */
bool continueLoop = isXconfInit;
pthread_mutex_unlock(&xcThreadMutex);
}
while(continueLoop); //End of do while loop

Comment on lines +650 to 657
pthread_mutex_lock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reportInProgressMutex);
profile->reportInProgress = false;
pthread_mutex_unlock(&profile->reportInProgressMutex);
pthread_mutex_unlock(&profile->reuseThreadMutex);
if(profile->triggerReportOnCondition)
{
T2Info(" Unlock trigger condition mutex and set report on condition to false \n");

Copilot AI Feb 16, 2026


Missing pthread_mutex_destroy for profile->reportMutex in error path. When pthread_cond_timedwait returns an error (the else block starting at line 640), the mutex initialized at line 577 and locked at line 626 is unlocked at line 643 and the condition variable is destroyed at line 644, but the mutex itself is never destroyed before jumping to reportThreadEnd at line 672. This creates a resource leak. The code should call pthread_mutex_destroy after line 644.

Comment on lines +712 to 719
pthread_mutex_lock(&profile->reuseThreadMutex);
pthread_mutex_lock(&profile->reportInProgressMutex);
profile->reportInProgress = false;
pthread_mutex_unlock(&profile->reportInProgressMutex);
pthread_mutex_unlock(&profile->reuseThreadMutex);
if(profile->triggerReportOnCondition)
{
T2Info(" Unlock trigger condition mutex and set report on condition to false \n");

Copilot AI Feb 16, 2026


Missing pthread_mutex_destroy for profile->reportMutex in error path. When pthread_cond_timedwait returns an error (the else block starting at line 707), the mutex initialized at line 577 and locked at line 693 is unlocked at line 710 and the condition variable is destroyed at line 711, but the mutex itself is never destroyed before jumping to reportThreadEnd at line 734. This creates a resource leak. The code should call pthread_mutex_destroy after line 711.

Comment on lines +159 to +161
pthread_mutex_lock(&rpMutex);
shouldContinue = !stopProcessing;
pthread_mutex_unlock(&rpMutex);

Copilot AI Feb 16, 2026


Potential deadlock from nested mutex acquisition. At lines 159-161, rpMutex is acquired while holding rpMsgMutex (locked at line 155). This violates the stated intention in the comment at lines 143-145 to avoid nested mutex acquisition. If another thread acquires rpMutex then tries to acquire rpMsgMutex, a deadlock could occur. Consider checking shouldContinue before entering the while loop at line 156, or restructure to avoid holding rpMsgMutex when acquiring rpMutex.

Contributor


This additional looped call doesn't look right.

Comment on lines +159 to +161
pthread_mutex_lock(&rpMutex);
shouldContinue = !stopProcessing;
pthread_mutex_unlock(&rpMutex);
Contributor


This additional looped call doesn't look right.
