I still remember my first major Business Central catastrophe. A custom sales order processing extension we'd built suddenly stopped working during month-end closing—right when the sales team needed to process their final orders to hit quota. The error message was cryptically unhelpful: "A call to Microsoft.Dynamics.Nav.Runtime.NavCSide failed."
After four hours of frantic searching, I discovered the culprit: our code was trying to modify a posted sales invoice record that was locked by another process. The solution took less than 5 minutes to implement, but finding the problem felt like searching for a needle in a digital haystack.
That day changed how I approached BC development forever. Over the last seven years building and fixing Business Central extensions, I've developed a systematic debugging approach that has saved my team countless hours and prevented numerous production emergencies.
This guide shares what I've learned the hard way, so you don't have to.
Business Central presents unique debugging challenges that other platforms don't.
After debugging hundreds of issues, I've noticed that BC problems typically fall into recognizable patterns that require specific approaches.
Rather than offer generic debugging advice, let's examine the specific types of BC bugs you'll encounter and exactly how to approach each one:
Example: A page extension works for some users but fails silently for others.
Root causes typically include:
The debugging approach that works:
Enable the AL Debugger session:
Use strategic message logging:
procedure ProcessDocument(DocumentNo: Code[20])
var
SalesHeader: Record "Sales Header";
begin
Message('Starting process for %1', DocumentNo);
if not SalesHeader.Get(SalesHeader."Document Type"::Order, DocumentNo) then begin
Message('Sales order %1 not found', DocumentNo);
exit;
end;
// More code with strategic messages
end;
procedure DiagnoseEnvironment()
var
    ClientTypeManagement: Codeunit "Client Type Management";
begin
    Message('Client type: %1', Format(ClientTypeManagement.GetCurrentClientType()));
    Message('User ID: %1', UserId);
    Message('Company: %1', CompanyName);
end;
Real example: We had a page that mysteriously crashed for certain users. After adding diagnostic messages, we discovered they had their browser zoom level set to 150%, which caused a rendering bug in a custom control. Only by seeing exactly where execution stopped could we pinpoint this unusual cause.
Example: A report that used to run in seconds now takes minutes to complete.
Root causes typically include:
The debugging approach that works:
procedure ProcessSalesOrders()
var
StartTime: DateTime;
ElapsedSeconds: Decimal;
begin
StartTime := CurrentDateTime;
// Your processing code here
ElapsedSeconds := (CurrentDateTime - StartTime) / 1000;
Message('Processing took %1 seconds', ElapsedSeconds);
end;
procedure AnalyzePerformance()
var
StartTime: DateTime;
begin
StartTime := CurrentDateTime;
Step1_LoadData();
LogStepTime('Load Data', StartTime);
StartTime := CurrentDateTime;
Step2_ProcessRecords();
LogStepTime('Process Records', StartTime);
StartTime := CurrentDateTime;
Step3_GenerateOutput();
LogStepTime('Generate Output', StartTime);
end;
local procedure LogStepTime(StepName: Text; StartTime: DateTime)
begin
    // A DateTime difference is a Duration, which Format renders in readable units
    Message('Step %1 took %2', StepName, CurrentDateTime - StartTime);
end;
procedure CheckQuerySize()
var
    Customer: Record Customer;
    RecordCount: Integer;
begin
    Customer.SetFilter("Location Code", 'EAST|WEST');
    Customer.SetFilter("Date Filter", '>=%1', WorkDate());
    RecordCount := Customer.Count();
    Message('Query will process %1 records', RecordCount);
    if RecordCount > 1000 then
        Message('Warning: Large recordset may cause performance issues');
end;
Real example: A client's sales analysis report suddenly took 4+ minutes instead of 10 seconds. Using step timing, we discovered a missing index on a custom field that had been fine with 10,000 records but became a bottleneck at 100,000 records. Adding the index restored performance immediately.
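If you hit the same pattern, the usual fix is a secondary key covering the fields you filter or sort on. Here's a minimal sketch of that kind of fix, with a hypothetical object number and field name standing in for the client's actual custom field:
tableextension 50120 "Customer Perf. Ext" extends Customer
{
    fields
    {
        // Hypothetical custom field that the slow report filtered on
        field(50120; "Sales Region Code"; Code[20])
        {
            DataClassification = CustomerContent;
        }
    }
    keys
    {
        // Secondary key so filtering and sorting on the custom field no longer scans the whole table
        key(SalesRegionCode; "Sales Region Code") { }
    }
}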
Example: Your BC extension fails when trying to send or receive data from external systems.
Root causes typically include:
The debugging approach that works:
procedure CallExternalAPI(Endpoint: Text; Payload: Text): Text
var
    Client: HttpClient;
    Content: HttpContent;
    Request: HttpRequestMessage;
    Response: HttpResponseMessage;
    ResponseText: Text;
begin
    LogMessage('API Call', StrSubstNo('Calling endpoint: %1', Endpoint));
    LogMessage('Request', Payload);

    // Build and send the actual call
    Request.Method := 'POST';
    Request.SetRequestUri(Endpoint);
    Content.WriteFrom(Payload);
    Request.Content := Content;

    if not Client.Send(Request, Response) then begin
        LogMessage('Error', 'The HTTP request could not be sent');
        exit('');
    end;

    Response.Content.ReadAs(ResponseText);
    LogMessage('Response', ResponseText);
    exit(ResponseText);
end;
procedure CallWithRetry(Endpoint: Text): Boolean
var
Attempt: Integer;
MaxAttempts: Integer;
Success: Boolean;
begin
MaxAttempts := 3;
for Attempt := 1 to MaxAttempts do begin
LogMessage('Retry', StrSubstNo('Attempt %1 of %2', Attempt, MaxAttempts));
if TryCallEndpoint(Endpoint) then begin
Success := true;
break;
end;
Sleep(1000 * Attempt); // Back off a little longer after each failed attempt
end;
if not Success then
LogMessage('Error', 'All retry attempts failed');
exit(Success);
end;
// Create a table called "Integration Log" with fields for:
// - Entry No. (Integer, PK)
// - Timestamp (DateTime)
// - Direction (Option: Inbound,Outbound)
// - Endpoint (Text)
// - Request (Blob)
// - Response (Blob)
// - Status (Option: Success,Error)
// - Error Message (Text)
procedure LogIntegrationCall(Direction: Option Inbound,Outbound; Endpoint: Text; Request: Text; Response: Text; Success: Boolean; ErrorMessage: Text)
var
    IntegrationLog: Record "Integration Log";
begin
    IntegrationLog.Init();
    IntegrationLog."Entry No." := 0; // AutoIncrement
    IntegrationLog.Timestamp := CurrentDateTime;
    IntegrationLog.Direction := Direction;
    IntegrationLog.Endpoint := CopyStr(Endpoint, 1, 250);
    if Success then
        IntegrationLog.Status := IntegrationLog.Status::Success
    else
        IntegrationLog.Status := IntegrationLog.Status::Error;
    IntegrationLog."Error Message" := CopyStr(ErrorMessage, 1, 250);
    // Store full request and response in blob fields
    IntegrationLog.SetRequestContent(Request);
    IntegrationLog.SetResponseContent(Response);
    IntegrationLog.Insert();
end;
Real example: A client's BC system stopped communicating with their e-commerce platform every few days. Our integration logging revealed that the API token was expiring exactly every 72 hours. We added automatic token refresh, and the problem disappeared.
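The refresh itself was a small change. A minimal sketch, assuming a hypothetical "Integration Setup" record that caches the token and its expiry, and a RefreshToken helper that calls the provider's token endpoint:
procedure GetValidToken(): Text
var
    IntegrationSetup: Record "Integration Setup"; // hypothetical setup table caching the token and its expiry
    SafetyMargin: Duration;
begin
    SafetyMargin := 5 * 60 * 1000; // refresh five minutes before the documented 72-hour expiry
    IntegrationSetup.Get();
    if (IntegrationSetup."Token Expires At" = 0DT) or
       (CurrentDateTime > IntegrationSetup."Token Expires At" - SafetyMargin)
    then begin
        RefreshToken(IntegrationSetup); // hypothetical helper that calls the provider's token endpoint
        IntegrationSetup.Get();
    end;
    exit(IntegrationSetup."Access Token");
end;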
Example: Your extension worked perfectly until the latest Business Central update.
Root causes typically include:
The debugging approach that works:
Check Microsoft's compatibility list first:
Compare application behavior in old vs. new:
procedure CheckVersionCompatibility()
var
AppInfo: ModuleInfo;
begin
NavApp.GetCurrentModuleInfo(AppInfo);
Message('Running on version: %1', AppInfo.AppVersion);
// Version-specific code
if AppInfo.AppVersion >= Version.Create(22, 0, 0, 0) then
NewVersionBehavior()
else
OldVersionBehavior();
end;
// In AL extension settings, enable "Generate XML Documentation"
// Now debug by checking the symbol references
procedure DebugSymbolChanges()
var
    Customer: Record Customer;
    RecRef: RecordRef;
begin
    RecRef.GetTable(Customer);
    // Check if the field still exists in the current version
    if HasField(RecRef, 'My Custom Field') then
        Message('Field exists')
    else
        Message('Field does not exist - API changed');
end;

local procedure HasField(RecRef: RecordRef; FieldName: Text): Boolean
var
    FldRef: FieldRef;
    i: Integer;
begin
    // RecordRef exposes fields by index, so walk them and compare names
    for i := 1 to RecRef.FieldCount do begin
        FldRef := RecRef.FieldIndex(i);
        if FldRef.Name = FieldName then
            exit(true);
    end;
    exit(false);
end;
Real example: After upgrading to BC 20, a client's custom sales tax calculation extension stopped working. Using version-specific debugging, we discovered that Microsoft had changed the event firing order in the sales post routine. Simply by subscribing to a different event, we resolved the issue.
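For reference, the replacement subscription looked roughly like the sketch below. The event name and parameters are placeholders to verify against the base application for your BC version, and RecalculateCustomSalesTax is a hypothetical helper:
[EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header")
begin
    // Runs after standard posting has finished, so it no longer depends on the changed firing order
    RecalculateCustomSalesTax(SalesHeader);
end;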
After trying dozens of approaches, these five tools have proven most valuable for BC debugging:
The debugger's Watch panel, a built-in VS Code view, is essential for variable inspection:
Pro tip: Add complex expressions to the Watch panel, not just variables:
Customer.Count()
StrLen(ErrorText) > 0
SalesHeader."Amount Including VAT" - SalesHeader."Amount"
For intermittent issues, snapshot debugging is invaluable:
"launch": {
"version": "0.2.0",
"configurations": [
{
"name": "Snapshot Debugging",
"request": "snapshotDebug",
"type": "al"
}
]
}
Real example: We had a posting routine that failed randomly about once every 50 executions. Snapshot debugging revealed that it only happened when a specific combination of discount types and payment terms was used simultaneously.
I add a centralized error-handler codeunit like this to nearly every BC project:
codeunit 50101 "Error Handler"
{
procedure LogError(SourceProcedure: Text; ErrorText: Text; AdditionalContext: Text)
var
ErrorLog: Record "Error Log";
begin
ErrorLog.Init();
ErrorLog."Entry No." := 0; // Auto-increment
ErrorLog."User ID" := UserId;
ErrorLog."Date Time" := CurrentDateTime;
ErrorLog."Source Procedure" := CopyStr(SourceProcedure, 1, 250);
ErrorLog."Error Message" := CopyStr(ErrorText, 1, 250);
ErrorLog."Additional Context" := CopyStr(AdditionalContext, 1, 250);
ErrorLog.Insert();
// Optional: Send alert for critical errors
if ErrorText.Contains('CRITICAL') then
SendErrorAlert(ErrorLog);
end;
local procedure SendErrorAlert(ErrorLog: Record "Error Log")
var
EmailMessage: Codeunit "Email Message";
Email: Codeunit Email;
begin
EmailMessage.Create('admin@yourcompany.com', 'Critical Error in BC',
StrSubstNo('Error in %1: %2', ErrorLog."Source Procedure", ErrorLog."Error Message"));
Email.Send(EmailMessage);
end;
}
How to use it:
procedure RiskyOperation()
var
    ErrorHandler: Codeunit "Error Handler";
begin
    // AL has no try/except; wrap the risky code in a [TryFunction] instead
    ClearLastError();
    if not TryRiskyOperation() then begin
        ErrorHandler.LogError('RiskyOperation', GetLastErrorText(), GetLastErrorCallStack());
        Error('Operation failed. The error has been logged.');
    end;
end;

[TryFunction]
local procedure TryRiskyOperation()
begin
    // Your code here
end;
A lock-analysis routine helps track down mysterious locking issues:
procedure AnalyzeTableLocks()
var
    SQLLocks: Record "SQL Locks"; // your own snapshot table of active locks
    TempBuffer: Record "Name/Value Buffer" temporary;
    LockCount: Integer;
begin
    GetActiveSQLLocks(SQLLocks); // helper that fills the lock snapshot
    if not SQLLocks.FindSet() then begin
        Message('No active locks found');
        exit;
    end;
    repeat
        // Group and count locks per table
        if not TempBuffer.Get(SQLLocks."Table ID") then begin
            TempBuffer.ID := SQLLocks."Table ID";
            TempBuffer.Name := GetTableName(SQLLocks."Table ID"); // helper that resolves the table name
            TempBuffer.Value := '1';
            TempBuffer.Insert();
        end else begin
            Evaluate(LockCount, TempBuffer.Value);
            LockCount += 1;
            TempBuffer.Value := Format(LockCount);
            TempBuffer.Modify();
        end;
    until SQLLocks.Next() = 0;
    // Display results
    if TempBuffer.FindSet() then
        repeat
            Message('Table %1: %2 locks', TempBuffer.Name, TempBuffer.Value);
        until TempBuffer.Next() = 0;
end;
For production debugging, nothing beats telemetry:
codeunit 50102 "Telemetry Manager"
{
procedure TrackEvent(EventName: Text; Properties: Dictionary of [Text, Text]; Measurements: Dictionary of [Text, Decimal])
var
FeatureTelemetry: Codeunit "Feature Telemetry";
begin
FeatureTelemetry.LogUsage('CustomExtension', EventName, 'Custom tracking', Properties, Measurements);
end;
procedure TrackOperation(OperationName: Text)
var
Properties: Dictionary of [Text, Text];
Measurements: Dictionary of [Text, Decimal];
StartTime: DateTime;
begin
StartTime := CurrentDateTime;
Properties.Add('user', UserId);
Properties.Add('company', CompanyName);
// Your operation code here
Measurements.Add('duration_ms', (CurrentDateTime - StartTime) / 1000);
TrackEvent(OperationName, Properties, Measurements);
end;
}
Real example: By adding telemetry to a client's sales order processing extension, we discovered that 73% of performance issues occurred during a specific 30-minute window when both the warehouse scanning system and month-end reports were running simultaneously.
When tackling a new BC bug, I follow this systematic approach:
First, collect critical context:
Pro tip: Create a standardized "bug report template" for users that collects this information upfront.
Narrow down where the issue occurs:
Technique that works: The "binary search" debugging method—disable half your extensions, see if the problem persists, then keep narrowing down.
Don't just add breakpoints everywhere:
Example: For a posting routine problem, add breakpoints:
Once you've narrowed down the area:
Code example:
procedure DiagnosePostingIssue(DocNo: Code[20])
var
SalesHeader: Record "Sales Header";
Customer: Record Customer;
begin
if not SalesHeader.Get(SalesHeader."Document Type"::Order, DocNo) then begin
LogMessage('Error', 'Sales header not found');
exit;
end;
LogMessage('Info', StrSubstNo('Processing order %1 for customer %2',
DocNo, SalesHeader."Sell-to Customer No."));
if not Customer.Get(SalesHeader."Sell-to Customer No.") then begin
LogMessage('Error', 'Customer not found');
exit;
end;
LogMessage('Info', StrSubstNo('Customer %1 has credit limit %2',
Customer."No.", Customer."Credit Limit (LCY)"));
// Continue with more diagnostic steps
end;
Once you've identified the root cause:
Real example: After fixing a rounding issue in sales line discounts, we searched for all similar calculation patterns and found (and preemptively fixed) the same issue in purchase lines and job lines.
These advanced approaches have saved me countless hours when dealing with particularly nasty BC bugs:
For hard-to-diagnose issues, dynamic diagnosis code can be invaluable:
codeunit 50103 "Runtime Diagnostic"
{
procedure InjectDiagnostics()
begin
// This gets called from existing code
if not IsActive then
exit;
case DiagnosticMode of
DiagnosticMode::PerformanceMonitor:
TrackPerformance();
DiagnosticMode::DataValidator:
ValidateDataStructures();
DiagnosticMode::ErrorLogger:
SetupErrorCapture();
end;
end;
procedure EnableDiagnostics(Mode: Option)
begin
IsActive := true;
DiagnosticMode := Mode;
end;
var
IsActive: Boolean;
DiagnosticMode: Option PerformanceMonitor,DataValidator,ErrorLogger;
}
How to use it: Add calls to InjectDiagnostics() at strategic points in your code. When needed, enable specific diagnostic modes to gather information without changing your main codebase.
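A sketch of what that looks like from the calling side; PostCustomDocument is a hypothetical procedure standing in for your own code:
procedure PostCustomDocument(DocNo: Code[20])
var
    RuntimeDiagnostic: Codeunit "Runtime Diagnostic";
begin
    // No-op unless a diagnostic mode has been enabled for this session
    RuntimeDiagnostic.InjectDiagnostics();

    // ...existing posting logic...
end;
An administration page action (or a hidden setup field) can then call EnableDiagnostics with the mode you want to switch on.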
When you need to understand which events fire and in what order:
codeunit 50104 "Event Observer"
{
// Subscribe to ALL interesting events
[EventSubscriber(ObjectType::Table, Database::Customer, 'OnAfterModifyEvent', '', false, false)]
local procedure OnAfterModifyCustomer(var Rec: Record Customer)
begin
if not IsObserving then
exit;
RecordEvent('OnAfterModifyCustomer', Format(Rec."No."));
end;
// Add more event subscriptions as needed
procedure StartObserving()
begin
IsObserving := true;
Clear(EventLog);
end;
procedure StopObserving(): Text
begin
IsObserving := false;
exit(GetEventLog());
end;
local procedure RecordEvent(EventName: Text; Context: Text)
begin
EventLog += StrSubstNo('%1: %2 - %3\', Format(Time), EventName, Context);
end;
local procedure GetEventLog(): Text
begin
exit(EventLog);
end;
var
IsObserving: Boolean;
EventLog: Text;
}
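Here's a minimal sketch of wrapping an operation with the observer to capture its event sequence; the posting call is only an illustration:
procedure TraceSalesPostingEvents(var SalesHeader: Record "Sales Header")
var
    EventObserver: Codeunit "Event Observer";
begin
    // The codeunit is SingleInstance, so the subscribers share this Start/Stop state
    EventObserver.StartObserving();
    Codeunit.Run(Codeunit::"Sales-Post", SalesHeader);
    Message(EventObserver.StopObserving());
end;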
Real example: We used this to diagnose a complex posting issue where events were firing in an unexpected order, causing validation to fail intermittently.
When you need to understand the full call stack at a specific point:
procedure CaptureCallStack(Reason: Text): Text
begin
    // Force a controlled error inside a TryFunction to capture the call stack
    ClearLastError();
    if not ThrowForCallStack(Reason) then
        exit(GetLastErrorCallStack());
    exit('');
end;

[TryFunction]
local procedure ThrowForCallStack(Reason: Text)
begin
    Error('CALLSTACK_CAPTURE: %1', Reason);
end;
How to use it: Call this function at any point where you need to understand the complete call hierarchy that led to that point in the code.
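For example, to find out who keeps calling a validation routine with bad data, you might log the captured stack through the Error Handler shown earlier; the discount check itself is only an illustration:
procedure ValidateLineDiscount(SalesLine: Record "Sales Line")
var
    ErrorHandler: Codeunit "Error Handler";
begin
    if SalesLine."Line Discount %" > 100 then
        // Record where the invalid value came from without interrupting the user
        ErrorHandler.LogError('ValidateLineDiscount',
            StrSubstNo('Invalid discount on %1', SalesLine."Document No."),
            CaptureCallStack('Invalid line discount'));
end;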
After mentoring junior BC developers, I've noticed these debugging mistakes repeatedly:
What happens: Developers spend hours debugging code that isn't actually causing the issue.
Better approach: Start with evidence, not assumptions. Use logging or breakpoints to verify where the problem actually occurs before diving deep.
What happens: When multiple changes are made simultaneously, you can't tell which one fixed the issue.
Better approach: Make one change at a time, test, and document the results. This methodical approach builds your debugging intuition over time.
What happens: Developers focus only on their custom code, forgetting that BC has complex event sequences.
Better approach: Remember that your code exists within BC's event framework. Use the event recorder to understand the complete execution flow.
What happens: Changes meant for diagnosis impact real users and data.
Better approach: Replicate the issue in a development environment first. Only use safe diagnostic techniques in production.
After years of wrestling with BC bugs, I've learned that effective debugging isn't just about technical skills—it's about developing the right mindset:
The best BC developers aren't those who never encounter bugs—they're the ones who can efficiently diagnose and resolve them when they inevitably appear. With the approaches in this guide, you'll join their ranks.