How Retail Businesses on Microsoft Dynamics NAV Can Unlock Smarter Inventory Management with Data Engineering


Discover how retailers running Microsoft Dynamics NAV can build a modern data engineering layer to unlock real-time stock visibility, demand forecasting, and automated reorder signals — without disrupting live NAV operations. A practical guide from Lucent Innovation covering SQL extraction, the medallion architecture, and predictive inventory.

18 minute read · March 13, 2026 · Ashish Kasama

The Inventory Problem Hiding in Plain Sight

For most retail businesses running on Microsoft Dynamics NAV — now rebranded as Business Central — the inventory problem isn't a shortage of data. NAV faithfully records every purchase order, goods receipt, sales shipment, inter-location transfer, and stock adjustment. Over five years of operation, that's potentially hundreds of millions of ledger entries sitting in a SQL Server database that most businesses only ever query through NAV's own built-in reports.

The result is a painful paradox: data-rich but insight-poor. Buyers make replenishment decisions from spreadsheets. Warehouse managers rely on gut feel for safety stock levels. Finance teams reconcile inventory valuations manually each month-end. And when a stockout happens, the post-mortem almost always reveals the signal was there — buried in the data — weeks before the shelves went empty.

📊 The cost is real
Industry research consistently shows that poor inventory management costs retailers 3–8% of annual revenue — split roughly equally between lost sales from stockouts and carrying costs from overstock. For a £50M retailer, that's £1.5M–£4M of value sitting on the table every year.

The answer is a modern data engineering layer that sits alongside NAV, continuously extracts inventory data, transforms it into analytics-ready structures, and delivers real-time visibility and predictive intelligence — without touching or disrupting NAV itself.

Why NAV Data Is So Hard to Work With

Anyone who has tried to query a NAV database directly will recognise the frustration. Microsoft Dynamics NAV's data model is powerful but deliberately abstract — optimised for transactional processing, not analytical queries. Several characteristics make it particularly challenging for data engineers.

Highly normalised, non-intuitive schema

The core inventory story in NAV is told across dozens of related tables: Item, Item Ledger Entry, Value Entry, Purchase Header/Line, Sales Header/Line, Transfer Header/Line, and more. There is no single "current stock" table. To know how many units of SKU-12345 are in Warehouse B right now, you must sum Item Ledger Entry rows going back to the beginning of time — or rely on the Item table's cached quantity fields, which can lag in certain configurations.
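To make this concrete, here is a minimal pandas sketch of deriving on-hand stock by summing movements. The sample data is illustrative, not real NAV output; column names mirror the NAV fields discussed above.

```python
# Illustrative sketch: NAV has no "current stock" table. On-hand quantity
# is the sum of every Item Ledger Entry movement since day one.
import pandas as pd

ledger = pd.DataFrame({
    "item_no":       ["SKU-12345"] * 4,
    "location_code": ["WH-B", "WH-B", "WH-B", "WH-A"],
    "entry_type":    ["Purchase", "Sale", "Transfer", "Purchase"],
    "quantity":      [100, -30, -20, 50],   # issues are negative in NAV
})

on_hand = (
    ledger.groupby(["item_no", "location_code"])["quantity"]
          .sum()
          .reset_index(name="qty_on_hand")
)
# SKU-12345 at WH-B: 100 - 30 - 20 = 50 units on hand
```

The same logic, scaled to hundreds of millions of rows, is what the Silver and Gold layers below precompute so that no one has to re-sum history at query time.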

Cryptic field names and company-prefixed tables

NAV field names are frequently abbreviated — No_ instead of Number, Qty__on_Hand, Unit_Cost_LCY. Every table name includes the company name as a prefix. A standard deployment for "Retail Ltd" stores the Item table as [dbo].[RETAIL LTD$Item]. Multi-company deployments multiply this complexity significantly.
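A small helper illustrates the parameterisation this forces on every query. This is a sketch; the schema name is assumed to be dbo, the SQL Server default for NAV deployments.

```python
# Sketch: build a fully qualified, bracketed NAV table name. Every NAV
# table is prefixed with the company name, so generic SQL must
# parameterise it rather than hard-code it.
def nav_table(company: str, table: str, schema: str = "dbo") -> str:
    """Return a bracketed SQL Server identifier, e.g. [dbo].[RETAIL LTD$Item]."""
    return f"[{schema}].[{company}${table}]"

nav_table("RETAIL LTD", "Item")                # -> [dbo].[RETAIL LTD$Item]
nav_table("RETAIL LTD", "Item Ledger Entry")   # -> [dbo].[RETAIL LTD$Item Ledger Entry]
```

In multi-company deployments, the company argument typically comes from a config registry so one extraction script serves every company.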

No native change data feed

NAV does not expose a change stream natively. Without additional tooling, there is no reliable way to ask "what changed since 3pm?" without full table scans or custom timestamp-based polling — both of which create performance risk on the same SQL Server instance serving live users.

⚠️ Never run heavy analytics queries against your live NAV database
Long-running analytical queries on the primary NAV SQL Server can lock tables, slow transaction processing, and in worst cases cause timeouts for users posting documents. Always extract to a separate analytical environment first — either a read replica or a dedicated landing store.

The Architecture: NAV to a Modern Data Platform

The solution is an extraction layer that reads from NAV non-invasively and delivers data to a modern analytical platform — Delta Lake on Databricks or Azure Synapse. The architecture follows the medallion model: Bronze (raw), Silver (clean), Gold (aggregated) — with NAV-specific extraction patterns feeding the ingestion layer.

NAV / BC (Source: SQL · OData · API)
  → Ingestion (Extract: ADF · Python · CDC)
  → Bronze (Layer 1: raw, replicated)
  → Silver (Layer 2: clean, typed, joined)
  → Gold (Layer 3: KPIs, forecasting)

Extracting Data from NAV: Four Proven Methods

There is no single "correct" way to extract data from NAV. The right method depends on your version, whether you are on-premises or cloud, your latency requirements, and your team's technical capabilities. Here are the four approaches we use at Lucent Innovation, ranked by recommendation for most retail deployments.

Method 1: SQL Server read replica (recommended for on-prem NAV)

For on-premises NAV deployments, setting up an Always On Availability Group read replica is the cleanest extraction path. Your analytics workloads run against the secondary — zero impact on the primary instance serving NAV users. Watermark-based incremental extraction then runs on the key tables.

# Python — Watermark-based incremental extraction from NAV SQL Server
import pyodbc
import pandas as pd
from datetime import datetime

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=nav-replica.internal,1433;"
    "DATABASE=NAV_PROD;"
    "Authentication=ActiveDirectoryServicePrincipal;"
)

company   = "RETAIL LTD"
watermark = get_watermark("item_ledger_entry")  # from the ETL control table

query = f"""
SELECT
    [Entry No_]              AS entry_no,
    [Item No_]               AS item_no,
    [Posting Date]           AS posting_date,
    [Entry Type]             AS entry_type,
    [Location Code]          AS location_code,
    [Quantity]               AS quantity,
    [Remaining Quantity]     AS remaining_qty,
    [Cost Amount (Actual)]   AS cost_amount,
    [Document No_]           AS document_no,
    GETDATE()                AS _extracted_at
FROM [dbo].[{company}$Item Ledger Entry]
WHERE [Entry No_] > {watermark}
ORDER BY [Entry No_]
"""

df = pd.read_sql(query, conn)

# Guard against empty batches: max() on an empty frame is NaN and would
# corrupt the watermark.
if not df.empty:
    df.to_parquet(f"abfss://nav-raw/item_ledger_entry/{datetime.utcnow():%Y/%m/%d}/batch.parquet")
    update_watermark("item_ledger_entry", int(df["entry_no"].max()))

Use Entry No_ as your watermark for Item Ledger Entry — not Posting Date
Posting Date can be backdated in NAV — users can post to a prior accounting period. Watermarking on the auto-incrementing Entry No_ integer guarantees you never miss a late-posted entry, regardless of which date it carries.
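The get_watermark and update_watermark helpers referenced in the extraction script are backed by a small control table. A minimal sketch follows, shown with SQLite for portability; in production this table would live in the analytical SQL Server or a Delta table.

```python
# Sketch: ETL watermark control table. One row per source table, holding
# the highest Entry No_ successfully extracted so far.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS etl_watermarks (
        table_name    TEXT PRIMARY KEY,
        last_entry_no INTEGER NOT NULL DEFAULT 0
    )
""")

def get_watermark(table_name: str) -> int:
    """Return the last extracted Entry No_, or 0 for a first-ever run."""
    row = conn.execute(
        "SELECT last_entry_no FROM etl_watermarks WHERE table_name = ?",
        (table_name,),
    ).fetchone()
    return row[0] if row else 0

def update_watermark(table_name: str, entry_no: int) -> None:
    """Upsert the watermark after a batch lands successfully."""
    conn.execute(
        "INSERT INTO etl_watermarks (table_name, last_entry_no) VALUES (?, ?) "
        "ON CONFLICT(table_name) DO UPDATE SET last_entry_no = excluded.last_entry_no",
        (table_name, entry_no),
    )
    conn.commit()
```

Updating the watermark only after the Parquet write succeeds is what makes the extraction safely re-runnable: a failed batch is simply picked up again on the next run.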

Method 2: SQL Server Change Data Capture (CDC)

For near-real-time extraction, SQL Server's built-in CDC feature captures row-level changes (inserts, updates, deletes) on designated tables into a change log you consume continuously. Combined with Debezium on a small VM, you get a Kafka topic per NAV table, streaming every inventory change in under 30 seconds.

-- T-SQL — Enable CDC on NAV Item Ledger Entry
USE NAV_PROD;
GO

-- Enable CDC at database level (one-time)
EXEC sys.sp_cdc_enable_db;
GO

-- Enable CDC on the key inventory table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'RETAIL LTD$Item Ledger Entry',
    @role_name     = NULL,
    @supports_net_changes = 1;
GO

Method 3: NAV OData Web Services

NAV exposes standard pages as OData REST endpoints — no direct database access required. Ideal when SQL access is unavailable (hosted environments) or for Business Central on-premises. The trade-off is throughput: OData is significantly slower than direct SQL for large-volume extraction and is best suited to low-frequency, low-volume objects.

Method 4: Business Central API v2.0 (cloud BC only)

For Business Central cloud, the v2.0 REST API is the supported extraction path. It uses OAuth 2.0 via Azure AD, supports $filter and $deltatoken for incremental sync, and requires no additional configuration. Coverage is limited to standard BC entities — custom extensions require custom API pages built by your BC partner.

| Method | Best For | Latency | Volume | Complexity |
|---|---|---|---|---|
| SQL Read Replica | On-prem NAV, any scale | ~15 min batch | Unlimited | Medium |
| SQL CDC + Debezium | Near real-time on-prem | < 30 seconds | Unlimited | High |
| OData Web Services | Hosted / no SQL access | ~30 min batch | Medium | Low |
| BC API v2.0 | Business Central cloud | ~15 min batch | Large | Low |

Building the Inventory Data Model

The NAV schema contains everything you need to build a comprehensive inventory data model — but it requires careful assembly. Here are the key NAV tables and how they map to the Silver analytical layer.

| NAV Table | What It Contains | Silver Target |
|---|---|---|
| [Company$Item Ledger Entry] | Every inventory movement ever: purchases, sales, adjustments, transfers, consumption | silver.stock_movements |
| [Company$Item] | Item master: description, unit of measure, costing method, reorder point, safety stock | silver.item_master |
| [Company$Value Entry] | Cost layer for each item ledger entry; links cost amounts to stock movements | silver.inventory_valuation |
| [Company$Purchase Line] | Open and received purchase order lines (inbound stock in transit) | silver.purchase_orders |
| [Company$Sales Line] | Open and shipped sales order lines (committed and forecasted demand) | silver.sales_orders |
| [Company$Transfer Line] | Inter-location stock transfers in progress | silver.stock_movements |
| [Company$Location] | Warehouse and store location master | silver.location_master |
| [Company$Item Category] | Category hierarchy for product classification | silver.item_master (joined) |

The Gold layer: current inventory position

The most important Gold table — and the one NAV's native reporting gets wrong most often — is the current inventory position: how many units of each SKU are at each location right now, adjusted for committed sales orders and inbound purchase orders.

-- SQL — Gold: current inventory position with availability calculation
-- Each fact is pre-aggregated before joining: joining movements, sales
-- lines, and purchase lines in a single pass would fan out the rows and
-- inflate every SUM.
CREATE OR REPLACE TABLE gold.inventory_position AS
WITH stock AS (
    -- Physical stock on hand (sum of all Item Ledger Entry movements)
    SELECT item_no, location_code, SUM(quantity) AS qty_on_hand
    FROM silver.stock_movements
    GROUP BY item_no, location_code
),
committed AS (
    -- Stock committed to open sales orders not yet shipped
    SELECT item_no, location_code, SUM(outstanding_qty) AS qty_committed
    FROM silver.sales_orders
    WHERE status IN ('Open', 'Released')
    GROUP BY item_no, location_code
),
inbound AS (
    -- Stock on open purchase orders not yet received
    SELECT item_no, location_code, SUM(outstanding_qty) AS qty_on_order
    FROM silver.purchase_orders
    WHERE status IN ('Open', 'Released')
    GROUP BY item_no, location_code
)
SELECT
    im.item_no,
    im.description,
    im.category_code,
    im.vendor_no,
    im.reorder_point,
    im.safety_stock_qty,
    im.lead_time_days,
    lm.location_code,
    lm.location_name,

    COALESCE(s.qty_on_hand, 0)                AS qty_on_hand,
    COALESCE(c.qty_committed, 0)              AS qty_committed,
    COALESCE(i.qty_on_order, 0)               AS qty_on_order,

    -- Available to promise = on_hand - committed + on_order
    COALESCE(s.qty_on_hand, 0)
      - COALESCE(c.qty_committed, 0)
      + COALESCE(i.qty_on_order, 0)           AS qty_available,

    -- Days of cover based on 90-day average daily demand
    CASE WHEN d.avg_daily_demand > 0
         THEN COALESCE(s.qty_on_hand, 0) / d.avg_daily_demand
         ELSE NULL
    END                                       AS days_cover,

    -- Reorder flag
    CASE WHEN COALESCE(s.qty_on_hand, 0) <= im.reorder_point
         THEN TRUE ELSE FALSE
    END                                       AS reorder_required,

    current_timestamp()                       AS _refreshed_at

FROM silver.item_master im
CROSS JOIN silver.location_master lm
LEFT JOIN stock     s ON s.item_no = im.item_no AND s.location_code = lm.location_code
LEFT JOIN committed c ON c.item_no = im.item_no AND c.location_code = lm.location_code
LEFT JOIN inbound   i ON i.item_no = im.item_no AND i.location_code = lm.location_code
LEFT JOIN gold.avg_daily_demand d
       ON d.item_no = im.item_no AND d.location_code = lm.location_code;

Real-Time Stock Visibility Across Locations

Multi-location retail is where NAV's built-in reporting typically breaks down entirely. Understanding stock positions across five stores, a central warehouse, and a 3PL simultaneously — refreshed in near real-time — requires the data engineering layer described above.

  • < 5 min: stock position refresh with CDC + Auto Loader
  • 100%: location coverage in a single unified view
  • 0: impact on live NAV performance (read replica)
  • 3 years: of movement history retained for forecasting

Once the pipeline is running, inventory questions that previously required a phone call to a warehouse manager or a manual spreadsheet reconciliation become self-service Power BI queries refreshing every few minutes:

  • What is the exact on-hand quantity of SKU-12345 at each location right now?
  • Which SKUs are below their reorder point across any location?
  • How many units are still in transit from the supplier on open purchase orders?
  • Which locations are overstocked on slow-moving lines that could be transferred?
  • What is the total inventory value by category, by location, by supplier?

Predictive Inventory: From Reactive to Proactive

Real-time visibility solves the what is happening now problem. Predictive inventory solves the harder problem: what will happen in the next 4–12 weeks, and what should we order today? With 2–3 years of historical Item Ledger Entry data in the Silver layer, you have everything needed to train demand forecasting models at the SKU-location level.

# Python — Feature engineering for SKU-level demand forecasting
# Built on top of silver.stock_movements in Databricks

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Daily demand series per SKU per location (sales movements only)
daily_demand = (
    spark.table("silver.stock_movements")
    .filter(F.col("entry_type") == 1)        # 1 = Sale
    .filter(F.col("quantity") < 0)           # Sales reduce stock
    .withColumn("demand", F.abs("quantity"))
    .groupBy("item_no", "location_code", "posting_date")
    .agg(F.sum("demand").alias("units_sold"))
)

# Rolling demand features for ML model input
w7  = Window.partitionBy("item_no","location_code").orderBy("posting_date").rowsBetween(-6,0)
w28 = Window.partitionBy("item_no","location_code").orderBy("posting_date").rowsBetween(-27,0)
w90 = Window.partitionBy("item_no","location_code").orderBy("posting_date").rowsBetween(-89,0)

features = (
    daily_demand
    .withColumn("avg_demand_7d",  F.avg("units_sold").over(w7))
    .withColumn("avg_demand_28d", F.avg("units_sold").over(w28))
    .withColumn("avg_demand_90d", F.avg("units_sold").over(w90))
    .withColumn("dow",            F.dayofweek("posting_date"))
    .withColumn("week_of_year",   F.weekofyear("posting_date"))
    .withColumn("demand_trend",   F.col("avg_demand_7d") - F.col("avg_demand_28d"))
)

features.write.mode("overwrite").saveAsTable("gold.demand_forecast_features")

The output feeds a forecasting model — LightGBM or Prophet depending on series length and seasonality — producing a 12-week forward demand forecast at the SKU-location level. Combine that with current on-hand quantity, supplier lead times from the item master, and safety stock rules to generate automated reorder recommendations that buyers can review in Power BI, or write directly back to NAV as purchase requisitions via the API.
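One common reorder policy, order up to lead-time demand plus safety stock, can be sketched as a pure function. This is an illustrative policy choice and simplified field set, not the only way to combine these inputs.

```python
# Sketch: reorder recommendation from forecast + lead time + safety stock.
# Policy: order enough that projected stock (on hand + inbound) covers
# forecast demand over the supplier lead time, plus the safety buffer.
def reorder_qty(qty_on_hand: float, qty_on_order: float,
                avg_daily_forecast: float, lead_time_days: int,
                safety_stock: float) -> float:
    """Units to order today; 0.0 when projected stock already covers the target."""
    projected = qty_on_hand + qty_on_order
    target = avg_daily_forecast * lead_time_days + safety_stock
    return max(0.0, target - projected)

# 40 on hand, nothing inbound, forecast 5/day, 14-day lead time, 20 safety:
# target = 5 * 14 + 20 = 90, so order 90 - 40 = 50 units
reorder_qty(40, 0, 5, 14, 20)   # -> 50.0
```

In the pipeline, this runs per SKU-location over gold.inventory_position joined to the forecast output, and rows with a positive result become the buyer's review queue.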

🎯 Start with ABC-XYZ segmentation before forecasting
Not every SKU needs sophisticated ML. Segment your catalogue first: A-items (top 20% by revenue) with stable demand (X) get full forecasting. C-items with erratic demand (Z) are often better served by a simple min/max reorder policy. Focus modelling effort where it creates the most value.
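A minimal pandas sketch of that segmentation follows. The 80%/95% revenue breaks and the coefficient-of-variation cut-offs of 0.5 and 1.0 are common conventions, not fixed rules; tune them to your catalogue.

```python
# Sketch: ABC (revenue contribution) x XYZ (demand variability) segmentation.
import pandas as pd

def abc_xyz(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per SKU with 'revenue' and 'demand_cv' (std/mean of weekly demand)."""
    out = df.sort_values("revenue", ascending=False).copy()
    share = out["revenue"].cumsum() / out["revenue"].sum()
    out["abc"] = pd.cut(share, [0, 0.80, 0.95, 1.0],
                        labels=["A", "B", "C"], include_lowest=True)
    out["xyz"] = pd.cut(out["demand_cv"], [0, 0.5, 1.0, float("inf")],
                        labels=["X", "Y", "Z"], include_lowest=True)
    return out

skus = pd.DataFrame({
    "item_no":   ["S1", "S2", "S3", "S4"],
    "revenue":   [70_000, 20_000, 7_000, 3_000],
    "demand_cv": [0.2, 0.6, 1.4, 0.9],
})
seg = abc_xyz(skus)
# S1 lands in A/X (full forecasting); S3 in C/Z (simple min/max policy)
```

The resulting abc and xyz labels then drive which SKUs get the full forecasting treatment and which stay on a simple reorder rule.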

Key Challenges and How to Solve Them

| Challenge | Why It Happens in NAV | Solution |
|---|---|---|
| Company-prefixed table names | Every NAV table includes the company name, making generic SQL fragile | Parameterise the company name in all extraction scripts; use a config registry for multi-company deployments |
| Backdated postings | NAV allows posting to prior periods; date-based watermarks miss these entries | Watermark on Entry No_ (auto-increment) for all ledger tables, not Posting Date |
| Customised schemas | NAV partners add custom fields and tables that differ per client | Run a schema discovery script on first run; store field mappings in a metadata config table |
| Inventory closing entries | NAV cost adjustment runs create corrective entries that change historical cost values | Re-extract Value Entry for the previous 30 days nightly; use MERGE to update Silver |
| Multi-company, multi-currency | Retail groups run multiple NAV companies with different base currencies | Add company_code and currency_code as first-class columns; normalise to a reporting currency in Silver |
| Large Item Ledger Entry tables | High-volume retailers accumulate 500M+ rows over 10+ years | Partition Bronze by posting_date; Z-order Silver on item_no and location_code for fast filters |
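For the customised-schema challenge, the schema discovery script can start as a single INFORMATION_SCHEMA query. This is a sketch; the filter pattern assumes the standard company-prefix table naming described earlier.

```python
# Sketch: generate a schema discovery query for one NAV company. Running
# this once per environment captures the actual tables and columns
# (including partner customisations) before any mappings are hard-coded.
def discovery_query(company: str) -> str:
    """T-SQL listing every column of every table belonging to one NAV company."""
    return (
        "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE\n"
        "FROM INFORMATION_SCHEMA.COLUMNS\n"
        f"WHERE TABLE_NAME LIKE '{company}$%'\n"
        "ORDER BY TABLE_NAME, ORDINAL_POSITION"
    )

print(discovery_query("RETAIL LTD"))
```

The result set is typically written straight into the metadata config table mentioned above, so the extraction layer reads field mappings rather than assuming them.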

Business Impact: What You Can Achieve

The business case for a NAV data engineering investment is straightforward when you quantify what poor inventory management actually costs. Based on deployments at Lucent Innovation across multiple retail clients on Dynamics NAV, these are the outcomes we consistently see:

  • 30–40% reduction in stockouts through automated reorder signals and demand-driven safety stock levels, replacing static reorder points set months ago and rarely reviewed.
  • 15–25% reduction in inventory carrying costs by identifying slow-moving and obsolete stock early, enabling clearance pricing decisions before items become write-offs.
  • Elimination of monthly inventory reconciliation — a process that typically consumes 2–4 days of finance team effort each period — through continuous automated stock valuation in the data platform.
  • Buyer productivity gains: replenishment decisions that previously took 30 minutes per buyer per day reduced to a 5-minute review of a pre-built reorder recommendation dashboard.
  • Multi-location transfer optimisation: visibility of overstock at one location alongside stockout risk at another enables inter-location transfers that reduce both emergency orders and end-of-season markdowns.

How to Get Started

A production NAV inventory data engineering project typically runs 8–12 weeks, depending on NAV version, schema customisation depth, and how many locations and companies are in scope. At Lucent Innovation, we follow a phased delivery model that gets value into buyers' hands within the first two weeks.

| Phase | Timeline | Deliverables |
|---|---|---|
| Phase 1: Discovery & Connect | Weeks 1–2 | NAV schema discovery, read replica or OData access setup, extract Item + Item Ledger Entry to Bronze, first Power BI stock-on-hand report |
| Phase 2: Core Inventory Model | Weeks 3–5 | Full Silver layer (all 8 core tables), Gold inventory position table, reorder signal dashboard, multi-location view |
| Phase 3: Historical & Valuation | Weeks 6–8 | Full historical backfill (Value Entry, all movements), inventory valuation by period, COGS reporting, slow-mover identification |
| Phase 4: Forecasting & Automation | Weeks 9–12 | Demand forecasting model, reorder recommendation engine, optional NAV write-back for purchase requisitions, buyer alert notifications |

🚀 Quick win in week 2: the stockout risk report
After extracting just Item and Item Ledger Entry, you can build a simple days-of-cover report in Power BI that flags any SKU-location with fewer than 14 days of stock based on trailing 90-day average demand. For most retailers, this alone surfaces 10–20 urgent replenishment actions that were invisible in NAV's native reports.
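The same calculation, sketched in pandas with illustrative numbers:

```python
# Sketch: week-2 days-of-cover report. Flags any SKU-location with fewer
# than 14 days of stock, based on trailing 90-day average daily demand.
# Sample figures are illustrative.
import pandas as pd

stock = pd.DataFrame({
    "item_no":        ["SKU-1", "SKU-2", "SKU-3"],
    "location_code":  ["WH-A", "WH-A", "STORE-1"],
    "qty_on_hand":    [120.0, 35.0, 400.0],
    "units_sold_90d": [900.0, 450.0, 180.0],   # trailing 90-day sales
})

stock["avg_daily_demand"] = stock["units_sold_90d"] / 90
stock["days_cover"] = stock["qty_on_hand"] / stock["avg_daily_demand"]
at_risk = stock[stock["days_cover"] < 14]
# SKU-1: 120 / 10 = 12 days and SKU-2: 35 / 5 = 7 days are both flagged
```

In Power BI the same logic is a measure over the Bronze extract; the point is that two tables and one division already produce an actionable report.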

Whether you're on NAV 2016, NAV 2018, or have recently migrated to Business Central cloud, the architectural patterns described here apply across the board. The extraction method changes; the data model and analytical value do not.

At Lucent Innovation, we specialise in building data platforms alongside ERP systems — turning the transaction data your business already generates into operational intelligence that drives better buying decisions, leaner inventory, and stronger margins. If you're ready to get more from your NAV data, let's talk.
