How Prices Are Assessed
Modeling Approaches
What methods does Intratec use to calculate price assessments?
Price assessments are produced through five distinct modeling approaches, each selected based on the data available and the characteristics of the commodity market.
These assessments are produced through structured, technology-driven pipelines built by market experts, computer scientists, and data scientists. For each commodity, the methodology selects the most appropriate of the five approaches:
- Trade-based: Derived from official government trade statistics. The most common approach — used when trade data is available, timely, and suitable for the targeted specification.
- Formula-based: Calculated from the prices of related commodities and economic variables (such as currency rates and economic indicators) using regression models. Applied when direct trade data is unavailable.
- Freight-based (netback/netforward): Derived by subtracting or adding freight and insurance to existing assessments, producing values that reflect delivery at the loading or destination terminal.
- Manufacturing cost-based: Estimated from the operating costs of producing the commodity, including raw materials, utilities, labor, and overhead. Used for commodities whose market price closely tracks production economics.
- Compiled: Built from public-source price series that are statistically validated, cleaned of anomalies, and averaged over the month.
The approach applied to each commodity is documented in the corresponding assessment guide, which is publicly available.
What is a trade-based price and how is it calculated?
A trade-based price is derived from official government trade statistics after rigorous filtering and homogeneity checks — it represents what the market actually traded, not a surveyed or estimated value.
Trade-based prices start with official trade records from national statistics bureaus and governmental agencies. Before any calculation, the data is filtered to verify price, traded volume, commodity specification, location basis, and counterparty information. Transactions that fall outside the targeted specification are rejected.
Intratec applies clustering to group comparable transactions, which prevents basket effects — distortions that arise when mixing trades of different sizes, origins, or product grades in the same average.
Two output types are available:
- Unit value: Total trade value divided by total traded quantity. No additional statistical treatment is applied. This reflects the raw aggregate of all qualifying trades.
- Transaction price: A refined version that filters by minimum traded quantity and removes statistical outliers using robust outlier-detection methods. This is more representative of the central market but requires sufficient transaction volume.
When official trade statistics are delayed by one to three months — as is common with some government sources — preliminary prices are generated using regression models based on related assessments that carry no lag.
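The clustering step described above can be illustrated with a minimal sketch. The field names and grouping keys below are hypothetical illustrations, not Intratec's actual schema; the point is that trades are grouped by comparable attributes before averaging, so dissimilar transactions never mix in one mean:

```python
from collections import defaultdict

# Hypothetical trade records: (grade, incoterm, quantity_tons, total_value_usd)
trades = [
    ("A", "FOB", 100, 50_000),
    ("A", "FOB", 120, 61_200),
    ("B", "CIF", 100, 58_000),  # different grade/terms: must not mix with grade A
]

def cluster_unit_values(records):
    """Group trades by (grade, incoterm) and compute a unit value per cluster,
    preventing basket effects from averaging dissimilar transactions."""
    clusters = defaultdict(lambda: [0.0, 0.0])  # key -> [total value, total qty]
    for grade, incoterm, qty, value in records:
        clusters[(grade, incoterm)][0] += value
        clusters[(grade, incoterm)][1] += qty
    return {key: value / qty for key, (value, qty) in clusters.items()}

unit_values = cluster_unit_values(trades)
print(unit_values[("A", "FOB")])  # grade-A FOB unit value, ~505/ton
print(unit_values[("B", "CIF")])  # grade-B CIF unit value, 580/ton
```

Without the grouping key, the grade-B trade would be averaged into the grade-A figure and distort it, which is exactly the basket effect the clustering step exists to prevent.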
What is the difference between unit value and transaction price?
Both are derived from trade data, but unit value is a raw aggregate while transaction price is a statistically refined estimate of the central market — the right choice depends on how you intend to use the data.
Unit value is the simplest form: total trade value divided by total traded quantity for all qualifying transactions. It captures the full volume of trade but applies no statistical filtering beyond rejecting trades that fall outside the commodity specification. Because all qualifying trades are included — regardless of size — a few very large or atypical transactions can skew the result.
Transaction price introduces two additional filters: a minimum traded-quantity threshold (which removes small, potentially unrepresentative trades) and outlier-detection using robust statistical methods. The result is a value that better reflects the mainstream of market activity, at the cost of requiring more trade data to compute reliably.
Consider a commodity where most trades are medium-sized but a few very large state purchases occur in a given month: the unit value would be pulled toward those large trades, while the transaction price would strip them out as outliers and report the mid-market level.
Both types are grounded in the same underlying trade statistics — the difference is how much statistical refinement is applied before the final value is reported.
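The contrast between the two output types can be sketched in a few lines. The numbers, the minimum-quantity threshold, and the use of a median-based filter here are illustrative assumptions, not Intratec's actual parameters (the methodology says only that robust outlier-detection methods are used):

```python
import statistics

# Hypothetical monthly trades: (quantity_tons, unit_price_usd_per_ton),
# including one tiny trade and one very large trade at an atypical price
trades = [(100, 500), (120, 505), (90, 498), (5, 700), (2000, 430)]

def unit_value(trades):
    """Total trade value divided by total quantity: a raw aggregate."""
    total_value = sum(q * p for q, p in trades)
    total_qty = sum(q for q, _ in trades)
    return total_value / total_qty

def transaction_price(trades, min_qty=50, z_max=3.0):
    """Filter by minimum quantity, then drop price outliers using a
    robust median/MAD filter; average what remains."""
    prices = [p for q, p in trades if q >= min_qty]
    med = statistics.median(prices)
    mad = statistics.median([abs(p - med) for p in prices])
    kept = [p for p in prices if abs(p - med) <= z_max * 1.4826 * mad]
    return statistics.mean(kept)

print(unit_value(trades))         # ~440: pulled toward the huge 430-priced trade
print(transaction_price(trades))  # 501: reflects the mid-market cluster
```

The gap between the two outputs (~440 versus 501) mirrors the scenario described above: the raw aggregate is dominated by a single large atypical transaction, while the refined estimate reports the mainstream level.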
How does Intratec calculate prices when there's no trade data?
When direct trade data is unavailable, formula-based prices use mathematical relationships between the target commodity and related commodities or economic variables to produce a model-derived assessment.
Formula-based prices are calculated using regression models fitted to the historical values of the target assessment. The inputs can include:
- Prices of related commodities (for example, a derivative priced relative to its feedstock)
- Economic indicators (such as industrial production indexes)
- Currency exchange rates
The regression model is built from historical relationships between the target price and these inputs, then applied to current data to generate the assessment.
This approach allows coverage to extend beyond commodities with direct, current trade records — particularly useful for specialty chemicals, intermediates, or markets where trade statistics are sparse or significantly delayed.
If a commodity is consistently priced at a fixed spread above a well-traded feedstock, the formula-based model captures that relationship and uses it to derive a price whenever the feedstock price is updated, even in months when no direct trade data is available.
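The fixed-spread case above can be sketched as a simple one-variable regression. This is an illustrative toy (made-up numbers, a single regressor, closed-form ordinary least squares), not a description of Intratec's actual model specification:

```python
# Hypothetical history: target commodity consistently priced ~50 above feedstock
history_feedstock = [100.0, 110.0, 120.0, 130.0]
history_target    = [150.0, 160.0, 170.0, 180.0]

def fit_ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

intercept, slope = fit_ols(history_feedstock, history_target)

# Apply the fitted relationship whenever the feedstock price updates,
# even in months with no direct trade data for the target commodity
latest_feedstock = 125.0
assessment = intercept + slope * latest_feedstock
print(assessment)  # 175.0: the model reproduces the fixed-spread relationship
```

In practice more regressors (currency rates, economic indicators) would enter the fit, but the principle is the same: a historical relationship is estimated once, then applied to current input data to generate the assessment.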
The model is reviewed and recalibrated at least quarterly as part of the standard price-series evaluation cycle.
How does Intratec estimate netback or netforward prices?
Netback and netforward prices are freight-adjusted assessments that translate a known price at one location into an equivalent value at another — accounting for the cost of moving the commodity between the two points.
Freight-based assessments are calculated by applying freight and insurance adjustments to existing price assessments:
- Netback: The value returned to the loading (export) terminal. Calculated by subtracting freight and insurance from the price at the destination. This answers the question: "What is this cargo worth at the origin port, after accounting for shipping costs?"
- Netforward: The value at the destination (import) terminal. Calculated by adding freight and insurance to the price at the loading terminal. This answers the question: "What will this cargo cost when it arrives, including shipping?"
These assessments are particularly useful for evaluating the competitiveness of different trade routes, assessing import/export economics, and comparing delivered costs across geographies.
If a commodity is assessed at $500/ton at a European import terminal and freight from a Middle Eastern export terminal is $30/ton, the netback to the Middle Eastern terminal would be approximately $470/ton — the net value the exporter receives after shipping costs.
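The arithmetic of the example above is simple enough to express directly. The function names and the separate insurance parameter are illustrative conveniences, not terms defined by the methodology:

```python
def netback(destination_price, freight, insurance=0.0):
    """Value returned to the loading (export) terminal:
    destination price minus freight and insurance."""
    return destination_price - freight - insurance

def netforward(origin_price, freight, insurance=0.0):
    """Value at the destination (import) terminal:
    loading-terminal price plus freight and insurance."""
    return origin_price + freight + insurance

# The worked example above: $500/ton delivered in Europe, $30/ton freight
print(netback(500.0, 30.0))     # 470.0: net value to the exporting terminal
print(netforward(470.0, 30.0))  # 500.0: the two adjustments are inverses
```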
Where do compiled price series come from?
Compiled price series are sourced from established public price databases and indexes, then statistically processed to produce consistent monthly values — they are not modeled by Intratec from raw trade data.
For some commodities, suitable public price series already exist from recognized sources such as market exchanges, national agencies, or international organizations. In these cases, Intratec builds compiled assessments by:
- Identifying and collecting the relevant public-source series
- Formatting and standardizing the data to Intratec's internal structures
- Statistically validating the series and removing anomalies or outliers
- Averaging the cleaned data over the month to produce a single monthly value
The compiled approach is selected when an existing public series accurately reflects the commodity market and when that series meets quality and continuity standards. It differs from trade-based or formula-based prices in that the primary data is already a price (not a raw trade record or an input variable) — Intratec's contribution is aggregation, validation, and quality control rather than original modeling.
Because compiled series rely on external sources, their continuity depends on the underlying source remaining available. Source changes or discontinuations are handled through the quarterly data-source review process.
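The validate-clean-average steps can be sketched as follows. The input values, the median-based anomaly filter, and the threshold are assumptions for illustration; the methodology states only that series are statistically validated, cleaned of anomalies, and averaged over the month:

```python
import statistics

# Hypothetical daily values from a public source for one month,
# including one anomalous print (250.0)
daily = [100.0, 101.0, 99.5, 100.5, 250.0, 100.2]

def compile_monthly(values, z_max=3.0):
    """Drop anomalies using a robust median/MAD filter, then average
    the cleaned series over the month to one monthly value."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    kept = [v for v in values if abs(v - med) <= z_max * 1.4826 * mad]
    return statistics.mean(kept)

print(compile_monthly(daily))  # ~100.2: the anomalous 250.0 print is excluded
```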
How does Intratec estimate manufacturing cost-based prices?
Manufacturing cost-based prices estimate what it costs to produce a commodity, using operational cost components as a proxy for the market price — useful for commodities whose prices closely track production economics.
The manufacturing cost model combines the following cost components:
- Raw materials: Prices of the primary feedstocks used in production, net of any by-product revenue credits
- Utilities: Costs of electricity, steam, fuel, and water consumed in the process
- Labor: Operating and supervision labor rates
- Maintenance and operating charges: Routine plant maintenance and operational expenses
- Plant overhead, taxes, and insurance: Fixed and semi-fixed facility costs
The model may draw on data from other Intratec products — specifically, raw material prices from Primary Commodity Prices itself, energy costs from Energy Price References, and utility cost benchmarks from Industry Economics Worldwide — ensuring that the cost inputs are consistent with Intratec's broader data ecosystem.
This approach is applied when market prices for a commodity are not directly observable through trade statistics but can be reliably inferred from production economics. It is especially relevant for commodities where production costs set a meaningful floor on market pricing.
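The cost build-up described above reduces to a sum of the listed components, with by-product revenue credited against raw material costs. All figures below are invented for illustration; Intratec's actual inputs come from its own data products, as noted:

```python
# Hypothetical monthly cost components, all in USD per ton of product
raw_materials     = 320.0  # primary feedstock costs
byproduct_credits = 15.0   # by-product revenue, credited against raw materials
utilities         = 45.0   # electricity, steam, fuel, and water
labor             = 30.0   # operating and supervision labor
maintenance       = 25.0   # routine maintenance and operating charges
overhead          = 40.0   # plant overhead, taxes, and insurance

production_cost = (raw_materials - byproduct_credits
                   + utilities + labor + maintenance + overhead)
print(production_cost)  # 445.0 USD/ton: the cost-based proxy for market price
```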
Data Gaps and Normalization
What happens when market data is insufficient for modeling?
When the standard data inputs for a model are insufficient or unsuitable, qualified analysts estimate representative values using alternative factual market information — the assessment is still grounded in observable market signals, not invented.
If gathered data does not meet the quality or quantity thresholds required for any of the five standard modeling approaches, the assessment is estimated by analysts drawing on a range of supplementary market signals, including:
- Other Intratec assessments for related commodities or locations
- Producer price references or export indexes
- Labor cost data
- Completed deal reports
- Spreads and exchange-traded derivatives
- Supply and demand fundamentals
- Any other factor materially contributing to price formation
This fallback is documented in the methodology and does not trigger a data-status change unless the data shortage is prolonged. For prolonged shortages, Intratec may change the data source, alter the commodity specification, or — as a last resort — discontinue the assessment.
All published values carry a data-status indicator that reflects whether an assessment is based on standard modeling, analyst estimation, or other conditions. Consult the help center article "What do the data statuses mean?" for details on how to interpret these indicators.
How are prices normalized across locations and quality grades?
Normalization aligns raw market data to Intratec's standard specification and timestamp before any price is published — it is what makes assessments from different markets directly comparable.
Before modeling, gathered data is normalized along four dimensions:
- Location: Freight adjustments translate prices observed at different delivery points to a common basis, so that trade data from multiple origins and destinations can be combined or compared.
- Quality and grade: Factors, indexes, and reference sources adjust for differences in product specification or purity, so that transactions involving slightly different grades are mapped to the target specification.
- Trade size and delivery terms: Averages and outlier-removal methods handle the fact that different trade sizes or contractual delivery terms can produce systematically different price levels for what is nominally the same commodity.
- Timing: Offsets between related assessment curves are corrected when data from different sources is released on different schedules, ensuring that inputs used together in a model reflect the same market moment.
Published historical and preliminary price data is rounded to three significant figures. Forecast data is rounded to two significant figures.
Review and Reliability
How does Intratec review modeled prices before publishing?
Every modeled price passes through a structured multi-layer review before publication, comparing it against related assessments, adjacent time frames, and independent trade sources — an inconsistency at any layer triggers further investigation before release.
After modeling, analysts review all assessments for inconsistencies that may arise from missing data, mathematical errors, technical issues, or data anomalies. The review framework cross-checks each price against:
- The same commodity reported by independent sources
- The same commodity assessed at different geographic locations
- Exporter and importer trade reports for the same commodity
- More actively traded neighboring specifications (e.g., a closely related grade or delivery term)
- Adjacent time frames (prior months and seasonal patterns)
- Feedstocks and derivatives (upstream and downstream price relationships)
- Different transport types, trade volumes, and shipping routes
If a data-derived value does not reflect typical market behavior after this review, the analyst falls back to the model output rather than publishing the anomalous result. This ensures that a single anomalous data point does not distort the published series.
Final publication includes loading data into the API and website, testing the presentation layer, publishing release notes, and notifying customers by email after the database is updated.
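One layer of the cross-check framework above can be sketched as a simple consistency flag. The tolerance value and reference prices are hypothetical; the actual review applies many such comparisons (independent sources, other locations, feedstocks, adjacent months) rather than a single numeric threshold:

```python
def flag_for_review(assessed, references, tolerance=0.10):
    """Return True if the assessed price deviates from every reference
    by more than `tolerance` (as a fraction of the reference), signalling
    a need for further investigation before release."""
    return all(abs(assessed - ref) / ref > tolerance for ref in references)

# Hypothetical references: independent sources, nearby locations, etc.
references = [500.0, 510.0, 495.0]
print(flag_for_review(502.0, references))  # False: consistent, can proceed
print(flag_for_review(620.0, references))  # True: inconsistent, investigate
```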
How reliable are Intratec's price assessments?
Price assessments are built on official government trade data, processed through multi-layer quality controls, and reviewed by qualified analysts before publication — with full methodology documentation publicly available for independent verification.
Reliability is built into every stage of the process:
Data sourcing: All assessments draw primarily on official trade statistics from national statistics bureaus, governmental agencies, international organizations, and recognized market exchanges. These are public, auditable sources — not proprietary surveys or unverified submissions.
Processing integrity: Automated collection, transformation, and modeling pipelines are built and maintained by market experts, computer scientists, and data scientists. Quality controls validate data at ingestion, flag structural changes in source data, and cross-check manually gathered inputs through multiple independent professionals and algorithms.
Review before publication: Every modeled value is reviewed against related commodities, adjacent time frames, geographic benchmarks, and feedstock/derivative relationships before publication. Values that do not pass review are not published.
Governance and ethics: Analysts are required to confirm annually the absence of financial interests or relationships that could impair objectivity. The methodology emphasizes integrity, transparency, independence, and impartiality.
Methodology transparency: Full methodology documents — including the general methodology guide and seven industry-specific assessment guides — are publicly available for review, enabling customers to understand and audit how each assessment is produced.
Assessments are designed as benchmarks for trend analysis, procurement planning, and investment studies. For highly volatile periods, the methodology notes that sharp intra-month movements may not be fully captured in monthly aggregates — a caveat to keep in mind when using prices in real-time or intra-month contexts.
Read the Methodology Documents