Affilitrak Docs

Manual orders

The Manual Orders page is used to create or replay commission attribution events when you need controlled testing, troubleshooting, or backfill-style verification of commission behavior. It gives you a safe operational workspace to run order attribution logic intentionally, inspect outcomes, and confirm how commissions are calculated without waiting for normal storefront traffic.

This page is especially useful when you want to validate affiliate attribution, test coupon and referral behavior, verify tax and shipping handling, or simulate specific order conditions before relying on production flow assumptions.

Why manual order processing matters

In real storefront flow, orders can vary in data quality, source payload shape, and timing. Manual Orders helps you remove uncertainty by letting you send known inputs into the same attribution and commission pipeline. Instead of guessing why a commission did or did not trigger, you can reproduce the scenario and inspect the result with clear control over inputs like amount, tax, shipping, currency, affiliate selection, and order identifiers.

For operations teams, this shortens debugging cycles. For program managers, it builds confidence that commission behavior is aligned with policy before scaling campaigns.

How the page works conceptually

The page gathers order-like inputs, builds a structured attribution payload, and runs it through the same commission processing flow used by normal order handling paths. That means the result reflects real commission rules, referral matching logic, and settings behavior rather than a disconnected calculation tool.

If the payload can be attributed to an affiliate and commission criteria are met, a commission-linked purchase record is produced. If attribution cannot be resolved, the page still returns a structured result that explains what happened, which is valuable for diagnostics.
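The overall flow — gather inputs, build a payload, run it through the shared pipeline, and always return a structured result — can be sketched as follows. All class and function names here are illustrative, not Affilitrak's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ManualOrderPayload:
    """Order-like inputs gathered from the Manual Orders form (illustrative)."""
    amount: float
    tax: float = 0.0
    shipping: float = 0.0
    currency: str = "USD"
    affiliate_id: Optional[str] = None
    discount_code: Optional[str] = None

@dataclass
class AttributionResult:
    """Structured result returned whether or not attribution succeeded."""
    attributed: bool
    affiliate_id: Optional[str] = None
    commission: float = 0.0
    reason: str = ""

def process_manual_order(payload: ManualOrderPayload) -> AttributionResult:
    # Same conceptual pipeline as live orders: attribute first, then calculate.
    if payload.affiliate_id is None and payload.discount_code is None:
        return AttributionResult(False, reason="no attribution candidates in payload")
    affiliate = payload.affiliate_id or f"affiliate-for:{payload.discount_code}"
    commission = round(payload.amount * 0.10, 2)  # placeholder 10% rule
    return AttributionResult(True, affiliate, commission, "attributed")
```

Note that the failure path still returns a structured result with a reason, mirroring the diagnostic behavior described above.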

Typical use cases

Manual Orders is commonly used when testing new affiliate setups, validating coupon-based attribution, checking referral-link behavior under different conditions, verifying commission math for tax and shipping inclusion/exclusion, and investigating edge cases where historical orders did not credit as expected.

It is also useful for controlled QA before changing program settings, because you can compare outcomes before and after configuration changes with known test inputs.

Inputs you usually control

The page lets you control core order context: the target affiliate, product selection or a manually entered amount, a discount code, tax and shipping values, currency, and whether amounts are treated as tax-included or tax-excluded. In replay-oriented flows, it can also process real order references when available, reproducing attribution outcomes with production-like data.

Because these inputs are explicit, you can isolate a single variable at a time and quickly identify what changed the result.
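For example, isolating the tax toggle while holding everything else constant might look like this (hypothetical helper, assumed field names):

```python
def commission_base(order: dict, include_tax: bool, include_shipping: bool) -> float:
    """Compute the amount commissions are calculated on (illustrative)."""
    total = order["amount"]
    if include_tax:
        total += order["tax"]
    if include_shipping:
        total += order["shipping"]
    return total

# Two runs that differ only in whether tax is included in the commission base.
order = {"amount": 100.0, "tax": 8.0, "shipping": 5.0, "currency": "USD"}
run_a = commission_base(order, include_tax=False, include_shipping=False)  # 100.0
run_b = commission_base(order, include_tax=True, include_shipping=False)   # 108.0
# The 8.0 difference is attributable solely to the tax toggle.
```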

How attribution is evaluated during manual processing

Manual order processing follows the same attribution strategy stack used by the app’s commission engine. Depending on available data, attribution can be driven by coupon linkage, direct affiliate candidate fields, checkout token relationships, and fallback matching logic where applicable. This is important because it means manual outcomes are representative of the real system, not a simplified shortcut.

When no affiliate is matched, the output still provides a valid diagnostic signal: the commission pipeline executed, but attribution conditions were not satisfied for that input set.
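A strategy stack like the one described can be pictured as an ordered list of matchers, where the first match wins and exhausting the list is itself a meaningful diagnostic. The strategy order and lookup tables below are illustrative assumptions, not Affilitrak internals:

```python
from typing import Callable, Optional

Order = dict

def by_coupon(order: Order) -> Optional[str]:
    # Coupon linkage: a discount code registered to an affiliate (illustrative lookup).
    coupons = {"SAVE10": "aff_1"}
    return coupons.get(order.get("discount_code"))

def by_direct_field(order: Order) -> Optional[str]:
    # Direct affiliate candidate field on the payload itself.
    return order.get("affiliate_id")

def by_checkout_token(order: Order) -> Optional[str]:
    # Checkout token relationship recorded earlier in the session (illustrative).
    tokens = {"tok_abc": "aff_2"}
    return tokens.get(order.get("checkout_token"))

STRATEGIES: list[Callable[[Order], Optional[str]]] = [
    by_coupon, by_direct_field, by_checkout_token,
]

def attribute(order: Order) -> Optional[str]:
    """Try each strategy in priority order; first match wins."""
    for strategy in STRATEGIES:
        affiliate = strategy(order)
        if affiliate is not None:
            return affiliate
    return None  # pipeline ran, but attribution conditions were not met
```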

Commission calculation behavior in manual orders

Once attribution is resolved, commission is calculated using your active commission settings and program logic. This includes base amount handling, product/collection rule behavior, tier or affiliate-level defaults, fixed and percentage combinations where configured, and tax/shipping inclusion toggles from settings. If royalty or special logic applies in your configuration, that behavior is processed through the same orchestration path.

In practical terms, Manual Orders is not a separate commission engine. It is a controlled entry point into the same engine you rely on in production.
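The commission math described above — a base amount adjusted by tax/shipping toggles, then a percentage plus an optional fixed amount — can be sketched like this. Settings keys and field names are assumptions for illustration:

```python
def calculate_commission(order: dict, settings: dict) -> float:
    """Sketch of commission math under common settings (hypothetical field names)."""
    base = order["subtotal"]
    if settings.get("include_tax"):
        base += order.get("tax", 0.0)
    if settings.get("include_shipping"):
        base += order.get("shipping", 0.0)
    percent_part = settings.get("percent_rate", 0.0) * base
    fixed_part = settings.get("fixed_amount", 0.0)
    return round(percent_part + fixed_part, 2)

# 5% of a tax-inclusive base (200 + 16 = 216) plus a 2.00 fixed amount.
commission = calculate_commission(
    {"subtotal": 200.0, "tax": 16.0, "shipping": 10.0},
    {"include_tax": True, "percent_rate": 0.05, "fixed_amount": 2.0},
)
```

Flipping a single settings key (for example `include_tax`) and rerunning is exactly the kind of one-variable comparison the page is designed for.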

Currency and financial consistency

Manual order processing respects store currency context and commission processing conventions used elsewhere in the app. This allows realistic payout-impact verification because commissions created from manual runs are represented in the same economic model as live-attributed commissions.

When testing cross-currency assumptions or conversion-sensitive scenarios, manual processing provides a consistent way to validate expected output before making policy decisions.
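One way to picture the consistency requirement: amounts are normalized into the store currency before commission math, so manual and live commissions share one economic model. The helper and rate table below are illustrative assumptions:

```python
def to_store_currency(amount: float, order_currency: str,
                      store_currency: str, rates: dict) -> float:
    """Convert an order amount into store currency before commission math (illustrative)."""
    if order_currency == store_currency:
        return amount
    # rates maps (from_currency, to_currency) pairs to a conversion factor
    return round(amount * rates[(order_currency, store_currency)], 2)

rates = {("EUR", "USD"): 1.10}
normalized = to_store_currency(50.0, "EUR", "USD", rates)  # 55.0 in store currency
```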

Result interpretation and debugging workflow

A strong debugging workflow on this page starts with a minimal payload, confirms baseline attribution and commission behavior, and then adds complexity one variable at a time. If attribution fails, check referral/coupon inputs first. If attribution succeeds but commission is unexpected, check commission settings, tax/shipping toggles, and program-specific rules next.

This sequence avoids over-debugging and helps you quickly identify whether the issue is candidate matching, rule configuration, or input quality.
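That triage order can be captured as a small helper that points you at the right layer based on a manual-run result (the result shape here is hypothetical):

```python
def triage(result: dict) -> str:
    """Point debugging at the right layer based on a manual-run result (illustrative)."""
    if not result.get("attributed"):
        # Attribution failed: the problem is upstream of commission math.
        return "check referral/coupon inputs (attribution failed)"
    if result.get("commission_expected") != result.get("commission_actual"):
        # Attribution succeeded but the number is wrong: look at configuration.
        return "check commission settings, tax/shipping toggles, and program rules"
    return "baseline confirmed; add one variable and rerun"
```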

Safety and operational discipline

Manual Orders should be treated as an operational testing tool, not a replacement for normal order ingestion. Use clear test identifiers and deliberate scenarios so test records are easy to trace. If your team runs frequent manual tests, document a standard process for naming, expected outcomes, and post-test review to keep reporting and investigation clean.

With this discipline, Manual Orders becomes a high-value reliability tool rather than a source of ambiguity.

Best way to use Manual Orders during rollout changes

When launching new programs, commission rules, or referral policies, run a small suite of manual order scenarios first. Validate the most common path, the highest-risk edge case, and one failure case where attribution should not happen. If all three behave as expected, confidence in production behavior increases significantly.

This approach reduces surprises after launch and gives support teams a clear reference for expected outcomes when merchant or affiliate questions arise.
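A minimal rollout suite like the one above — common path, risky edge case, deliberate miss — can be expressed as a small scripted check. The `process` stub stands in for the manual-order pipeline and is purely illustrative:

```python
def process(payload: dict) -> dict:
    """Stub standing in for the manual-order pipeline (illustrative)."""
    attributed = bool(payload.get("discount_code") or payload.get("affiliate_id"))
    return {"attributed": attributed}

# (scenario name, payload, expected attribution outcome)
scenarios = [
    ("common path", {"discount_code": "LAUNCH10", "amount": 50.0}, True),
    ("edge case: direct affiliate, no coupon", {"affiliate_id": "aff_9", "amount": 50.0}, True),
    ("expected miss: no attribution candidates", {"amount": 50.0}, False),
]

for name, payload, expected in scenarios:
    result = process(payload)
    assert result["attributed"] == expected, f"{name}: unexpected outcome"
```

If the deliberate-miss scenario unexpectedly attributes, that is a signal to review program configuration before launch, not after.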

Common mistakes to avoid

Most mistakes come from changing too many inputs at once, testing with unclear affiliate context, or assuming a commission issue when attribution actually failed earlier in the pipeline. Another common issue is interpreting one-off test results without checking whether tax/shipping toggles or program assignment changed recently.

Avoid these issues by keeping scenarios controlled, documenting test assumptions, and verifying settings state before and after each run.
