
QoS Over Satellite Links: Traffic Shaping, Latency, and Application Performance
Practical guide to QoS over satellite — traffic shaping, queuing, TCP acceleration, and SD-WAN integration for enterprise network engineers managing VSAT and LEO links.
Introduction
Every network benefits from Quality of Service policies. But on satellite links, QoS is not a best practice — it is a survival requirement. The combination of limited bandwidth, high latency, variable throughput, and shared capacity means that a satellite link without QoS will fail its users at the worst possible moment: during a VoIP call with a customer, a video conference with headquarters, a VPN session to the ERP system, or a SaaS application transaction that times out and forces a retry that compounds the congestion.
Terrestrial networks can often mask the absence of QoS through sheer bandwidth abundance. A 1 Gbps fiber connection rarely forces IT teams to choose which application gets priority. A 10 Mbps VSAT link serving 40 users on an offshore platform does. A 50 Mbps LEO terminal shared between crew welfare and operational systems on a vessel does. A 25 Mbps satellite backup link that just became the primary WAN after a fiber cut does.
This guide provides enterprise network engineers, SD-WAN architects, and IT managers with a practical framework for implementing QoS over satellite links. It covers the satellite-specific impairments that make QoS essential, the building blocks of a satellite QoS policy, traffic shaping patterns for common deployment scenarios, TCP acceleration, SD-WAN integration, and a configuration checklist for production deployments. For a broader overview of enterprise satellite connectivity, see Enterprise Satellite Internet Guide.
Satellite Impairments That Make QoS Essential
Before designing a QoS policy, engineers must understand the specific impairments that satellite links impose on application traffic. These impairments differ fundamentally from terrestrial network challenges and directly inform how QoS policies should be structured.
Latency and Bandwidth-Delay Product
GEO satellite links impose 480–600 ms round-trip latency — a physical consequence of the 35,786 km orbital altitude. LEO constellations reduce this to 20–40 ms, but even LEO latency exceeds what most enterprise applications experience on terrestrial paths. For a detailed comparison across orbital regimes, see Satellite Latency Comparison.
The bandwidth-delay product (BDP) — the product of link capacity and round-trip time — determines how much data can be "in flight" at any moment. A 10 Mbps GEO link with 600 ms RTT has a BDP of 750 KB. TCP's congestion window must reach this value before the link is fully utilized. Standard TCP implementations take many round trips to ramp up to this window size, leaving the link underutilized for the first several seconds of every connection. This is why TCP acceleration (covered below) is a critical companion to QoS on GEO links.
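The BDP arithmetic and the slow-start ramp can be sketched in a few lines. This is a simplified model — it assumes the window doubles cleanly every RTT (no loss, no delayed ACKs) and an initial window of 10 segments:

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bits that fit in flight, divided by 8."""
    return link_bps * rtt_s / 8

def slow_start_rtts(link_bps: float, rtt_s: float,
                    mss: int = 1460, initial_window: int = 10) -> int:
    """Round trips for TCP slow start to grow the congestion window to
    the BDP, assuming the window doubles each RTT (no loss)."""
    target = bdp_bytes(link_bps, rtt_s)
    cwnd = initial_window * mss
    rtts = 0
    while cwnd < target:
        cwnd *= 2
        rtts += 1
    return rtts

geo_bdp = bdp_bytes(10e6, 0.600)      # 750,000 bytes = 750 KB
rtts = slow_start_rtts(10e6, 0.600)   # ~6 round trips
print(f"GEO BDP: {geo_bdp/1000:.0f} KB, "
      f"slow start needs ~{rtts} RTTs (~{rtts*0.600:.1f} s)")
```

Six round trips at 600 ms each is roughly 3.6 seconds before a single flow can fill the link — longer than many HTTP transactions live.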
Jitter and Packet Loss
Satellite links exhibit jitter patterns that differ from terrestrial networks. GEO links have relatively stable jitter (typically 5–15 ms) but with a high baseline latency. LEO links can experience jitter spikes during satellite handovers — brief periods where latency fluctuates as the terminal transitions from one satellite to another. These handover-induced jitter events are typically under 100 ms but can impact real-time applications if jitter buffers are not sized appropriately.
Packet loss on satellite links comes from two sources: congestion (too many users competing for shared capacity) and physical-layer degradation. Rain fade is the primary physical-layer impairment for Ka-band services, reducing signal-to-noise ratio and forcing the modem to shift to more robust but lower-throughput modulation schemes — see Rain Fade in Satellite Communications. Adaptive Coding and Modulation (ACM) handles physical-layer degradation automatically, but the throughput reduction it causes can trigger congestion if QoS policies do not account for the reduced available capacity. For more on how ACM adjusts throughput dynamically, see Satellite Modulation and Coding Guide.
Congestion vs Physical-Layer Degradation
A critical distinction for QoS design: congestion and physical-layer degradation look similar to upper-layer protocols (both cause packet loss and increased latency) but require different responses. Congestion calls for traffic shaping — reducing the offered load to match available capacity. Physical-layer degradation calls for waiting for conditions to improve while protecting the highest-priority traffic classes. A well-designed QoS policy handles both scenarios, and a well-instrumented satellite modem provides the telemetry to distinguish between them.
Understanding the basic signal chain helps clarify where these impairments occur — see How Satellite Internet Works for the foundational concepts.
QoS Building Blocks
The core QoS mechanisms used on satellite links are the same as terrestrial networks, but their configuration requires satellite-specific tuning. The following building blocks form the foundation of any satellite QoS policy.
Classification and Marking
Traffic classification identifies which application or traffic type each packet belongs to. Marking stamps that classification onto the packet header — typically using DSCP (Differentiated Services Code Point) values in the IP header's DS field, the redefined ToS byte — so that every device in the forwarding path can apply the correct treatment without re-inspecting the packet payload.
On satellite networks, classification typically happens at the customer edge router or SD-WAN appliance before traffic enters the satellite modem. Common classification methods include:
- DSCP marking: The industry standard. Enterprise applications and voice gateways often mark their own traffic. The edge router trusts or re-marks as needed.
- Application-aware DPI: SD-WAN platforms and WAN optimization appliances can identify applications by deep packet inspection, even when traffic is encrypted (using SNI, certificate metadata, or behavioral fingerprinting).
- Source/destination policies: Static rules based on IP address, subnet, VLAN, or port — simple but effective for well-structured networks.
A typical satellite QoS deployment uses 4–6 traffic classes. More than 8 classes rarely improves performance and increases operational complexity. Shared satellite bandwidth in VSAT Network Architecture makes proper classification especially important, since multiple sites may share the same transponder capacity.
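A class plan of this shape can be captured as a simple table. The class names, DSCP values, bandwidth shares, and application-to-class mappings below are illustrative choices for a five-class satellite policy, not a standard:

```python
# Illustrative five-class plan for a satellite link. All values here are
# example choices; real deployments tune shares to the site's traffic mix.
TRAFFIC_CLASSES = {
    "voice":       {"dscp": "EF",   "queue": "priority", "cap_pct": 15},
    "video":       {"dscp": "AF41", "queue": "priority", "cap_pct": 5},
    "business":    {"dscp": "AF31", "queue": "cbwfq",    "min_pct": 40},
    "best_effort": {"dscp": "DF",   "queue": "cbwfq",    "min_pct": 25},
    "bulk":        {"dscp": "CS1",  "queue": "cbwfq",    "min_pct": 15},
}

def dscp_for(app: str) -> str:
    """Map an application label to its class DSCP (default: best effort)."""
    app_map = {"sip": "voice", "rtp": "voice", "webex": "video",
               "sap": "business", "backup": "bulk"}
    cls = app_map.get(app, "best_effort")
    return TRAFFIC_CLASSES[cls]["dscp"]

print(dscp_for("sap"))   # AF31
print(dscp_for("rtp"))   # EF
```

Keeping the plan in one table like this makes it easy to verify that classes stay within the 4–6 range and that every application maps somewhere.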
Queuing Disciplines
Once traffic is classified, queuing determines the order in which packets are transmitted. The queuing discipline directly controls which applications experience delay and which receive priority access to the link.
- Strict Priority Queue (PQ): Voice and real-time video packets are dequeued first, always. This guarantees minimum latency for these traffic classes but can starve other queues if priority traffic exceeds its allocation. A rate limiter (policer) on the priority queue prevents starvation — typically capping voice at 10–15% of link capacity.
- Weighted Fair Queuing (WFQ): Remaining bandwidth is distributed among non-priority traffic classes according to configurable weights. Business-critical applications (ERP, CRM, database replication) receive a higher weight than best-effort traffic (web browsing, social media, software updates).
- Class-Based WFQ (CBWFQ): Combines strict priority for real-time traffic with weighted fair queuing for everything else. This is the most common queuing model for enterprise satellite deployments.
Shaping vs Policing
Shaping and policing both limit traffic rates, but they behave differently and have different implications for satellite links.
- Shaping buffers excess traffic and releases it at a controlled rate. This smooths burst patterns and prevents the satellite modem's buffers from overflowing. Shaping is preferred on the outbound (transmit) side of a satellite link because it prevents congestion at the satellite modem.
- Policing drops or re-marks excess traffic immediately without buffering. Policing is used at trust boundaries (where untrusted traffic enters the network) and to enforce rate limits on specific traffic classes.
On satellite links, shaping is almost always preferred over policing for aggregate traffic management. The high latency of satellite links means that dropped packets take a long time to recover (TCP retransmission requires a full RTT), so buffering excess traffic briefly is less costly than dropping it. However, shaping introduces additional latency (the buffering delay), so the shaping buffer must be sized to balance throughput and delay — a deep buffer maximizes throughput but increases latency for all queued traffic.
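The buffer-versus-delay trade-off is simple arithmetic: worst-case queuing delay is the full buffer drained at the shaping rate. A minimal sizing sketch:

```python
def max_queue_delay_ms(buffer_bytes: int, shape_rate_bps: float) -> float:
    """Worst-case delay a packet sees in a full shaping buffer:
    queued bytes divided by the drain (shaping) rate."""
    return buffer_bytes * 8 / shape_rate_bps * 1000

def buffer_for_delay(target_ms: float, shape_rate_bps: float) -> int:
    """Buffer size (bytes) that caps queuing delay at target_ms."""
    return int(target_ms / 1000 * shape_rate_bps / 8)

# On a 10 Mbps shaper, a 256 KB buffer can add ~205 ms of delay;
# capping queuing delay at 50 ms means a buffer of ~62 KB.
print(max_queue_delay_ms(256_000, 10e6))   # 204.8
print(buffer_for_delay(50, 10e6))          # 62500
```

On a GEO link already carrying 600 ms of propagation delay, an oversized shaping buffer can quietly double the effective latency for every queued class.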
Active Queue Management
Active Queue Management (AQM) algorithms prevent queue buildup from reaching the point where tail-drop causes synchronized TCP retransmissions — a phenomenon called "TCP global synchronization" that causes throughput oscillations.
- RED (Random Early Detection): Randomly drops packets before the queue is full, signaling TCP senders to reduce their rates gradually rather than all at once.
- CoDel (Controlled Delay): Monitors how long packets spend in the queue and drops packets when sojourn time exceeds a threshold. CoDel is particularly effective on satellite links because it targets delay rather than queue depth, making it adaptive to the variable throughput that ACM causes.
AQM is most effective on the shaping queue — the point where all classified, prioritized traffic converges before entering the satellite modem.
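The sojourn-time idea behind CoDel can be illustrated with a toy queue. This sketch is deliberately simplified — real CoDel adds a square-root-spaced drop schedule and careful state handling (RFC 8289); only the "drop when delay stays above target" behavior is shown:

```python
import time
from collections import deque

class DelayAQM:
    """Toy delay-based AQM in the spirit of CoDel: once packets have
    spent longer than `target_ms` in the queue for at least `interval_ms`,
    drop from the head. Not real CoDel — an illustration of the
    sojourn-time principle only."""
    def __init__(self, target_ms: float = 5.0, interval_ms: float = 100.0):
        self.q = deque()                   # (enqueue_time, packet)
        self.target = target_ms / 1000
        self.interval = interval_ms / 1000
        self.above_since = None            # when sojourn first exceeded target

    def enqueue(self, pkt):
        self.q.append((time.monotonic(), pkt))

    def dequeue(self):
        while self.q:
            t_in, pkt = self.q.popleft()
            now = time.monotonic()
            if now - t_in <= self.target:
                self.above_since = None    # sojourn back under target
                return pkt
            if self.above_since is None:
                self.above_since = now     # start the grace period
                return pkt
            if now - self.above_since < self.interval:
                return pkt                 # above target, not yet persistent
            self.above_since = now         # persistent: drop, rearm, try next
        return None
```

Because the trigger is time-in-queue rather than queue depth, the same policy keeps working when ACM cuts the drain rate in half — which is exactly why delay-based AQM suits satellite links.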
Traffic Shaping Patterns for SATCOM
The following patterns represent proven QoS configurations for common satellite deployment scenarios. Each pattern can be implemented on enterprise routers, SD-WAN appliances, or dedicated WAN optimization devices.
Voice and Video Priority
Real-time communications — VoIP, video conferencing, unified communications — are the most latency-sensitive applications on any network and the first to fail on an unmanaged satellite link.
Configuration pattern:
- Classify voice (RTP/SIP, DSCP EF) and interactive video (DSCP AF41) into a strict priority queue.
- Cap the priority queue at 15–20% of committed information rate (CIR) to prevent starvation of other traffic classes.
- Apply a jitter buffer at the voice gateway: 60–80 ms for LEO links, 150–200 ms for GEO links.
- Enable voice activity detection (VAD) to reduce bandwidth consumption during silence periods.
- Use G.729 or Opus codecs rather than G.711 — G.729 consumes 8 kbps per call versus 64 kbps for G.711, a critical difference on bandwidth-constrained satellite links.
At the raw codec rate, a 10 Mbps satellite link with a 15% priority queue allocation (1.5 Mbps) would carry roughly 180 G.729 streams — but RTP/UDP/IP headers add about 16 kbps per call (40-byte headers at 50 packets per second), so realistic capacity is closer to 60 concurrent G.729 calls. That is still more than sufficient for most enterprise sites. The same allocation supports only about 18 G.711 calls at roughly 80 kbps each.
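Per-call bandwidth hinges on whether header overhead is counted. A minimal capacity sketch, assuming 20 ms packetization (50 packets per second) and uncompressed 40-byte RTP/UDP/IP headers, with no layer-2 overhead or cRTP:

```python
def call_capacity(link_bps: float, priority_pct: float,
                  codec_kbps: float, pkt_per_s: int = 50) -> int:
    """Concurrent calls that fit in the priority allocation, counting
    40 bytes of RTP/UDP/IP headers on every packet (no cRTP, no L2)."""
    header_bps = 40 * 8 * pkt_per_s            # 16 kbps of headers per call
    per_call_bps = codec_kbps * 1000 + header_bps
    return int(link_bps * priority_pct / 100 // per_call_bps)

# 10 Mbps link, 15% priority queue:
print(call_capacity(10e6, 15, 8))    # G.729: 62 calls (~24 kbps each)
print(call_capacity(10e6, 15, 64))   # G.711: 18 calls (~80 kbps each)
```

RTP header compression (cRTP), where supported end to end, shrinks the 40-byte header to a few bytes and pushes capacity back toward the raw codec figures.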
Business-Critical Application Guarantees
Enterprise applications — ERP (SAP, Oracle), CRM (Salesforce), database replication, cloud SaaS — require consistent throughput and bounded latency but do not need the strict priority treatment that voice demands.
Configuration pattern:
- Classify business-critical traffic into a CBWFQ class with a guaranteed minimum bandwidth (e.g., 40% of CIR).
- Apply DSCP AF31 marking for business-critical applications.
- Enable TCP acceleration for these flows to overcome the BDP challenge on GEO links.
- Monitor application response times — if SaaS applications exceed acceptable thresholds, increase the guaranteed bandwidth allocation.
Bulk Transfer Scheduling
Large data transfers — backup jobs, software distribution, video file uploads, database synchronization — can consume all available satellite bandwidth if left unmanaged. These transfers are typically delay-tolerant, making them ideal candidates for scheduled shaping.
Configuration pattern:
- Classify bulk traffic (identified by application, port, or DSCP CS1/AF11) into a scavenger or low-priority queue.
- Apply a maximum rate limit (e.g., 20% of CIR during business hours, 80% during off-peak).
- Schedule bandwidth-intensive operations (backups, updates) for off-peak windows — nighttime for fixed sites, defined maintenance windows for maritime and remote operations.
- Use WAN optimization (deduplication, compression) to reduce the volume of bulk transfers.
Maritime: Crew vs Operations Fairness
Maritime satellite deployments — see Maritime Satellite Internet — present a unique QoS challenge: operational traffic (navigation, weather, fleet management, regulatory reporting) and crew welfare traffic (web browsing, video streaming, social media, messaging) share the same satellite link.
Configuration pattern:
- Separate operational and crew traffic at the network level (VLAN segmentation or separate SSIDs).
- Guarantee a minimum bandwidth allocation for operational traffic (e.g., 60% of CIR) regardless of crew usage.
- Apply per-user rate limiting within the crew VLAN to prevent a single user from monopolizing crew bandwidth.
- Implement application-level restrictions on bandwidth-intensive crew applications (video streaming capped at 480p, peer-to-peer blocked entirely).
- During emergency or heavy-weather conditions, enable a policy override so operational traffic can claim 100% of available capacity.
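The per-user limit inside the crew VLAN follows from the operational guarantee. A sketch of one reasonable policy — split the remaining pool evenly, but never below a usability floor (the 256 kbps floor is an example choice, not a standard):

```python
def crew_user_limit_bps(cir_bps: float, ops_min_pct: float,
                        active_crew_users: int,
                        floor_bps: float = 256_000) -> float:
    """Per-user ceiling in the crew VLAN: the pool left after the
    operational guarantee, split evenly, with a usability floor."""
    crew_pool = cir_bps * (100 - ops_min_pct) / 100
    return max(crew_pool / max(active_crew_users, 1), floor_bps)

# 20 Mbps link, 60% reserved for operations, 25 crew members online:
print(crew_user_limit_bps(20e6, 60, 25))   # 320000.0 bps per user
```

Note that once the floor kicks in (many active users), the aggregate crew ceiling exceeds the pool — the WFQ weights between the crew and operations classes are what actually enforce the split under load.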
For satellite backhaul scenarios serving cellular towers or edge networks, similar fairness principles apply — see Satellite Backhaul Explained for backhaul-specific architecture.
TCP Acceleration and WAN Optimization
Why TCP Suffers on Satellite Links
TCP's congestion control algorithm was designed for terrestrial networks where round-trip times are measured in milliseconds. On a GEO satellite link with 600 ms RTT, TCP's behavior creates two significant problems:
- Slow start penalty: TCP begins each connection with a small congestion window (typically 10 segments) and doubles it each RTT. On a GEO link, reaching full link utilization takes many seconds — unacceptable for short-lived HTTP transactions that may complete before the window opens fully.
- Loss recovery delay: When a packet is lost, TCP waits for a full RTT before detecting the loss (via duplicate ACKs or retransmission timeout). On a GEO link, this means 600 ms of wasted capacity per loss event. On a congested link with 1% packet loss, throughput collapses.
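The interaction of RTT and loss can be quantified with the well-known Mathis approximation for steady-state TCP Reno throughput. It is a rough model, but it makes the collapse concrete:

```python
import math

def mathis_throughput_bps(mss_bytes: int = 1460, rtt_s: float = 0.600,
                          loss: float = 0.01) -> float:
    """Approximate steady-state TCP Reno throughput (Mathis model):
    rate ≈ (MSS / RTT) * (1.22 / sqrt(p)). A rough upper bound for a
    single un-accelerated flow."""
    return mss_bytes * 8 / rtt_s * 1.22 / math.sqrt(loss)

# 1% loss caps a single flow at ~237 kbps on a 600 ms GEO path,
# versus ~7.1 Mbps on a 20 ms LEO/terrestrial path.
print(mathis_throughput_bps(rtt_s=0.600))
print(mathis_throughput_bps(rtt_s=0.020))
```

Same loss rate, thirty times the RTT, one-thirtieth the throughput: this is the gap that TCP acceleration exists to close.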
What TCP Acceleration Does
TCP acceleration — also called WAN optimization, TCP spoofing, or Performance Enhancing Proxy (PEP) — places proxy devices on both sides of the satellite link. The proxies terminate TCP connections locally and relay data across the satellite link using an optimized protocol that accounts for the high-latency, high-BDP characteristics of the satellite path.
Key techniques include:
- Window scaling: The proxy uses a pre-opened TCP window sized to the satellite link's BDP, eliminating slow start.
- Selective acknowledgment: SACK-based recovery retransmits only lost segments rather than the entire window.
- Local acknowledgment: The near-side proxy ACKs the sender immediately, preventing the sender's TCP from stalling while data transits the satellite hop.
- Compression and deduplication: Byte-level deduplication and compression reduce the volume of data crossing the satellite link — particularly effective for repetitive enterprise traffic patterns.
Trade-offs
TCP acceleration is not without caveats:
- Encrypted traffic: End-to-end encryption limits what the proxy can do. TLS leaves TCP headers visible but hides the payload, defeating compression and deduplication; IPsec tunnels encrypt the TCP headers themselves, defeating TCP termination entirely. Solutions include split-tunnel VPN configurations, proxy-aware certificate management, or accelerating at the tunnel level rather than the flow level.
- Split-TCP concerns: Some security architectures prohibit split-TCP because it breaks the end-to-end integrity model. In these environments, acceleration may be limited to compression and protocol optimization without TCP termination.
- LEO relevance: On LEO links with 20–40 ms RTT, TCP acceleration provides marginal benefit for most applications. The primary benefit shifts from latency mitigation to compression and deduplication for bandwidth savings.
SD-WAN and Satellite Integration
Modern SD-WAN platforms — Cisco Viptela, Fortinet, VMware VeloCloud, Cradlepoint, Peplink — provide native integration with satellite transports. SD-WAN adds intelligent path steering to the QoS toolkit, enabling real-time traffic decisions based on measured link quality.
Path Steering
SD-WAN continuously probes each available WAN path (fiber, LTE, satellite) for latency, jitter, packet loss, and available bandwidth. Application-aware policies steer traffic to the optimal path:
- VoIP → LEO satellite or LTE (lowest latency path)
- ERP/SaaS → fiber primary, LEO secondary
- Bulk transfer → GEO satellite (highest capacity, latency-tolerant)
- Crew welfare → satellite with rate limiting
For multi-orbit deployments combining LEO and GEO, SD-WAN path steering enables the architecture patterns described in Hybrid Satellite Network & Multi-Orbit.
Active/Active Satellite and Terrestrial
When both satellite and terrestrial links are available, SD-WAN can operate in active/active mode — distributing traffic across both paths simultaneously. This configuration requires careful QoS coordination to ensure that traffic classified for low-latency treatment does not get steered to a high-latency GEO satellite path.
Best practice: Define application SLA policies that include maximum acceptable latency. The SD-WAN controller will automatically exclude paths that exceed the SLA threshold for each application class, preventing misrouting even in active/active configurations.
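The SLA-exclusion step can be sketched as a filter over measured paths. The SLA thresholds and path measurements below are illustrative values, not vendor defaults:

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    latency_ms: float
    loss_pct: float

# Hypothetical per-class SLA table (example thresholds).
SLA = {
    "voice":    {"max_latency_ms": 150,  "max_loss_pct": 1.0},
    "business": {"max_latency_ms": 400,  "max_loss_pct": 2.0},
    "bulk":     {"max_latency_ms": 2000, "max_loss_pct": 5.0},
}

def eligible_paths(app_class: str, paths: list) -> list:
    """Paths meeting the class SLA, lowest latency first — the exclusion
    step an SD-WAN controller applies before steering traffic."""
    sla = SLA[app_class]
    ok = [p for p in paths
          if p.latency_ms <= sla["max_latency_ms"]
          and p.loss_pct <= sla["max_loss_pct"]]
    return sorted(ok, key=lambda p: p.latency_ms)

paths = [Path("fiber", 12, 0.1), Path("leo", 35, 0.3), Path("geo", 590, 0.5)]
print([p.name for p in eligible_paths("voice", paths)])   # ['fiber', 'leo']
print([p.name for p in eligible_paths("bulk", paths)])    # all three
```

With a 150 ms voice SLA, the 590 ms GEO path is never a candidate for voice even in active/active mode, while bulk traffic remains free to use it.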
Failover and Brownout Handling
Satellite links are most valuable during terrestrial outages — the moments when QoS matters most. SD-WAN must handle two scenarios:
- Hard failover: Terrestrial link goes down completely. All traffic moves to satellite. QoS policies must immediately constrain aggregate traffic to the satellite link's CIR to prevent overwhelming the satellite modem.
- Brownout: Terrestrial link degrades (increased packet loss, latency spikes) without failing completely. SD-WAN should selectively move latency-sensitive applications to satellite while keeping bulk traffic on the degraded terrestrial link, reducing load on the satellite.
Practical Configuration Checklist
The following checklist provides a step-by-step framework for implementing QoS on a satellite link. It applies to both new deployments and existing links that need QoS improvement.
Step 1: Baseline the Link
- Measure committed information rate (CIR) and peak information rate (PIR) from the satellite service provider's contract.
- Conduct throughput tests during peak and off-peak hours to establish real-world capacity.
- Measure baseline latency, jitter, and packet loss.
- Identify ACM-related throughput variations by monitoring modem SNR and modcod changes.
Step 2: Inventory Applications
- Catalog all applications using the satellite link, their bandwidth requirements, latency sensitivity, and business criticality.
- Assign each application to a traffic class (typically 4–6 classes).
- Define DSCP markings for each class.
Step 3: Design the QoS Policy
- Allocate bandwidth to each traffic class as a percentage of CIR (not PIR — PIR is not guaranteed).
- Configure strict priority queue for real-time traffic with a rate cap.
- Configure CBWFQ for remaining traffic classes.
- Set shaping rate to CIR on the customer edge device.
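The Step 3 design rules can be encoded as a sanity check run before deployment. A sketch, assuming a simple class table and the 15–20% priority cap discussed earlier (the 20% limit here is an example threshold):

```python
def validate_policy(cir_bps: float, classes: dict) -> dict:
    """Sanity-check a satellite QoS design: guaranteed shares must fit
    within CIR (never PIR), and any strict-priority class needs a cap.
    `classes` maps name -> {"pct": guaranteed %, "priority": bool}."""
    total = sum(c["pct"] for c in classes.values())
    if total > 100:
        raise ValueError(f"allocations total {total}% of CIR (> 100%)")
    for name, c in classes.items():
        if c.get("priority") and c["pct"] > 20:
            raise ValueError(f"priority class {name!r} exceeds 20% cap")
    # Return per-class guaranteed rates in bps, derived from CIR.
    return {name: cir_bps * c["pct"] / 100 for name, c in classes.items()}

policy = validate_policy(10e6, {
    "voice":       {"pct": 15, "priority": True},
    "business":    {"pct": 40},
    "best_effort": {"pct": 30},
    "bulk":        {"pct": 15},
})
print(policy["business"])   # 4000000.0 bps guaranteed
```

Deriving every rate from CIR in one place makes the "shape to CIR, not PIR" rule structurally hard to violate.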
Step 4: Implement and Test
- Apply the QoS policy to the satellite-facing interface.
- Generate synthetic traffic for each class and verify that prioritization, bandwidth guarantees, and rate limits work as expected.
- Test under congestion conditions — saturate the link and verify that high-priority traffic is protected.
Step 5: Monitor and Tune
- Track per-class bandwidth utilization, queue drops, and latency continuously. For monitoring best practices and tool recommendations, see Network Management.
- Set alerting thresholds for queue depth, drop rate, and application response time.
- Review and adjust allocations quarterly or when application mix changes.
Common pitfall: Setting the shaping rate to PIR instead of CIR. PIR is burst capacity that is not guaranteed and may be revoked under congestion. If you shape to PIR, your QoS policy will collapse exactly when the network is most congested — the worst possible time. Always shape to CIR and treat PIR as bonus capacity for the scavenger class.
Monitoring success criteria: A well-tuned satellite QoS policy should achieve: voice MOS score above 3.5, video conferencing with under 2% packet loss, business-critical application response times within 20% of terrestrial baseline, and zero priority queue starvation events per month.
Frequently Asked Questions
What is QoS over satellite and why is it different from terrestrial QoS?
QoS over satellite applies the same classification, queuing, and shaping mechanisms used on terrestrial networks, but with satellite-specific tuning for high latency, limited bandwidth, shared capacity, and variable throughput caused by weather and ACM. The fundamental difference is that terrestrial networks can often avoid QoS through bandwidth overprovisioning — satellite links cannot. Every megabit on a satellite link costs significantly more than terrestrial bandwidth, and the link's capacity can change dynamically due to rain fade or congestion, making QoS policies essential rather than optional.
How does high latency affect VoIP quality on satellite?
GEO satellite latency (480–600 ms RTT) is perceptible in voice conversations and exceeds the ITU-T G.114 recommendation of 150 ms one-way delay. Users experience talk-over and unnatural conversation rhythm. LEO latency (20–40 ms) is within acceptable bounds for most voice applications. On GEO links, QoS must guarantee strict priority for voice packets, and jitter buffers must be configured to 150–200 ms. Using low-bitrate codecs (G.729, Opus) is essential to minimize the bandwidth consumed by voice traffic on the constrained satellite link.
Which queuing method works best for satellite — priority queuing, WFQ, or CBWFQ?
CBWFQ with a strict priority queue for real-time traffic is the industry standard for satellite links. Pure priority queuing risks starving non-real-time traffic during peak periods. Pure WFQ does not provide the strict latency guarantees that voice and video require. CBWFQ combines both: strict priority for real-time traffic (with a rate cap to prevent starvation) and weighted fair queuing for all other traffic classes, providing a balanced approach that protects both latency-sensitive and throughput-sensitive applications.
Do I need TCP acceleration on LEO satellite links?
On LEO links with 20–40 ms RTT, TCP acceleration provides limited latency benefit — TCP performs reasonably well at these round-trip times. However, TCP acceleration can still provide value through compression, deduplication, and protocol optimization that reduce the volume of data crossing the satellite link. For GEO links, TCP acceleration is strongly recommended and can improve throughput by 2–10x depending on application type and traffic patterns.
How do I handle QoS during rain fade events?
Rain fade reduces available throughput as the modem shifts to more robust modulation schemes. The QoS policy should be designed to function correctly at the reduced throughput — which it will if shaping is configured to CIR (the minimum guaranteed rate). If CIR is maintained during fade, the QoS policy continues to operate normally. If the fade is severe enough to reduce throughput below CIR, the strict priority queue protects voice and video while lower-priority classes experience increased delay and possible drops. Monitoring modem SNR and modcod status helps operators anticipate and respond to fade events.
Can SD-WAN replace traditional QoS on satellite links?
SD-WAN complements QoS but does not replace it. SD-WAN adds path steering — the ability to route traffic across multiple WAN links based on real-time quality measurements. But once traffic is committed to the satellite path, QoS at the interface level (classification, queuing, shaping) is still necessary to manage how that traffic competes for the satellite link's limited capacity. The best deployments use SD-WAN for inter-path decisions and interface-level QoS for intra-path prioritization.
What bandwidth percentage should I allocate to voice on a satellite link?
A common starting point is 10–15% of CIR for voice traffic. Using the G.729 codec (8 kbps plus roughly 16 kbps of RTP/UDP/IP headers, about 24 kbps per call), a 10 Mbps link with a 15% voice allocation supports approximately 60 concurrent calls. The actual allocation depends on the number of concurrent voice users at the site. Over-allocating voice bandwidth wastes capacity that could serve data applications; under-allocating causes voice quality degradation. Start with 15%, monitor call quality metrics (MOS, jitter, packet loss), and adjust based on observed usage.
How do I implement fair bandwidth sharing between crew and operations on maritime vessels?
Separate crew and operational traffic onto different VLANs or SSIDs. Apply QoS policies that guarantee a minimum bandwidth percentage (typically 50–60%) for operational traffic regardless of crew demand. Within the crew VLAN, implement per-user rate limiting to prevent individual users from monopolizing bandwidth. Restrict bandwidth-heavy applications (HD video streaming, large downloads) on the crew network. During critical operations or heavy weather, enable a policy override that allocates 100% of capacity to operational traffic.
Key Takeaways
- QoS is mandatory on satellite links — bandwidth scarcity, high latency, and shared capacity make unmanaged links unusable for enterprise applications.
- Shape to CIR, not PIR — burst capacity disappears during congestion, which is exactly when QoS matters most.
- Use CBWFQ with strict priority — real-time voice and video get priority treatment with a rate cap; business applications get guaranteed bandwidth; bulk traffic gets the remainder.
- TCP acceleration is essential on GEO — the bandwidth-delay product of high-latency links prevents standard TCP from utilizing available capacity.
- SD-WAN complements but does not replace QoS — use SD-WAN for path steering between links and interface-level QoS for prioritization within the satellite link.
- Design for degraded conditions — rain fade, congestion, and failover scenarios are when QoS policies prove their value.
- Monitor continuously — per-class utilization, queue drops, application response times, and modem SNR provide the telemetry needed to tune and validate QoS policies.
- Maritime and remote sites need explicit fairness rules — without per-user and per-application controls, crew welfare traffic will overwhelm operational systems.
Related Articles
- Enterprise Satellite Internet Guide
- Satellite Latency Comparison
- Rain Fade in Satellite Communications
- Hybrid Satellite Network & Multi-Orbit
- Network Management
- Satellite Backhaul Explained
- VSAT Network Architecture
- Maritime Satellite Internet
- Satellite Modulation and Coding Guide
- How Satellite Internet Works