
interactive nature of SNA 3270 users. Even with 10 or 20 users over the same DLSw+ session, transaction rates
are on the order of transactions per minute, which results in packet rates of the same order. Under normal
circumstances, this traffic will never reach levels that one would consider high bandwidth.
A greater threat to SNA response times has to do with TCP congestion management (slow-start and back-off).
TCP congestion management uses a windowed protocol that automatically adjusts to the bottlenecks along the
session path. In extreme circumstances (for example, sharing a 64-kbps link with 20 or 30 FTPs) TCP windows
can be reduced to sizes that require every TCP packet to be acknowledged even during large file transfers (in other
words, a window size of one). This makes the batch traffic closely resemble interactive traffic, which has the
adverse effect of creating artificial competition with interactive SNA DLSw+ traffic. Under these conditions, WFQ cannot
distinguish between the SNA interactive traffic and the TCP/IP batch traffic. As a result, administrative
intervention is required to give SNA traffic differentiated service.
WFQ needs some method to distinguish SNA interactive traffic (encapsulated in TCP/IP) from other IP traffic
during periods of extreme congestion. One method that can be used is to modify the weight of the DLSw+ traffic
by using the precedence bits in the IP header. When WFQ determines the packet scheduling order, the lower the
weight, the higher the packet priority. The scheduling value computed for each packet is a function of the frame
length (that is, its transmission completion time) and the packet's position in its conversation queue, and the
effective length is reduced according to the precedence bits in the IP ToS field. Thus, the only advantage SNA can hope to have over other traffic lies
in gaining priority (lower weight) from the precedence bits. There are several mechanisms used to set precedence
on SNA traffic.
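As a rough illustration (the weighting constant varies by IOS release, so the figures below are an assumption rather than a value taken from this guide), a higher precedence yields a lower weight, and a packet is scheduled as if it were only a fraction of its actual length:
weight = 4096 / (IP precedence + 1)
precedence 0 (routine):  weight = 4096
precedence 5 (critical): weight = 4096 / 6, or about 683
scheduling sequence number = previous sequence number + (weight x packet length)
With precedence 5, a DLSw+ packet is therefore treated as if it were roughly one-sixth of its actual length, which moves it ahead of routine traffic of similar size.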
Setting precedence on DLSw+ packets is done by using policy-based routing or by relying on DLSw+ itself (see
Map SNA CoS to IP ToS). Using policy-based routing, precedence can be set in the following manner:
! Match DLSw+ traffic (TCP port 2065) in either direction
access-list 101 permit tcp any any eq 2065
access-list 101 permit tcp any eq 2065 any
! Mark matching packets with IP precedence critical (5)
route-map SNADLSW permit 10
match ip address 101
set ip precedence critical
! Apply the route map to locally generated (DLSw+) packets
ip local policy route-map SNADLSW
WFQ is enabled by default on low-speed serial interfaces (2.048 Mbps and below), so no configuration is required.
Note: WFQ must be used in combination with traffic shaping over Frame Relay to be most effective.
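For example, the following minimal sketch (the interface name, map-class name, and rates are illustrative assumptions, not values from this guide) enables Frame Relay traffic shaping on the interface and applies WFQ to the resulting traffic-shaping queue:
interface serial0
encapsulation frame-relay
frame-relay traffic-shaping
frame-relay class SHAPE-WFQ
!
! WFQ runs on the per-VC traffic-shaping queue
map-class frame-relay SHAPE-WFQ
frame-relay cir 64000
frame-relay bc 8000
frame-relay fair-queue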
Weighted Random Early Detection
WRED is one of the newer and more sophisticated queuing methods. It is considered a congestion avoidance
method because it drops traffic based on mean queue depths instead of tail dropping as do the previous queuing
methods. WRED is also considered a class-based queuing method because it deals with traffic based on class
definitions. There are nine WRED classes. There is a class for each precedence level plus one for RSVP traffic.
Currently, WRED works only for IP traffic. Traffic is placed on a single queue for transmission. Packets are
selected for discard based on probabilities (thus the term random). The probability computations used for packet
selection are a function of the packet precedence and mean queue depth on the output interface. The probability
of a packet discard increases as the precedence decreases and as the mean queue depth increases.
By randomly selecting packets for discard based on probabilities, instead of tail-dropping when queues overflow,
WRED avoids a phenomenon called global synchronization and discards more fairly among the sessions using the
link. Global synchronization can occur when tail-dropping many packets across many sessions at once causes
their TCP back-off algorithms to kick in at the same time. TCP slow-start then ramps up only to repeat the same
pattern. This phenomenon has been observed on the Internet. Most enterprise networks today are not subject to
this phenomenon, but it is possible in theory.
WRED works in conjunction with the precedence bits to differentiate service the same way WFQ does. Giving
DLSw+ higher IP precedence reduces the probability that an SNA packet will be discarded, avoiding
retransmissions that negatively impact response times. Note that SNA traffic is not scheduled at a higher priority
using WRED. SNA is simply less likely to be dropped.
WRED is not recommended on slow links when using SNA because the single queue implementation can cause
some queuing delays for SNA traffic and WRED does no traffic prioritization or packet sorting. WRED is more
suitable for broadband trunks because the high speeds provide better algorithmic scaling properties and lessen
the problems associated with long queuing delays.
Use the random-detect command to enable WRED as follows:
interface serial0
random-detect
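If desired, the drop thresholds can also be tuned per precedence so that higher-precedence DLSw+ traffic is discarded later than routine traffic; the threshold and mark-probability values below are illustrative assumptions only:
interface serial0
random-detect
! routine traffic starts dropping at a mean queue depth of 20 packets
random-detect precedence 0 20 40 10
! critical (DLSw+) traffic starts dropping only at a depth of 35 packets
random-detect precedence 5 35 40 10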
Traffic Shaping
Frame Relay networks without traffic shaping cannot effectively prioritize traffic.
For an overview of congestion management in Frame Relay see the Technology Overview chapter. From that
chapter, recall that mismatched access speeds can have a negative effect on SNA response times. Congestion
occurs on the egress port of the Frame Relay switch where little traffic prioritization takes place. The solution to
this problem is to use traffic shaping on the router and move the point of congestion to the router (see Figure
2-2). The concept is simple. Moving congestion from the Frame Relay switch to the router allows all of the above
traffic prioritization methods to take place.
Figure 2-2 Effects of Traffic Shaping
(Figure: a central-site router on a T1 connects through a Frame Relay network to a remote-site router on a 64-kbps link. Without traffic shaping, congestion occurs on the egress port of the Frame Relay switch; with traffic shaping, congestion occurs on each VC on the router.)
An important clarification is that traffic shaping in a Frame Relay environment takes place on each DLCI in the
router, so there is a traffic-shaping output queue for every Frame Relay virtual circuit. When traffic arrives faster
than the defined traffic-shaping rate, the excess effectively creates an output queue for each DLCI; this per-DLCI
output queue is called the traffic-shaping queue. An output queuing method (such as PQ, CQ, WFQ, or the default
FIFO) is applied to the traffic-shaping queue. Therefore, a traffic-shaping queue for each DLCI requires additional
system buffer memory to accommodate the additional queuing. In situations where there are large numbers of
DLCIs, buffer requirements grow accordingly.
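As a sketch of this per-DLCI behavior (subinterface numbers, DLCI numbers, class names, and rates are illustrative assumptions), each virtual circuit can be assigned its own map class, so each DLCI gets its own traffic-shaping queue with WFQ applied to it:
interface serial0
encapsulation frame-relay
frame-relay traffic-shaping
!
interface serial0.1 point-to-point
frame-relay interface-dlci 100
! assign a map class to this DLCI
class SHAPE-64K
!
interface serial0.2 point-to-point
frame-relay interface-dlci 200
class SHAPE-128K
!
map-class frame-relay SHAPE-64K
frame-relay cir 64000
frame-relay fair-queue
!
map-class frame-relay SHAPE-128K
frame-relay cir 128000
frame-relay fair-queue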
