Intel® IXP400 Software
Access-Layer Components: Ethernet Access (IxEthAcc) API
Programmer's Guide, IXP400 Software Version 2.0, April 2005
Document Number: 252539, Revision: 007
This is configured using the ixEthAccRxSchedulingDisciplineSet() function.
Rx FIFO Priority (QoS Mode)
IxEthAcc can support the ability to prioritize frames based upon 802.1Q VLAN data on the receive
path. This feature requires a compatible NPE microcode image with VLAN/QoS support. Enabling
this support requires a two-part process: IxEthDB must be properly configured with support for
this feature, and the Rx port in IxEthAcc must be configured using the
ixEthAccRxSchedulingDisciplineSet() function.
In receive QoS mode, IxEthAcc supports up to four IxQMgr priority receive queues in
configurations involving only NPE-B and/or NPE-C. If NPE-A is also configured for Ethernet (by
selecting an Ethernet-enabled NPE microcode image for it), then eight IxQMgr receive queues
may be used. The NPE microcode will detect 802.1Q VLAN priority data within an incoming
frame, or insert this data into a frame if configured to do so by IxEthDB. The NPE will then map
the priority data to one of up to eight traffic classes and place the IX_OSAL_MBUF header for
each frame into its respective IxQMgr queue. IxEthAcc will service all frames in higher-priority
queues before servicing any entries in lower-priority queues; consequently, lower-priority queues
can be starved indefinitely.
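The strict-priority servicing order described above can be sketched as follows. This is a minimal, self-contained illustration (not IXP400 code — the queue type and function are invented for this sketch): every frame in a higher-priority queue is drained before any entry in a lower-priority queue is touched, which is also why low-priority queues can starve under sustained high-priority load.

```c
#define NUM_CLASSES 8    /* up to eight traffic classes / IxQMgr Rx queues */
#define QUEUE_DEPTH 16

typedef struct {
    int count;           /* frames currently waiting in this queue */
} RxQueue;

/* Drain all queues in strict-priority order (highest class index first),
 * recording which class each serviced frame came from.
 * Returns the number of frames serviced. */
static int serviceQueues(RxQueue q[NUM_CLASSES], int servedClass[], int maxOut)
{
    int n = 0;
    for (int cls = NUM_CLASSES - 1; cls >= 0; cls--) {
        while (q[cls].count > 0 && n < maxOut) {
            q[cls].count--;
            servedClass[n++] = cls;
        }
    }
    return n;
}
```

For example, with two frames waiting in the class-1 queue and three in the class-6 queue, the resulting service order is 6, 6, 6, 1, 1 — the class-1 frames are not touched until class 6 is empty.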
The actual impact on system performance of the Rx FIFO priority mode is heavily influenced by
the amount of traffic, priority level of the traffic, how often IxQMgr queues are serviced, and how
many IxQMgr queues have entries during the time of servicing by the dispatcher loop.
If the IxEthAccPortMultiBufferRxCallback() function is used, it will return all currently available
entries from all EthRx queues. If there are two entries in the Priority 3 EthRx queue and two entries
in the Priority 1 EthRx queue, then four entries will be returned with the multi-buffer callback.
Enabling the Rx QoS mode generally involves the following process:
1. Initialize IxEthDB.
2. Enable VLAN/QoS support on the desired ports.
3. Download the appropriate QoS-to-traffic-class priority map (or use the default map, which is 802.1P compliant).
4. Initialize IxEthAcc and set the Rx scheduling discipline.
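The enabling sequence above can be sketched in C. The IXP400 entry points are stubbed with local functions here so that the sketch is self-contained and the ordering can be checked; on real hardware the corresponding calls come from the IxEthDB and IxEthAcc APIs (e.g. ixEthAccRxSchedulingDisciplineSet() for the final step), and the stub names and parameters are this sketch's assumptions, not verified signatures.

```c
enum Step {
    STEP_ETHDB_INIT = 1,   /* 1. initialize IxEthDB                          */
    STEP_VLAN_QOS,         /* 2. enable VLAN/QoS support on the port         */
    STEP_PRIORITY_MAP,     /* 3. download (or keep) the QoS->class map       */
    STEP_ETHACC_INIT,      /* 4. initialize IxEthAcc                         */
    STEP_RX_DISCIPLINE     /* 5. set the Rx scheduling discipline (QoS mode) */
};

#define MAX_STEPS 8
static int callLog[MAX_STEPS];
static int nSteps = 0;

static void record(int step) { callLog[nSteps++] = step; }

/* Stubs standing in for the real IxEthDB/IxEthAcc calls. */
static void ethDbInitStub(void)           { record(STEP_ETHDB_INIT); }
static void vlanQosEnableStub(int port)   { (void)port; record(STEP_VLAN_QOS); }
static void priorityMapSetStub(void)      { record(STEP_PRIORITY_MAP); }
static void ethAccInitStub(void)          { record(STEP_ETHACC_INIT); }
static void rxDisciplineSetStub(int port) { (void)port; record(STEP_RX_DISCIPLINE); }

/* Perform the configuration steps in the order the text gives them:
 * IxEthDB first, IxEthAcc and the Rx discipline last. */
static void enableRxQos(int port)
{
    ethDbInitStub();
    vlanQosEnableStub(port);
    priorityMapSetStub();
    ethAccInitStub();
    rxDisciplineSetStub(port);
}
```

The key design point the text implies is the ordering: IxEthDB must be fully configured for VLAN/QoS before the Rx discipline is switched, since the NPE needs the priority map in place to classify incoming frames.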
Freeing Buffers
Once IxEthAcc calls the registered user-level receive callback with the receive
IX_OSAL_MBUF, "ownership" of the buffer is transferred to the user of the access component;
IxEthAcc will not free the buffer. Should a chain of IX_OSAL_MBUFs be received, the head of
the buffer chain is passed to the Rx callback.
Buffers can also be freed by disabling the port with the IxEthAccPortDisable() function. This
returns all Rx buffers to the registered Rx callback, which may then de-allocate the
IX_OSAL_MBUFs to free memory.
Recycling Buffers
Buffers received (chained or unchained) on the Rx path can be used without modification in the Tx
path. Rx and TxEnetDone buffers (chained or unchained) should have the length of each cluster
reset to the cluster's original size before being re-used in the ixEthAccPortRxFreeReplenish() function.
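The length-reset step above can be sketched as follows. The struct here is a local stand-in (not the real IX_OSAL_MBUF, whose fields are accessed through OSAL macros): it keeps only the three pieces of state the text talks about — the chain link, the current valid length, and the cluster's original size — and walks the chain restoring each length before the buffers would be handed back for replenishment.

```c
#include <stddef.h>

/* Simplified stand-in for a chained receive buffer. */
typedef struct MBuf {
    struct MBuf *next;     /* next buffer in the chain, NULL at the tail */
    int len;               /* bytes currently valid in this cluster      */
    int clusterSize;       /* the cluster's original (allocated) size    */
} MBuf;

/* Walk the chain and restore every cluster's length to its original size,
 * as required before re-using the buffers in ixEthAccPortRxFreeReplenish(). */
static void resetChainLengths(MBuf *head)
{
    for (MBuf *m = head; m != NULL; m = m->next)
        m->len = m->clusterSize;
}
```

After the Rx path has shortened the lengths to the received frame's actual size, a single pass like this makes the chain safe to recycle into the Rx free pool.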