Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.

Features
========

Checksum offload
----------------

The netvsc driver supports checksum offload as long as the
Hyper-V host version does. Windows Server 2016 and Azure
support checksum offload for TCP and UDP for both IPv4 and
IPv6. Windows Server 2012 only supports checksum offload for TCP.
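
The current offload state can be inspected and changed at runtime
with ethtool. A brief example, assuming the synthetic interface is
named eth0 (the name may differ on your system):

To show the current checksum offload settings:
    ethtool -k eth0 | grep checksum

To enable transmit and receive checksum offload:
    ethtool -K eth0 tx on rx on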

Receive Side Scaling
--------------------

Hyper-V supports receive side scaling. For TCP & UDP, packets can
be distributed among available queues based on IP address and port
number.
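
The number of queues is exposed through the ethtool channel
parameters. A short sketch, again assuming the interface is named
eth0; the upper limit depends on the host and driver version:

To show the current number of channels:
    ethtool -l eth0

To request four combined channels:
    ethtool -L eth0 combined 4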

For TCP & UDP, the hash level can be switched between L3 and L4 with
the ethtool command. TCP/UDP over IPv4 and v6 can be set differently.
The default hash level is L4. We currently only allow switching the
TX hash level from within the guests.

On Azure, fragmented UDP packets have a high loss rate with L4
hashing. Using L3 hashing is recommended in this case.

For example, for UDP over IPv4 on eth0:

To include UDP port numbers in hashing:
    ethtool -N eth0 rx-flow-hash udp4 sdfn

To exclude UDP port numbers in hashing:
    ethtool -N eth0 rx-flow-hash udp4 sd

To show UDP hash level:
    ethtool -n eth0 rx-flow-hash udp4
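
Since IPv4 and IPv6 are configured independently, the same commands
apply to IPv6 with the udp6 flow type, for example:

To exclude UDP port numbers from IPv6 hashing:
    ethtool -N eth0 rx-flow-hash udp6 sd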

Generic Receive Offload, aka GRO
--------------------------------

The driver supports GRO and it is enabled by default. GRO coalesces
like packets and significantly reduces CPU usage under heavy Rx
load.
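
GRO can be checked and, if necessary, disabled at runtime with
ethtool. For example, assuming the interface is named eth0:

To show whether GRO is enabled:
    ethtool -k eth0 | grep generic-receive-offload

To disable GRO:
    ethtool -K eth0 gro off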

SR-IOV support
--------------

Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
is enabled in both the vSwitch and the guest configuration, then the
Virtual Function (VF) device is passed to the guest as a PCI
device. In this case, both a synthetic (netvsc) and VF device are
visible in the guest OS and both NICs have the same MAC address.

The VF is enslaved by the netvsc device. The netvsc driver will
transparently switch the data path to the VF when it is available
and up.

Network state (addresses, firewall, etc.) should be applied only to
the netvsc device; the slave device should not be accessed directly
in most cases. The exceptions are cases where a special queue
discipline or flow direction is desired; these should be applied
directly to the VF slave device.
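
For instance, a queue discipline can be attached to the VF slave
device with tc. In this sketch the VF is assumed to appear as
enP2p0s2; the actual name varies from system to system:

To attach the fq queue discipline to the VF slave device:
    tc qdisc add dev enP2p0s2 root fq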

Receive Buffer
--------------

Packets are received into a receive area which is created when the
device is probed. The receive area is broken into MTU-sized chunks
and each chunk may contain one or more packets. The number of receive
sections may be changed via ethtool Rx ring parameters.
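
The ring parameters can be shown and adjusted with ethtool. For
example, assuming the interface is named eth0 (1024 is an arbitrary
example value; the accepted range depends on the driver and host):

To show the current ring parameters:
    ethtool -g eth0

To change the number of receive sections:
    ethtool -G eth0 rx 1024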

There is a similar send buffer which is used to aggregate packets for
sending. The send area is broken into chunks of 6144 bytes, and each
section may contain one or more packets. The send buffer is an
optimization; the driver will fall back to a slower method to handle
very large packets or when the send buffer area is exhausted.
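
On kernels where the driver exposes the send buffer through the Tx
ring parameter, it can be tuned the same way. A sketch, assuming eth0
and an arbitrary example value; whether the Tx parameter is
adjustable depends on the driver version:

To change the number of send sections:
    ethtool -G eth0 tx 192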