FPGA Implementation of a Network Adapter Supporting Virtualization and Bandwidth Sharing
Published: 2019-04-07 16:16
【Abstract】: With the rapid development of Ethernet technology, 10 Gigabit Ethernet has matured. At the same time, CPU performance keeps improving and the bandwidth of the PCI Express bus keeps growing, so deploying 10 Gigabit Ethernet on server platforms as a network adapter attached over the PCI Express bus has become practical. Meanwhile, since the concept of cloud computing was proposed, a cloud-computing boom has swept across industries: many IT enterprises and telecom operators are moving into cloud computing and have launched their own related services. As virtualization technology develops and cloud computing grows in scale, the service workload in data centers increases daily and the number of tenants grows with it, and traditional data-center networks have exposed many limitations, chiefly in scalability and in the rational allocation of resources. On scalability, expanding a network once it has grown large is costly and degrades the quality of service of the existing network. On allocation, hardware resources such as CPU and memory are allocated and managed rationally, but network bandwidth is still shared in the traditional "best effort" manner: congestion is frequent, and tenants face unfair bandwidth allocation. Allocating network resources has therefore become an urgent and difficult problem. Although many related technical solutions have been proposed, each has its own limitations, and so far no perfect solution exists. To address these problems, this thesis proposes a solution in which the network adapter controls transmit bandwidth jointly with the rack switch, solving the bandwidth-sharing problem to a certain extent, and implements the hardware of the server-side network adapter in the data-center network topology.
The hardware of the network adapter is implemented on an FPGA, using Altera's Stratix IV 530 chip. The main techniques used in the implementation are as follows: the hardware communicates with the CPU over the PCI Express bus using DMA; the network interface is 10 Gigabit Ethernet; and the virtual-machine queues are scheduled with deficit round-robin (DRR). The main results of this design are as follows. Overall, while the bandwidth-allocation problem is solved to a certain extent, the scalability problem is also handled well: one server supports 16 virtual machines, and the number of virtual machines belonging to a tenant can be allocated dynamically in software. In detail, on top of PCI Express 2.0 support, a high-performance DMA engine is adopted, and the DMA memory-read rate in particular is greatly improved to meet the design's requirements; the MAC layer of the 10 Gigabit Ethernet interface is designed and implemented; and deficit round-robin scheduling is implemented on the FPGA. Development proceeded by functional simulation first and on-board testing afterwards, and the final design was implemented and verified.
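The deficit round-robin scheduling named above is implemented in FPGA logic in the thesis; the sketch below is only a software illustration of the algorithm's behavior, with hypothetical queue contents and quantum, not the hardware design. Each VM queue accumulates a byte quantum per round and may transmit packets only while its deficit counter covers the packet length, which is what gives per-queue bandwidth fairness.

```python
from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """Serve packet queues with deficit round-robin (DRR).

    queues: list of deques holding packet lengths in bytes.
    quantum: bytes of credit granted to each backlogged queue per round.
    Returns (queue_index, packet_length) tuples in transmit order.
    """
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0              # idle queues carry no credit forward
                continue
            deficits[i] += quantum           # grant this round's credit
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt           # spend credit on the packet
                sent.append((i, pkt))
    return sent

# Three hypothetical VM queues with different packet sizes; quantum = 1500 B.
queues = [deque([1500, 1500]), deque([500, 500, 500]), deque([3000])]
order = deficit_round_robin(queues, quantum=1500, rounds=2)
# The 3000 B packet must wait two rounds to accumulate enough deficit.
```

Over time each backlogged queue receives roughly `quantum / sum(quanta)` of the link, regardless of packet sizes, which is why DRR suits per-VM fairness on a shared 10 GbE port.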
【Degree-granting institution】: University of Electronic Science and Technology of China
【Degree level】: Master's
【Year conferred】: 2014
【CLC classification】: TP393.11; TP393.05
Article ID: 2454218