Flow-based routing and OpenFlow


This may not be the typical Stack Overflow question.

A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing switches and routers from Cisco, HP, and others. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch port, using only its knowledge of the switch hierarchy (no routers). The solution is supposed to save enterprises money and simplify networking.

Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.


Comments (6)

作死小能手 2024-10-16 06:26:56


OpenFlow is a research project from Stanford University led by professor Nick McKeown. In the original OpenFlow research paper, the goal of OpenFlow was to give researchers a way "to run experimental protocols in the networks they use every day." For years, networking researchers have faced an almost impossible task deploying and evaluating their ideas on real networks with real Ethernet switches and IP routers. The difficulty is that real switches and routers from companies like Cisco, HP, and others are all closed, proprietary boxes that implement standard "protocols" like Ethernet spanning tree and OSPF. There are business reasons why Cisco and HP won't let you run software on their switches and routers; there is no technical reason. OpenFlow was invented to solve a people problem: if Cisco is not willing to let you run code on their switch, maybe they can at least provide a very narrow interface to let you remotely configure their switch, and that narrow interface is called OpenFlow.
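
To make the "narrow interface" concrete, here is a minimal sketch of a controller pushing a rule into a switch's flow table over OpenFlow. It assumes the Ryu controller framework and OpenFlow 1.3; both are illustrative choices on my part, not something the answer specifies:

```python
# Minimal OpenFlow controller sketch (assumes Ryu and OpenFlow 1.3).
# When a switch connects, install a lowest-priority "table-miss" rule
# that sends unmatched packets up to the controller for a decision.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()  # match every packet
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

The point is the narrowness: the controller can only install, modify, and delete match/action rules; everything else about the switch stays closed.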

To my knowledge, more than a dozen companies are currently implementing OpenFlow support for their switches. Some, like HP, are only providing OpenFlow software for research purposes. Others, like NEC, are actually offering commercial support.

For academic researchers that want to evaluate new routing protocols in real networks, OpenFlow is a huge win. For switch vendors, it is less clear if OpenFlow support will help, hurt, or have no effect in the long run. After all, the academic research market is very small.

The reason why OpenFlow is most often discussed in the context of enterprise networks is that OpenFlow grew out of a previous research project called Ethane that used OpenFlow's mechanism of remotely programming switches in an enterprise network in order to centralize a security policy. Ethane, and by extension OpenFlow, has led directly to two startup companies: Nicira, founded by Martin Casado, and Big Switch Networks, founded by Guido Appenzeller. It would be easier to implement an Ethane-like system if all of the switches in the network supported OpenFlow.

Closely related to enterprise networks are data center networks, the networks that interconnect thousands to tens of thousands of servers in companies such as Google, Facebook, Microsoft, Amazon.com, and Yahoo!. One problem with Ethernet is that it does not scale to this many servers on the same Layer 2 network. We attempted to solve this problem in a research project called PortLand. We used OpenFlow to facilitate programming the switches from a central controller, which we called a Fabric Manager. We released the PortLand source code as open source.

However, we also found a limitation to OpenFlow's functionality. In another data center networking research project called Helios, we were not able to use OpenFlow because it did not provide a mechanism for bonding multiple switch ports into a Link Aggregation Group (LAG). Presumably one could extend the OpenFlow specification indefinitely until all possible switch features become exposed.

There are other networks as well, such as Internet access networks, Internet backbones, home networks, wireless networks, cellular networks, etc. Researchers are trying to see where OpenFlow fits into all of these markets. What it really comes down to is the question, "what problem does OpenFlow solve?" Ethane makes a case for enterprise networks, but I have not yet seen a compelling case for any other type of network. OpenFlow might be the next big thing, or it might end up being a case of "don't solve a people problem with a technical solution."

や莫失莫忘 2024-10-16 06:26:56


In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.

  1. It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.

  2. There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high-performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols, using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a Clos or fat-tree topology enables a two-stage mesh with a large number of ports. The math is this: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n²/2, and the number you can connect in a three-stage mesh is n³/4 (see the worked example after this list). While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most Ethernet switch vendors have some sort of multi-chassis link aggregation protocol that gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of link aggregation schemes allocate traffic so that all the frames of any given flow take the same path. This is done to minimize out-of-order frames, so they don't get dropped by some higher-level protocol. They could have chosen to call this "flow-based multiplexing", but instead they call it "link aggregation".

  3. Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
  4. People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
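
As a quick sanity check on the formulas in point 2, here is a worked example using the 64-port single-chip switches from point 1 (a sketch of my own; the helper names are illustrative):

```python
# Worked example for the fat-tree formulas above: with n-port switch
# chips, a two-stage Clos/fat-tree mesh connects n^2/2 devices and a
# three-stage mesh connects n^3/4.

def two_stage_devices(n: int) -> int:
    """Devices connectable by a two-stage mesh of n-port switches."""
    return n ** 2 // 2

def three_stage_devices(n: int) -> int:
    """Devices connectable by a three-stage mesh of n-port switches."""
    return n ** 3 // 4

n = 64  # the single-chip 10GbE switches mentioned in point 1
print(two_stage_devices(n))    # 2048
print(three_stage_devices(n))  # 65536
```

So a mesh of commodity 64-port chips can, in principle, interconnect tens of thousands of servers, which is the scale of the data centers discussed in the first answer.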

Summarizing the points above: meshes of Ethernet switches with an external management plane, carrying multipath traffic where flows are kept in order, are evolutionary rather than revolutionary, and are likely to become mainstream. At least one major company, Juniper, has made a big public statement endorsing this approach. I'd call all of this "flow-based routing".
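
To make the "flows are kept in order" point concrete, here is a minimal sketch of the hashing trick that link aggregation schemes use: hash the fields that identify a flow and use the result to pick one member link, so every frame of the flow takes the same path. The field choice and hash function are my own illustration, not any vendor's actual scheme:

```python
# Minimal sketch of flow-to-link hashing in link aggregation
# (illustrative only; real switches hash in hardware with their own
# field selections).
import hashlib

def pick_lag_member(src_ip: str, dst_ip: str, proto: int,
                    src_port: int, dst_port: int, num_links: int) -> int:
    """Deterministically map a flow's 5-tuple to one LAG member link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Every frame of this TCP flow hashes to the same member link, so
# frames arrive in order even though the LAG has four parallel paths.
link = pick_lag_member("10.0.0.1", "10.0.0.2", 6, 49152, 80, num_links=4)
```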

Juniper and other vendors' proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF) was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty-plus members of the ONF will be celebrating their first anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a ways to go before it is widely adopted, it has real momentum.

迷乱花海 2024-10-16 06:26:56


@Nathan: OpenFlow 1.1 actually adds some primitives that enable the use of multiple links via the Multipath Proposal.
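
For context, the 1.1 mechanism behind that proposal is the group table. Here is a minimal sketch of a "select" group that spreads flows across two ports, again assuming Ryu and OpenFlow 1.3 (which carries the 1.1 group table forward); `datapath` would come from a switch-connect handler like the one sketched in the first answer:

```python
# Sketch: a "select" group load-balances flows across ports 1 and 2
# (assumes Ryu and OpenFlow 1.3; illustrative, not from the comment).
def install_select_group(datapath, group_id=1):
    ofp, parser = datapath.ofproto, datapath.ofproto_parser
    buckets = [
        # The switch hashes each flow to one bucket, so all frames of
        # a given flow keep exiting the same port.
        parser.OFPBucket(weight=50, actions=[parser.OFPActionOutput(1)]),
        parser.OFPBucket(weight=50, actions=[parser.OFPActionOutput(2)]),
    ]
    datapath.send_msg(parser.OFPGroupMod(datapath, ofp.OFPGC_ADD,
                                         ofp.OFPGT_SELECT, group_id,
                                         buckets))
```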

够钟 2024-10-16 06:26:56


More context on SDN, discussing the IETF's SDN initiative and the ONF's OpenFlow; working in conjunction, they are a powerful combination: http://bit.ly/A8xYso

冰火雁神 2024-10-16 06:26:56


Nathan, excellent historical account and overview of OpenFlow. Thanks!

You've hit on the points I've been wrapping my head around as to why OpenFlow might not be widely adopted. Since it was designed to be open, to give researchers the ability to run experimental protocols, and not necessarily to be "compatible with" the big players (Cisco, HP, etc.), it puts itself into a niche (although potentially big) market; more on this later. And as you've stated, it has received some adoption in "cloud data centers" (CDCs), e.g. Google, Facebook, etc., because they need to exploit experimental protocols to gain a competitive advantage or optimize for their applications.

As you've stated, some switch vendors have added OpenFlow capability to capitalize on the niche need in academia and potentially sell into the CDCs: Google, Facebook. This is potentially a big market (or a bubble, if you're pessimistic).

The problem that I see is that the majority of the market (80% or more) is enterprise IT data centers. The requirement here is stable, compatible networking. Open and less expensive would be nice, but not at the cost of the former.

One could imagine a day when corporate IT is partially or completely cloud-sourced, with QoS maintained by the cloud provider. In that case, experimental protocols could be leveraged to provide a competitive advantage in speed or QoS, and OpenFlow could play a more dominant role. I personally think this scenario is many years off.

So the conclusion I come to is that, other than in research and perhaps the CDCs (Google, Facebook), the market is pretty small. I suppose that if researchers use OpenFlow to come up with a better protocol for, say, link aggregation or congestion management, then eventually Cisco and HP will provide those in their standard offerings, because their customers will demand it. So OpenFlow could be a market influencer (via the research community), but it would not be a market disruptor.

Do you agree with my conclusions? Thanks for your input.
