Java solutions for distributed transactions and/or shared data in a cluster

Published 2024-08-08 23:26:56 · 6 views · 0 comments


What are the best approaches to clustering/distributing a Java server application?
I'm looking for an approach that allows you to scale horizontally by adding more application servers and more database servers.

  • What technologies (software engineering techniques or specific technologies) would you suggest to approach this type of problem?
  • What techniques do you use to design a persistence layer that scales to many readers/writers?
    Scale application transactions and scale access to shared data (the best approach is to eliminate shared data; what techniques can you apply to eliminate it?).
  • Different approaches seem to be needed depending on whether your transactions are read- or write-heavy, but I feel that if you can optimize a "write"-heavy application, it will also be efficient for "reads".

The "best" solution would allow you to write a Java application for a single node and hopefully "hide" most of the details of accessing/locking shared data.

In a distributed environment the most difficult issue always comes down to multiple transactions accessing shared data. There seem to be two common approaches to concurrent transactions.

  1. Explicit locks (which is extremely error prone and slow to coordinate across multiple nodes in a distributed system)
  2. Software transactional memory (STM), AKA optimistic concurrency, where a transaction is rolled back during commit if it discovers that shared state has changed (and the transaction can later be retried).
    Which approach scales better, and what are the trade-offs in a distributed system?
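Approach 2 can be sketched in plain Java on a single JVM. This is an illustrative simulation only (the class name and structure are my own, not from any of the products discussed): the read/compute/commit cycle of optimistic concurrency, where a failed commit-time validation plays the role of the rollback-and-retry.

```java
import java.util.concurrent.atomic.AtomicReference;

// Single-JVM sketch of optimistic concurrency: read a snapshot, compute a new
// value, and commit only if the shared state is unchanged; otherwise retry.
public class OptimisticCounter {
    private final AtomicReference<Long> state = new AtomicReference<>(0L);

    // One "transaction": increment the counter. compareAndSet plays the role
    // of commit-time validation; a failed CAS is the "rollback and retry".
    public long increment() {
        while (true) {
            Long snapshot = state.get();  // read phase
            Long proposed = snapshot + 1; // compute phase (no locks held)
            if (state.compareAndSet(snapshot, proposed)) {
                return proposed;          // commit succeeded
            }
            // Commit failed: another transaction changed the state; retry.
        }
    }

    public long get() { return state.get(); }

    public static void main(String[] args) throws InterruptedException {
        OptimisticCounter c = new OptimisticCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 4000
    }
}
```

In a distributed setting the "snapshot" becomes a version number checked at the coordinator, and a failed validation costs a network round trip per retry, which is exactly the trade-off the question asks about.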

I've been researching scaling solutions (and in general applications that provide an example of how to scale) such as:

  1. Terracotta - provides "transparent" scaling by extending the Java memory model to include distributed shared memory using Java's concurrency locking mechanism (synchronized, ReentrantReadWriteLocks).
  2. Google App Engine Java - allows you to write Java (or Python) applications that are distributed amongst "cloud" servers, where the platform decides which server handles a transaction and you use BigTable to store your persistent data (I'm not sure how it handles transactions that access shared data, or lock contention, in a way that scales effectively).
  3. Darkstar MMO Server - Darkstar is Sun's open-source MMO (massively multiplayer online) game server. They scale transactions in a thread-transactional manner, allowing a given transaction to run only for a certain amount of time before committing; if it takes too long it will roll back (kind of like software transactional memory). They've been doing research into supporting a multi-node server setup for scaling.
  4. Hibernate's optimistic locking - if you are using Hibernate, you can use its optimistic concurrency support to get software-transactional-memory-style behavior.
  5. Apache CouchDB is supposed to "scale" naturally to many reader/writer DBs in a mesh configuration. (Is there a good example of how you manage locking data or ensure transaction isolation?)
  6. JCache - scale "read"-heavy apps by caching the results of common queries; in Google App Engine you can use memcached and cache other frequently read data.
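The read-caching idea in item 6 can be sketched with a read-through cache. This is a minimal stdlib-only illustration (the class name and `loader` function are mine, not a JCache API): a `ConcurrentHashMap` stands in for a memcached/JCache client, and `computeIfAbsent` ensures the expensive query runs at most once per key under contention.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache sketch: the first lookup for a key runs the (expensive)
// query; subsequent lookups are served from memory.
public class QueryCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // the "common query" against the DB

    public QueryCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        // computeIfAbsent runs the loader at most once per key, even when
        // several reader threads miss the cache at the same time.
        return cache.computeIfAbsent(key, loader);
    }

    // Writers must invalidate (or update) the entries they touch,
    // otherwise readers keep seeing stale data.
    public void invalidate(K key) { cache.remove(key); }
}
```

Usage would look like `new QueryCache<Long, User>(id -> loadUserFromDb(id))`, where `loadUserFromDb` is a hypothetical DAO call. The hard part in a distributed setup is the invalidation path, which is exactly what JCache/memcached deployments have to get right.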

Terracotta seems to be the most complete solution in that you can "easily" modify an existing server application to support scaling (after defining @Root objects and @AutoLockRead/Write methods). The trouble is that to really get the most performance out of a distributed application, optimization for distributed systems can't be an afterthought; you have to design it with the knowledge that object access could potentially be blocked by network I/O.

To scale properly, it always seems to come down to partitioning data and load-balancing transactions such that a given "execution unit" (CPU core -> thread -> distributed application node -> DB master node) operates on its own partition of the data.

It seems that to make any app scale properly by clustering, you need to be able to partition your transactions in terms of their data-access reads/writes. What solutions have people come up with to distribute their application's data (Oracle, Google BigTable, MySQL, data warehousing), and how do you generally manage partitioned data (many write masters, with many more read DBs, etc.)?

In terms of scaling your data-persistence layer, what type of configuration scales out best for partitioning your data to many readers/many writers? (Generally I'd partition my data based on a given user, or whatever core entity is your "root" object entity, being owned by a single master DB.)
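The "root entity owned by a single master DB" idea above can be sketched as a routing function. Everything here is illustrative (the shard count and the `db-master-N` naming are assumptions, not any product's convention): each user id maps deterministically to exactly one shard, so all writes for that user land on a single owner DB.

```java
// Sketch of partitioning by a "root" entity: a stable mapping from the
// root-entity key (here, a user id) to the master shard that owns it.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) { this.shardCount = shardCount; }

    // Math.floorMod keeps the index non-negative even for negative hashes.
    public int shardFor(long userId) {
        return Math.floorMod(Long.hashCode(userId), shardCount);
    }

    public String masterDbFor(long userId) {
        return "db-master-" + shardFor(userId); // hypothetical DB node name
    }
}
```

The obvious weakness of plain modulo hashing is that changing `shardCount` remaps most keys; consistent hashing (as used by memcached clients and many distributed stores) limits that reshuffling, which matters once you want to add DB servers without a full data migration.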


Comments (6)

王权女流氓 2024-08-15 23:26:56


Thought I found a great Java clustering/distributed platform; wanted to reopen this -

Check out http://www.hazelcast.com

I ran the test programs; it is very cool, very lightweight, and simple to use. It automatically detects the cluster members in a peer-to-peer configuration. The opportunities are limitless.

成熟的代价 2024-08-15 23:26:56


Thanks for nicely summarizing all the possibilities in one place.

One technique is missing here, though: MapReduce (Hadoop). If it is possible to fit the problem into the MapReduce paradigm, it is perhaps the most widely available solution. I also wonder if the Actor-framework pattern (JetLang, Kilim, etc.) can be extended to a cluster.
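To make the commenter's point concrete, the MapReduce shape can be illustrated in a single process with plain Java streams; this is just the canonical word-count example, not Hadoop code. Hadoop runs the same three phases (map, shuffle, reduce) across many machines.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Single-process illustration of MapReduce: map each line to words, group by
// key (the "shuffle"), and reduce each group by counting.
public class WordCount {
    public static Map<String, Long> count(String[] lines) {
        return Arrays.stream(lines)
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+"))) // map
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));   // shuffle + reduce
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(new String[]{"to be or not to be"});
        System.out.println(counts.get("to")); // prints 2
    }
}
```

A problem fits this paradigm when the map step needs no shared state and the reduce step is associative, which is why it sidesteps the shared-data locking issues the question is about.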

请别遗忘我 2024-08-15 23:26:56


Don't forget Erlang's Mnesia.

Mnesia gives you things like the transactions you're used to in a normal DB, but provides real-time operation and fault tolerance. Plus, you can reconfigure things without downtime. The downside is that it's a memory-resident database, so you have to fragment really large tables. The largest table size is 4 Gb.

蓝色星空 2024-08-15 23:26:56


While Oracle Coherence and a lot of the other solutions suggested are good for sharing data, you only cited locking and STM as ways to manage state mutation in a distributed environment; those are both generally pretty poor ways to scale state management. On a different site, I recently posted the following about how to implement (for example) sequence counters:

If you're looking at a counter, then using something like a Coherence EntryProcessor will easily achieve "once-and-only-once" behavior and HA for any number of monotonically increasing sequences; here's the entire implementation:

public class SequenceCounterProcessor
        extends AbstractProcessor
    {
    public Object process(InvocableMap.Entry entry)
        {
        long l = entry.isPresent() ? (Long) entry.getValue() + 1 : 0;
        entry.setValue(l);
        return l;
        }
    }

Yup. That's it. Automatic and seamless HA, dynamic scale-out elasticity, once-and-only-once behavior, etc. Done.

The EntryProcessor is a type of distributed closure that we introduced in 2005.

As an aside, in Java 8 (not yet released), Project Lambda introduces official closure support into the language and the standard libraries.

Basically, the idea is to deliver the closure to the location of the "owner" of the data in a distributed environment. Coherence dynamically manages data ownership by using dynamic partitioning, allowing the distributed system to load balance data across the various machines and nodes that are running. In fact, by default all of this is 100% automated, so you never actually tell it where to put the data, or how much data goes where. Additionally, there are secondary (and perhaps tertiary etc.) copies of the data managed on other nodes and other physical servers, to provide high availability in case a process fails or a server dies. Again, the management of these backup copies is completely automatic and completely synchronous by default, meaning that the system is 100% HA by default (i.e. with no configuration).

When the closure arrives at the data owner, it is executed in a transactional workspace, and if the operation completes successfully then it is shipped to the backup for safe keeping. The data mutation (e.g. the result of the operation) is only made visible to the remainder of the system once the backup has been successfully made.

A few optimizations to the above include adding the ExternalizableLite & PortableObject interfaces for optimized serialization, and avoiding the serialization of the boxed long by going after the "network ready" form of the data directly:

public Object process(InvocableMap.Entry entry)
    {
    try
        {
        BinaryEntry binentry = (BinaryEntry) entry;
        long l = entry.isPresent() ? binentry.getBinaryValue()
                .getBufferInput().readLong() + 1 : 0L;
        BinaryWriteBuffer buf = new BinaryWriteBuffer(8);
        buf.getBufferOutput().writeLong(l);
        binentry.updateBinaryValue(buf.toBinary());
        return l;
        }
    catch (IOException e)
        {
        throw new RuntimeException(e);
        }
    }

And since it's stateless, why not have a singleton instance ready to go?

public static final SequenceCounterProcessor INSTANCE =
        new SequenceCounterProcessor();

Using it from anywhere on the network is as simple as a single line of code:

long l = (Long) sequences.invoke(x, SequenceCounterProcessor.INSTANCE);

Where "x" is any object or name that identifies the particular sequence counter you want to use. For more info, see the Coherence knowledge base at: http://coherence.oracle.com/

Oracle Coherence is a distributed system. Whenever you start a Coherence node, it joins with other Coherence nodes that are already running, and dynamically forms an elastic cluster. That cluster hosts data in a partitioned, highly available (HA), and transactionally consistent manner, and hosts operations (like the one I showed above) that operate on that data in a "once and only once" manner.

Furthermore, in addition to the ability to invoke any of that logic or access any of that data transparently from any Coherence node, you can also invoke any of that logic or access any of that data transparently from any process on the network (subject to authentication and authorization, of course). So this code would work from any Coherence cluster node or from any (Java / C / C++ / C# / .NET) client.

For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.

合久必婚 2024-08-15 23:26:56


Maybe these slides will be helpful. From our experience, I would recommend Oracle (Tangosol) Coherence and GigaSpaces as the most powerful data- and processing-distribution frameworks out there. Depending on the exact nature of the problem, one of those may shine. Terracotta is also quite applicable to some of the problems.
