MongoDB C# driver not releasing connections, then erroring out



I'm using the latest versions of MongoDB (on a Win 64 server) and the C# driver. I have a Windows service doing 800 reads and updates per minute. After a few minutes the number of threads in use climbs above 200, and then every single MongoDB call fails with this error:

System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
   at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)

I have an index on the fields being queried, so that's not the issue. Here is the code for the read:

public static UserUpdateMongo Find(int userId, long deviceId)
{
    return Collection().Find(
        Query.And(
            Query.EQ("UserId", userId),
            Query.EQ("DeviceId", deviceId))).FirstOrDefault();
}
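(For context, a compound index matching that query can be created with the 1.x driver's builders, roughly as below; this is a sketch that assumes the Collection() helper from the snippet above.)

// Compound index covering both fields of the query above.
// EnsureIndex is idempotent, so it is safe to call at startup.
Collection().EnsureIndex(IndexKeys.Ascending("UserId", "DeviceId"));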

I instantiate the connection like so:

var settings = new MongoServerSettings
{
    Server = new MongoServerAddress(segments[0], Convert.ToInt32(segments[1])),
    MaxConnectionPoolSize = 1000
};
Server = MongoServer.Create(settings);

Am I doing something wrong or is there an issue with the C# driver? Help!!


3 Answers

老子叫无熙 2024-12-12 00:20:26


The C# driver has a connection pool, and the maximum size of the connection pool is 100 by default. So you should never see more than 100 connections to mongod from a single C# client process. The 1.1 version of the C# driver did have an occasional problem under heavy load, where an error on one connection could result in a storm of disconnects and connects. You would be able to tell if that was happening to you by looking at the server logs, where a log entry is written every time a connection is opened or closed. If so, can you try the 1.2 C# driver that was released this week?

You should not have needed to create a queue of pending updates. The connection pool acts as a queue of sorts by limiting the number of concurrent requests.

Let me know if you can find anything in the server logs, and if there is anything further I can help you with.
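For reference, here is a minimal sketch of pinning those pool settings explicitly with the 1.x API (host, port, and the timeout value are placeholders; WaitQueueTimeout controls how long a caller waits for a free pooled connection):

var settings = new MongoServerSettings
{
    Server = new MongoServerAddress("localhost", 27017), // placeholder host/port
    MaxConnectionPoolSize = 100,                         // the 1.x default mentioned above
    WaitQueueTimeout = TimeSpan.FromSeconds(30)          // placeholder wait for a pooled connection
};

// Create one MongoServer per process and share it across threads;
// its connection pool is what throttles concurrent requests.
var server = MongoServer.Create(settings);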

嘿看小鸭子会跑 2024-12-12 00:20:26


The solution was to stop saving records on each individual thread and to start adding them to a "pending to save" list in memory, with a single separate thread that handles all saves to MongoDB synchronously. I don't know why the concurrent calls caused the C# driver to trip up, but it is working beautifully now. Here is some sample code in case others run into this problem:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading;

public static class UserUpdateSaver
{
    // Shared buffer of updates waiting to be written; always access it under SyncRoot.
    private static readonly List<UserUpdateView> PendingUserUpdates = new List<UserUpdateView>();
    private static readonly object SyncRoot = new object();

    public static void Initialize()
    {
        var saveUserUpdatesTime = Convert.ToInt32(ConfigurationBL.ReadApplicationValue("SaveUserUpdatesTime"));
        LogWriter.Write("Setting up timer to save user updates every " + saveUserUpdatesTime + " seconds",
            LoggingEnums.LogEntryType.Warning);
        var worker = new BackgroundWorker();
        worker.DoWork += delegate
        {
            while (true)
            {
                // Process pending user updates every x seconds.
                Thread.Sleep(saveUserUpdatesTime * 1000);
                ProcessPendingUserUpdates();
            }
        };
        worker.RunWorkerAsync();
    }

    public static void AddUserUpdateToSave(UserUpdateView userUpdate)
    {
        lock (SyncRoot)
        {
            PendingUserUpdates.Add(userUpdate);
        }
    }

    private static void ProcessPendingUserUpdates()
    {
        // Snapshot the pending list under the lock so adds on other threads
        // cannot mutate it while we enumerate the copy.
        List<UserUpdateView> pendingUserUpdates;
        lock (SyncRoot)
        {
            pendingUserUpdates = new List<UserUpdateView>(PendingUserUpdates);
        }

        if (pendingUserUpdates.Count > 0)
        {
            var startDate = DateTime.Now;

            foreach (var userUpdate in pendingUserUpdates)
            {
                try
                {
                    UserUpdateStore.Update(userUpdate);
                }
                catch (Exception exc)
                {
                    LogWriter.WriteError(exc);
                }
                finally
                {
                    // Remove the item whether or not the save succeeded,
                    // so a poison record cannot wedge the queue.
                    lock (SyncRoot)
                    {
                        PendingUserUpdates.Remove(userUpdate);
                    }
                }
            }

            var duration = DateTime.Now.Subtract(startDate);
            LogWriter.Write(String.Format("Processed {0} user updates in {1} seconds",
                pendingUserUpdates.Count, duration.TotalSeconds), LoggingEnums.LogEntryType.Warning);
        }
        else
        {
            LogWriter.Write("No user updates to process", LoggingEnums.LogEntryType.Warning);
        }
    }
}
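To use the sample, the service calls Initialize() once at startup and then enqueues from any thread (names as defined above):

UserUpdateSaver.Initialize();                 // once, when the service starts
UserUpdateSaver.AddUserUpdateToSave(update);  // from any request/worker thread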
天冷不及心凉 2024-12-12 00:20:26


Have you heard about message queueing?
You could put up a bunch of boxes to handle such a load and use a message queueing mechanism to save your data to MongoDB.
In this case, though, your message queue must be able to run concurrent publish/subscribe.
A free message queue (very good, in my opinion) is MassTransit with RabbitMQ.

The workflow would be:
1. Publish your data to the message queue;
2. Once it's there, launch as many boxes as you want with subscribers that save and process your Mongo data.

This approach works well if you need to scale.
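A minimal sketch of that workflow with MassTransit's modern API; the queue name, message contract, and the persistence comment are assumptions for illustration, not from the original post:

using System.Threading.Tasks;
using MassTransit;

// Hypothetical message contract; mirror whatever fields need to be persisted.
public class UserUpdateSubmitted
{
    public int UserId { get; set; }
    public long DeviceId { get; set; }
}

// Each subscriber box runs consumers like this one.
public class UserUpdateConsumer : IConsumer<UserUpdateSubmitted>
{
    public Task Consume(ConsumeContext<UserUpdateSubmitted> context)
    {
        // Persist to MongoDB here, e.g. via the collection helper from the question.
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost", "/", h =>
            {
                h.Username("guest"); // placeholder credentials
                h.Password("guest");
            });
            cfg.ReceiveEndpoint("user-updates", e => e.Consumer<UserUpdateConsumer>());
        });

        await bus.StartAsync();
        // Producers just publish; RabbitMQ buffers the writes instead of an in-memory list.
        await bus.Publish(new UserUpdateSubmitted { UserId = 1, DeviceId = 42L });
        await bus.StopAsync();
    }
}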
