Handling an unstable internet connection in a server-client application

What techniques can I use to manage an unstable internet connection in a server-client application? I mainly know PHP (+ Zend Framework) and am learning C# & ASP.NET MVC. I have heard that WCF/MSMQ can help, but how? Is there something PHP (which I am more familiar with) can do? It would also be good to know the .NET alternative if it is better.

Background:

Client***s*** will connect to the server database to perform CRUD operations. If the internet connection fails, this becomes impossible. How do I solve this?

The solution used now is to have localhost databases. At the end of the day, all clients upload to the server, and in the morning they download a "consolidated" database from the server. This is not foolproof, since the upload/download can still fail; given the large amount of data transferred, it actually increases the chance of failure.

UPDATE: Is there a PHP/Zend Framework/MySQL replacement for MSMQ/WCF?
1 Answer
WCF can help, because it supports various technologies for reliable message transfer.

One thing that might help is to have the clients make their data changes locally, then upload those changes to a reliable message queue. You would not upload all changes in a single transaction; you might upload ten at a time, possibly one at a time. As the uploaded messages are processed on the server, the server writes the transaction results to another queue, unique to each client. After the upload (or perhaps concurrently), the client checks that result queue to see what the outcome of each upload was. If the result was success, the client can remove that change from its local database; if it was a failure, the client should try uploading it again.
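The flow above is language-agnostic. Here is a minimal Python simulation of the protocol, a sketch only: the queue objects and the `server_process` function are hypothetical stand-ins for real reliable queues (MSMQ in the WCF world) and the real server.

```python
from collections import deque

# Client-side outbox: CRUD changes recorded locally while offline.
outbox = deque([
    {"id": 1, "op": "INSERT", "row": {"name": "alice"}},
    {"id": 2, "op": "UPDATE", "row": {"name": "bob"}},
])

# Stand-ins for the two reliable queues.
upload_queue = deque()   # client -> server
result_queue = deque()   # server -> this client (unique per client)

def server_process(msg):
    """Hypothetical server side: apply one change, report the outcome."""
    # A real server would run the CRUD against the central DB here.
    ok = msg["op"] in ("INSERT", "UPDATE", "DELETE")
    result_queue.append({"id": msg["id"], "ok": ok})

# Client uploads changes one at a time (not one big transaction).
while outbox:
    msg = outbox[0]                    # peek; keep it until acknowledged
    upload_queue.append(msg)
    server_process(upload_queue.popleft())
    ack = result_queue.popleft()
    if ack["ok"] and ack["id"] == msg["id"]:
        outbox.popleft()               # success: safe to delete local copy
    # else: leave the change in the outbox and retry later
```

The key design point is that the client only deletes its local copy of a change after seeing a positive acknowledgement in its result queue, so a dropped connection mid-upload never loses data, at worst the same change is retried.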
Of course, you should always be careful that your attempts at error recovery don't make things worse. Too much retry traffic on a bad link may well generate more traffic, which may itself need recovery, and so on.

And, of course, the ultimate solution is to move toward links that are more reliable. Not necessarily faster, just more reliable.
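Regarding the UPDATE in the question: there is no drop-in MSMQ equivalent in the PHP/MySQL stack, but the reliable queue in the pattern above can itself be an ordinary database table. Below is a minimal sketch of that idea; it uses Python's bundled sqlite3 purely as a stand-in for MySQL, and the table and column names are illustrative, not a standard.

```python
import sqlite3

# A database table acting as a persistent message queue -- a common
# PHP/MySQL substitute for MSMQ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE message_queue (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        client  TEXT NOT NULL,
        payload TEXT NOT NULL,
        status  TEXT NOT NULL DEFAULT 'pending'  -- pending / done
    )
""")

def enqueue(client, payload):
    """Client side: persist one change for later upload/processing."""
    conn.execute(
        "INSERT INTO message_queue (client, payload) VALUES (?, ?)",
        (client, payload))

def dequeue_pending():
    """Server side: claim the oldest pending message, or None."""
    row = conn.execute(
        "SELECT id, client, payload FROM message_queue "
        "WHERE status = 'pending' ORDER BY id LIMIT 1").fetchone()
    if row:
        conn.execute(
            "UPDATE message_queue SET status = 'done' WHERE id = ?",
            (row[0],))
    return row

enqueue("client-1", '{"op": "INSERT", "name": "alice"}')
enqueue("client-1", '{"op": "UPDATE", "name": "bob"}')
first = dequeue_pending()
```

In a real MySQL deployment with multiple consumers you would need to wrap the claim step in a transaction (e.g. `SELECT ... FOR UPDATE` on InnoDB) so two workers cannot grab the same row; that concurrency handling is omitted here for brevity.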