I have about 50 web-sites, load-balanced across 5 web-servers. They all use Enterprise Library Caching, and access the same Caching database. The items in the Caching database are refreshed every few hours, using an ICacheItemRefreshAction implementation.
I want to guarantee that only one web-site ever refreshes the cache, by putting the refresh code in a critical section.
- If the web-sites were running in a single app-pool on a single server, I could use a lock().
- If the web-sites were running in separate app-pools on a single server, I could use a Mutex (see the sketch below).
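For example, something like this for the single-server, multiple-app-pool case (the mutex name is just illustrative):

using System;
using System.Threading;

public static class SingleServerLock
{
    // A named Mutex is machine-wide, so it can synchronise separate
    // app-pools on one server -- but it cannot span multiple web-servers.
    public static void RefreshWithMutex(Action refresh)
    {
        using (Mutex mutex = new Mutex(false, @"Global\CacheRefreshLock"))
        {
            mutex.WaitOne();
            try
            {
                refresh(); // the cache-refresh work goes here
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}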
However, these will not ensure the critical section across multiple web-servers.
Currently, I am creating a new key in the caching database to act as a mutex. This will generally work, but I can see a slim chance that 2 processes could enter the critical section.
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

public class TakeLongTimeToRefresh : ICacheItemRefreshAction
{
    #region ICacheItemRefreshAction Members

    public void Refresh(string removedKey, object expiredValue, CacheItemRemovedReason removalReason)
    {
        string lockKey = "lockKey";
        ICacheManager cm = CacheFactory.GetCacheManager();
        if (!cm.Contains(lockKey))
        {
            Debug.WriteLine("Entering critical section");
            // Add a lock-key which will never expire, for synchronisation.
            // I can see a small window of opportunity for another process to
            // enter the critical section here...
            cm.Add(lockKey, lockKey,
                   CacheItemPriority.NotRemovable, null,
                   new NeverExpired());
            object newValue = SomeLengthyWebserviceCall();
            cm.Remove(removedKey);
            Utilities.AddToCache(removedKey, newValue);
            cm.Remove(lockKey); // was "lockkey", which never matched and so never released the lock
        }
    }

    #endregion
}
Is there a way of having a guaranteed critical section to ensure I don't call the web-service twice?
EDIT: I should add that I can't use a shared file, as the deployment policies will prevent it.
You have to involve some external lock acquisition common to all of them. For example, a table t in SQL Server with one row and one lock field, where you would acquire the lock with something like:

UPDATE t SET lock_field = 1 WHERE lock_field = 0

Check the rows affected: if it is 1, you have the lock; release it by updating the field back to 0. This essentially piggybacks on SQL Server's row lock: if two processes start at the same time, only one will gain the U lock after the S lock, while the other blocks and subsequently sees 0 rows affected (because the first transaction already flipped the field to 1).
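A minimal sketch of how each web-site could use that flag from C# (the table and column names are illustrative, not fixed):

using System.Data.SqlClient;

public static class DbRowLock
{
    // Try to flip the flag from 0 to 1; exactly one concurrent caller
    // sees 1 row affected and thereby wins the lock.
    public static bool TryAcquire(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE t SET lock_field = 1 WHERE lock_field = 0", conn))
        {
            conn.Open();
            return cmd.ExecuteNonQuery() == 1;
        }
    }

    // Release by flipping the flag back to 0 so the next refresh can run.
    public static void Release(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE t SET lock_field = 0", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

A caller would then wrap the refresh as: if (DbRowLock.TryAcquire(cs)) { try { /* refresh */ } finally { DbRowLock.Release(cs); } } -- so the lock is released even if the web-service call throws.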
I suggest you move the logic for creating/returning a lock handle into the database and combine the two steps there; that will guarantee that exactly one process holds the lock at any time.

So the database could have a stored procedure which you ask for the lock, and it will either return an empty result (unsuccessful) or create a record and return it.
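A sketch of the calling side, assuming a hypothetical stored procedure usp_TryAcquireLock that inserts a lock record when none exists and returns it, or returns an empty result set otherwise:

using System.Data;
using System.Data.SqlClient;

public static class SprocLock
{
    // Returns true if the (hypothetical) stored procedure handed back a
    // lock record; an empty result set means another process holds the lock.
    public static bool TryAcquire(string connectionString, string lockName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("usp_TryAcquireLock", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@lockName", lockName);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                return reader.Read(); // a row back means we own the lock
            }
        }
    }
}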