How does Oracle Coherence fail with a retrieved key object?
We are experiencing an odd problem: we get the keySet of an Oracle Coherence cache, but cannot straightforwardly get the values back from the cache, even with no update activity on it.
The following code fails consistently (i.e. it outputs ">>>>NULL" because the object is not retrieved). The question is: why?
NamedCache nc = CacheFactory.getCache(cacheName);
Set<Object> keys = (Set<Object>) nc.keySet();
for (Object key : keys) {
    Object o = nc.get(key);
    if (o == null) {
        System.out.println(">>>>NULL:" + key);
    }
}
The cache is a partitioned named cache with multiple indices.
The key is an object (not shown) with one instance variable, a HashMap.
The key object also has equals() and hashCode() methods as follows:
@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + ((values == null) ? 0 : values.hashCode());
    return result;
}

@Override
public boolean equals(Object obj) {
    System.out.println("EQUALS");
    if (this == obj)
        return true;
    if (obj == null)
        return false;
    if (getClass() != obj.getClass())
        return false;
    AbstractCacheKey other = (AbstractCacheKey) obj;
    if (values == null) {
        if (other.values != null)
            return false;
    } else if (!values.equals(other.values))
        return false;
    return true;
}
I believe Coherence uses the hash of the serialized key object in this configuration, which would render these two methods irrelevant, except that I don't know whether this is true for both the front cache (local JVM, with local storage turned off) and the back cache (the storage-node JVMs).
Some of our code partially works around this problem by rebuilding the key, inserting the values in a standard order. This usually works. I don't see why it is necessary, since our hashCode() method and Java's hashCode() for HashMap are, AFAIK, insensitive to the iteration order of the map. Why it usually, but not always, works is also a mystery.
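The "rebuild the key in a standard order" workaround can be sketched as follows. This is a minimal sketch, not the questioner's actual code: `KeyRebuild` and `normalize` are hypothetical names, and this variant copies into a `TreeMap` to get a deterministic (sorted) entry order, rather than re-inserting into a `HashMap` in a fixed order.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the workaround: copy the key's backing HashMap
// into a TreeMap so iteration (and hence serialization) order is the
// sorted key order, independent of insertion history.
public class KeyRebuild {
    public static Map<String, Object> normalize(Map<String, Object> values) {
        // TreeMap iterates in sorted key order, deterministically
        return new TreeMap<>(values);
    }

    public static void main(String[] args) {
        Map<String, Object> raw = new HashMap<>();
        raw.put("b", 2);
        raw.put("a", 1);
        System.out.println(normalize(raw)); // {a=1, b=2}
    }
}
```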
The answer (thanks, Dimitri) is that a HashMap doesn't guarantee its serialization ordering, so the round trip serialized-hash -> deserialize -> object-hash -> serialize -> serialized-hash may result in the second serialized hash being a different byte stream from the first.
Java makes no guarantees about ordering in a hash map, and serialization is dependent on that ordering. Serialization can differ from one JVM to another, and even within the same JVM. Since the internal implementation of a HashMap is a typical in-memory hash table, with N buckets, each holding (typically via a linked list) the set of entries whose hash corresponds to that bucket, the order in which entries are put into the map determines (in an unspecified way) the order in which key-set iteration returns them. A TreeMap, by comparison, produces a consistent ordering and thus, presumably, consistent serialization.
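The ordering effect can be demonstrated directly. The sketch below (class and method names are mine) builds two HashMaps that are equals() and share a hashCode(), yet serialize to different byte streams. It uses the well-known collision "Aa"/"BB" (both have String hashCode 2112), so the two keys land in the same bucket and are kept in insertion order; with non-colliding keys the streams would often happen to match, which is one reason the problem appears only intermittently.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Two equal HashMaps whose serialized forms differ because colliding
// keys were inserted in different orders.
public class HashMapSerializationOrder {
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // "Aa" and "BB" collide: both hash to 2112, same bucket.
        Map<String, Integer> m1 = new HashMap<>();
        m1.put("Aa", 1);
        m1.put("BB", 2);

        Map<String, Integer> m2 = new HashMap<>();
        m2.put("BB", 2);
        m2.put("Aa", 1);

        // equals() and hashCode() agree...
        System.out.println(m1.equals(m2));                  // true
        System.out.println(m1.hashCode() == m2.hashCode()); // true
        // ...but the serialized byte streams differ,
        // because entries are written in iteration order.
        System.out.println(Arrays.equals(serialize(m1), serialize(m2))); // false
    }
}
```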
Coherence partitioned caches store keys and values in serialized form, so they compute the hash function on the serialized version of the key and do equality checks on the serialized keys. Even though two serialized streams may be equivalent for the purposes of reconstructing the object, they are not guaranteed to be byte-identical, which is what the hashing and equality-checking operations require.
To complicate matters further, in a near cache the object is kept in deserialized form, and hence its equals() and hashCode() methods are used instead.
Finally, Coherence recommends the use of its proprietary POF serialization, which usually reduces the serialized size and gives the object being serialized direct control over its own serialization.