Remove duplicate rows from a DataTable with a custom IEqualityComparer<DataRow>
How do I implement IEqualityComparer<DataRow> to remove duplicate rows from a DataTable with the following structure:
ID primary key, col_1, col_2, col_3, col_4
The default comparer doesn't work because each row has its own unique primary key. How do I implement IEqualityComparer<DataRow> so that it skips the primary key and compares only the remaining data?
I have something like this:
public class DataRowComparer : IEqualityComparer<DataRow>
{
    public bool Equals(DataRow x, DataRow y)
    {
        return
            x.ItemArray.Except(new object[] { x[x.Table.PrimaryKey[0].ColumnName] }) ==
            y.ItemArray.Except(new object[] { y[y.Table.PrimaryKey[0].ColumnName] });
    }

    public int GetHashCode(DataRow obj)
    {
        return obj.ToString().GetHashCode();
    }
}
and
public static DataTable RemoveDuplicates(this DataTable table)
{
    return
        (table.Rows.Count > 0) ?
        table.AsEnumerable().Distinct(new DataRowComparer()).CopyToDataTable() :
        table;
}
but it only calls GetHashCode() and never calls Equals().
Comments (1)
That is the way Distinct works. Internally it uses the GetHashCode method. You can write GetHashCode to do what you need, something like the sketch below. Since you know your data better, you can probably come up with a better way to generate the hash.
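A minimal sketch of such a comparer, assuming you want to hash and compare every column that is not part of the primary key (the DataValues helper and the hash constants are illustrative choices, not a fixed recipe):

using System.Collections.Generic;
using System.Data;
using System.Linq;

public class DataRowComparer : IEqualityComparer<DataRow>
{
    // Collects the values of every column that is not part of the primary key.
    private static IEnumerable<object> DataValues(DataRow row)
    {
        DataColumn[] keyColumns = row.Table.PrimaryKey;
        return row.Table.Columns
            .Cast<DataColumn>()
            .Where(column => !keyColumns.Contains(column))
            .Select(column => row[column]);
    }

    public bool Equals(DataRow x, DataRow y)
    {
        // SequenceEqual compares the values element by element;
        // '==' on the results of Except only compares object references.
        return DataValues(x).SequenceEqual(DataValues(y));
    }

    public int GetHashCode(DataRow obj)
    {
        // Combine the non-key values so that rows considered equal
        // also produce equal hash codes (required for Distinct to group them).
        unchecked
        {
            int hash = 17;
            foreach (object value in DataValues(obj))
                hash = hash * 31 + (value?.GetHashCode() ?? 0);
            return hash;
        }
    }
}

With a comparer along these lines, the RemoveDuplicates extension method from the question can stay unchanged: Equals now compares the column values with SequenceEqual instead of comparing the two Except enumerables by reference, and GetHashCode no longer hashes obj.ToString(), which for a DataRow is just the type name and therefore identical for every row.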