Why doesn't my Elasticsearch fuzzy search return anything? I'm already using ngram
I want fuzzy matching, so I created the following index:
PUT myidx1
{
  "_all": {
    "enabled": false
  },
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": [ "letter", "digit" ]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mapping": {
    "mytype": {
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}
Testing the analyzer:
POST myidx1/_analyze
{
  "analyzer": "mylike",
  "text": "文档3-aaa111"
}
The result is:
{
  "tokens": [
    { "token": "文", "start_offset": 0, "end_offset": 1, "type": "word", "position": 0 },
    { "token": "文档", "start_offset": 0, "end_offset": 2, "type": "word", "position": 1 },
    { "token": "文档3", "start_offset": 0, "end_offset": 3, "type": "word", "position": 2 },
    { "token": "档", "start_offset": 1, "end_offset": 2, "type": "word", "position": 3 },
    { "token": "档3", "start_offset": 1, "end_offset": 3, "type": "word", "position": 4 },
    { "token": "3", "start_offset": 2, "end_offset": 3, "type": "word", "position": 5 },
    { "token": "a", "start_offset": 4, "end_offset": 5, "type": "word", "position": 6 },
    { "token": "aa", "start_offset": 4, "end_offset": 6, "type": "word", "position": 7 },
    { "token": "aaa", "start_offset": 4, "end_offset": 7, "type": "word", "position": 8 },
    { "token": "aaa1", "start_offset": 4, "end_offset": 8, "type": "word", "position": 9 },
    { "token": "aaa11", "start_offset": 4, "end_offset": 9, "type": "word", "position": 10 },
    { "token": "aaa111", "start_offset": 4, "end_offset": 10, "type": "word", "position": 11 },
    { "token": "a", "start_offset": 5, "end_offset": 6, "type": "word", "position": 12 },
    { "token": "aa", "start_offset": 5, "end_offset": 7, "type": "word", "position": 13 },
    { "token": "aa1", "start_offset": 5, "end_offset": 8, "type": "word", "position": 14 },
    { "token": "aa11", "start_offset": 5, "end_offset": 9, "type": "word", "position": 15 },
    { "token": "aa111", "start_offset": 5, "end_offset": 10, "type": "word", "position": 16 },
    { "token": "a", "start_offset": 6, "end_offset": 7, "type": "word", "position": 17 },
    { "token": "a1", "start_offset": 6, "end_offset": 8, "type": "word", "position": 18 },
    { "token": "a11", "start_offset": 6, "end_offset": 9, "type": "word", "position": 19 },
    { "token": "a111", "start_offset": 6, "end_offset": 10, "type": "word", "position": 20 },
    { "token": "1", "start_offset": 7, "end_offset": 8, "type": "word", "position": 21 },
    { "token": "11", "start_offset": 7, "end_offset": 9, "type": "word", "position": 22 },
    { "token": "111", "start_offset": 7, "end_offset": 10, "type": "word", "position": 23 },
    { "token": "1", "start_offset": 8, "end_offset": 9, "type": "word", "position": 24 },
    { "token": "11", "start_offset": 8, "end_offset": 10, "type": "word", "position": 25 },
    { "token": "1", "start_offset": 9, "end_offset": 10, "type": "word", "position": 26 }
  ]
}
Doesn't this result prove that I should be able to search with "a", and with "1" as well????
Let's test it.
Insert some data:
POST myidx1/mytype/_bulk
{ "index": { "_id": 4 }}
{ "name": "文档3-aaa111" }
{ "index": { "_id": 5 }}
{ "name": "yyy111"}
{ "index": { "_id": 6 }}
{ "name": "yyy111"}
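With the documents indexed, one way to see which terms actually ended up in the index is the term vectors API (a diagnostic sketch, using _id 4 from the bulk request above):

GET myidx1/mytype/4/_termvectors?fields=name

If the custom mylike analyzer were really in effect, the response would list the ngram terms shown by _analyze above; if the field fell back to the standard analyzer, it will show only whole-word terms instead.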
But then the search finds nothing:
GET myidx1/mytype/_search
{
  "query": {
    "match": {
      "name": "1"
    }
  }
}
The result:
{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}
Why? WHY is this happening?!!!! I'm about to lose my mind~
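One way to debug a case like this (assuming the index was created without error) is to ask Elasticsearch what mapping it actually applied:

GET myidx1/_mapping

If the custom analyzer never made it into the mapping, the response will show name as a plain string field with no "analyzer" set, meaning the field was dynamically mapped and the standard analyzer was used at index time.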
Comments (1)
You left out an "s" when defining the mapping, haha — it should be "mappings", not "mapping", so Elasticsearch silently ignored your field definition.
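For reference, a corrected version of the index creation (a sketch: the key is renamed to "mappings", and note that _all also belongs under the type mapping rather than at the top level of the request body):

PUT myidx1
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": [ "letter", "digit" ]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "mytype": {
      "_all": { "enabled": false },
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}

With this in place, the match query for "1" should find the documents, since the ngram terms from the _analyze test are now what actually gets indexed.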