Why does my Elasticsearch fuzzy search find nothing? I'm using ngram

Posted on 2022-09-04 01:42:10 · 5,474 characters · 15 views · 0 comments

I want to implement fuzzy search, so I created the following index:

PUT myidx1
{
  "_all": {
    "enabled": false
  },
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mapping": {
    "mytype": {
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}

Let me test the analyzer:

POST myidx1/_analyze
{
    "analyzer": "mylike",
    "text": "文档3-aaa111"
}

The result is as follows:

{
   "tokens": [
      {
         "token": "文",
         "start_offset": 0,
         "end_offset": 1,
         "type": "word",
         "position": 0
      },
      {
         "token": "文档",
         "start_offset": 0,
         "end_offset": 2,
         "type": "word",
         "position": 1
      },
      {
         "token": "文档3",
         "start_offset": 0,
         "end_offset": 3,
         "type": "word",
         "position": 2
      },
      {
         "token": "档",
         "start_offset": 1,
         "end_offset": 2,
         "type": "word",
         "position": 3
      },
      {
         "token": "档3",
         "start_offset": 1,
         "end_offset": 3,
         "type": "word",
         "position": 4
      },
      {
         "token": "3",
         "start_offset": 2,
         "end_offset": 3,
         "type": "word",
         "position": 5
      },
      {
         "token": "a",
         "start_offset": 4,
         "end_offset": 5,
         "type": "word",
         "position": 6
      },
      {
         "token": "aa",
         "start_offset": 4,
         "end_offset": 6,
         "type": "word",
         "position": 7
      },
      {
         "token": "aaa",
         "start_offset": 4,
         "end_offset": 7,
         "type": "word",
         "position": 8
      },
      {
         "token": "aaa1",
         "start_offset": 4,
         "end_offset": 8,
         "type": "word",
         "position": 9
      },
      {
         "token": "aaa11",
         "start_offset": 4,
         "end_offset": 9,
         "type": "word",
         "position": 10
      },
      {
         "token": "aaa111",
         "start_offset": 4,
         "end_offset": 10,
         "type": "word",
         "position": 11
      },
      {
         "token": "a",
         "start_offset": 5,
         "end_offset": 6,
         "type": "word",
         "position": 12
      },
      {
         "token": "aa",
         "start_offset": 5,
         "end_offset": 7,
         "type": "word",
         "position": 13
      },
      {
         "token": "aa1",
         "start_offset": 5,
         "end_offset": 8,
         "type": "word",
         "position": 14
      },
      {
         "token": "aa11",
         "start_offset": 5,
         "end_offset": 9,
         "type": "word",
         "position": 15
      },
      {
         "token": "aa111",
         "start_offset": 5,
         "end_offset": 10,
         "type": "word",
         "position": 16
      },
      {
         "token": "a",
         "start_offset": 6,
         "end_offset": 7,
         "type": "word",
         "position": 17
      },
      {
         "token": "a1",
         "start_offset": 6,
         "end_offset": 8,
         "type": "word",
         "position": 18
      },
      {
         "token": "a11",
         "start_offset": 6,
         "end_offset": 9,
         "type": "word",
         "position": 19
      },
      {
         "token": "a111",
         "start_offset": 6,
         "end_offset": 10,
         "type": "word",
         "position": 20
      },
      {
         "token": "1",
         "start_offset": 7,
         "end_offset": 8,
         "type": "word",
         "position": 21
      },
      {
         "token": "11",
         "start_offset": 7,
         "end_offset": 9,
         "type": "word",
         "position": 22
      },
      {
         "token": "111",
         "start_offset": 7,
         "end_offset": 10,
         "type": "word",
         "position": 23
      },
      {
         "token": "1",
         "start_offset": 8,
         "end_offset": 9,
         "type": "word",
         "position": 24
      },
      {
         "token": "11",
         "start_offset": 8,
         "end_offset": 10,
         "type": "word",
         "position": 25
      },
      {
         "token": "1",
         "start_offset": 9,
         "end_offset": 10,
         "type": "word",
         "position": 26
      }
   ]
}

Doesn't this result prove that I should be able to search by "a", and by "1" as well?
So let's test it. Insert some data:

POST myidx1/mytype/_bulk
{ "index": { "_id": 4 }}
{ "name": "文档3-aaa111" }
{ "index": { "_id": 5 }}
{ "name": "yyy111"}
{ "index": { "_id": 6 }}
{ "name": "yyy111"}

But then the search comes up empty:

GET myidx1/mytype/_search
{
    "query": {
        "match": {
            "name": "1"
        }
    }
}

The result:

{
   "took": 10,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 0,
      "max_score": null,
      "hits": []
   }
}

Why? Why is this happening?! I'm about to lose my mind~

Comments (1)

苏别ゝ 2022-09-11 01:42:10

You're missing an "s" when you set up the mapping, haha — it should be "mappings", not "mapping". Older Elasticsearch versions ignore unknown keys in the create-index body instead of rejecting them, so the index was created without your custom mapping. The name field then got a dynamic mapping with the standard analyzer, which indexes "yyy111" as a single token, so a match query for "1" matches nothing.
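For reference, here's a corrected sketch of the create-index request, keeping the ES 2.x-era syntax from the question (string type, nGram tokenizer) and also moving _all under the type mapping, where it belongs:

PUT myidx1
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram": {
          "type": "nGram",
          "min_gram": "1",
          "max_gram": "20",
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "mylike": {
          "tokenizer": "my_ngram",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "mytype": {
      "_all": { "enabled": false },
      "dynamic": false,
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "mylike"
        }
      }
    }
  }
}

With "mappings" spelled correctly, the match query for "1" should return all three documents.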

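And a quick way to confirm whether a mapping actually took effect: if the custom mapping had been applied, the first request would show analyzer "mylike" on the name field, and the field-based _analyze would return the n-grams instead of a single "yyy111" token:

GET myidx1/_mapping

GET myidx1/_analyze
{
    "field": "name",
    "text": "yyy111"
}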