- Today a product manager brought me a question: why does searching for be in our product return no results?
- Our videos do contain matching text, e.g. "i want to be xxx"
Stop Words
- The cause is that be is an English stop word: the IK analyzer drops it during analysis, so it never enters the index. All I need to do is remove it from the stop word list.
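- You can confirm the diagnosis with the _analyze API: run the sentence through the analyzer, and be (like the other entries in the stop word list shown below) never appears among the tokens
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "i want to be xxx"
}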
- Go to the Elasticsearch root directory (this depends on how you installed it; I am running it in Docker)
cd /usr/share/elasticsearch
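- Since mine runs in Docker, the first step is getting a shell inside the container (the container name elasticsearch here is an assumption; adjust to yours):
# docker exec -it elasticsearch bash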
- Go to the ik plugin configuration directory (look under the es config directory; in older versions it may live under the plugins directory)
cd config/analysis-ik
- View the English stop word file stopword.dic
# cat stopword.dic
a
an
and
are
as
at
be
but
by
for
if
in
into
is
it
no
not
of
on
or
such
that
the
their
then
there
these
they
this
to
was
will
with
- Delete the stop word we want to be searchable, be, from the file; one way to do it is shown below
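- Any text editor works; non-interactively, a sed one-liner does it (a sketch, assuming GNU sed is available in the container):
# sed -i '/^be$/d' stopword.dic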
- Restart Elasticsearch
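- With Docker that is just a container restart (again assuming the container name elasticsearch):
# docker restart elasticsearch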
- Then reindex the documents; after that they can be found through the former stop word be (see the sketch below)
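- Removing a stop word only changes how new text is analyzed, so existing documents must be reindexed. A minimal sketch with the built-in _reindex API; the index names videos and videos_v2 are assumptions, and videos_v2 should be created with the desired mappings first:
POST /_reindex
{
  "source": { "index": "videos" },
  "dest": { "index": "videos_v2" }
}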
Custom Dictionary
- Check the current tokenization result
POST /_analyze
{
"analyzer": "ik_max_word",
"text": "q宠大乱斗"
}
--------------
{
"tokens": [{
"token": "q",
"start_offset": 0,
"end_offset": 1,
"type": "ENGLISH",
"position": 0
},
{
"token": "宠大",
"start_offset": 1,
"end_offset": 3,
"type": "CN_WORD",
"position": 1
},
{
"token": "大乱斗",
"start_offset": 2,
"end_offset": 5,
"type": "CN_WORD",
"position": 2
},
{
"token": "大乱",
"start_offset": 2,
"end_offset": 4,
"type": "CN_WORD",
"position": 3
},
{
"token": "斗",
"start_offset": 4,
"end_offset": 5,
"type": "CN_CHAR",
"position": 4
}
]
}
- Look at the configuration file IKAnalyzer.cfg.xml under the ik directory
# cat IKAnalyzer.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>IK Analyzer extension configuration</comment>
<!-- users can configure their own extension dictionary here -->
<entry key="ext_dict"></entry>
<!-- users can configure their own extension stop word dictionary here -->
<entry key="ext_stopwords"></entry>
<!-- users can configure a remote extension dictionary here -->
<!-- <entry key="remote_ext_dict">words_location</entry> -->
<!-- users can configure a remote extension stop word dictionary here -->
<!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
- We can register our own dictionary files via ext_dict; multiple files are separated with ;
# cat IKAnalyzer.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>IK Analyzer extension configuration</comment>
<!-- users can configure their own extension dictionary here -->
<entry key="ext_dict">custom_words.dic</entry>
<!-- users can configure their own extension stop word dictionary here -->
<entry key="ext_stopwords"></entry>
<!-- users can configure a remote extension dictionary here -->
<!-- <entry key="remote_ext_dict">words_location</entry> -->
<!-- users can configure a remote extension stop word dictionary here -->
<!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
- Create the custom dictionary file (one entry per line, saved as UTF-8)
# cat custom_words.dic
q宠
- Restart Elasticsearch
- Check the tokenization again
POST /_analyze
{
"analyzer": "ik_max_word",
"text": "q宠大乱斗"
}
--------------
{
"tokens": [{
"token": "q宠",
"start_offset": 0,
"end_offset": 2,
"type": "CN_WORD",
"position": 0
},
{
"token": "q",
"start_offset": 0,
"end_offset": 1,
"type": "ENGLISH",
"position": 1
},
{
"token": "宠大",
"start_offset": 1,
"end_offset": 3,
"type": "CN_WORD",
"position": 2
},
{
"token": "大乱斗",
"start_offset": 2,
"end_offset": 5,
"type": "CN_WORD",
"position": 3
},
{
"token": "大乱",
"start_offset": 2,
"end_offset": 4,
"type": "CN_WORD",
"position": 4
},
{
"token": "斗",
"start_offset": 4,
"end_offset": 5,
"type": "CN_CHAR",
"position": 5
}
]
}
- At this point the output contains our custom term q宠
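- To verify end to end, search for the term through a field analyzed with ik_max_word; a sketch, where the index videos and the field title are assumptions:
GET /videos/_search
{
  "query": {
    "match": { "title": "q宠" }
  }
}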