How to match keywords in a paragraph with Python (nltk)

Keywords:

Keywords={u'secondary': [u'sales growth', u'next generation store', u'Steps Down', u' Profit warning', u'Store Of The Future', u'groceries']}

Paragraph:

paragraph="""HOUSTON -- Target has unveiled its first "next generation" store in the Houston area, part of a multibillion-dollar effort to reimagine more than 1,000 stores nationwide to compete with e-commerce giants.The 124,000-square-foot store, which opened earlier this week at Aliana market center in Richmond, Texas, has two distinct entrances and aims to appeal to consumers on both ends of the shopping spectrum.Busy families seeking convenience can enter the "ease" side of the store, which offers a supermarket-style experience. Customers can pick up online orders, both in store and curbside, and buy grab-and-go items like groceries, wine, last-minute gifts, cleaning supplies and prepared meals."""

Is there a way to match the keywords in the paragraph (without using regular expressions)?

Desired output:

Matched keywords: next generation store, groceries


Answer:

First, if your keywords have only a single key, you don't actually need a dict. You can use a set() instead.

Keywords = {u'secondary': [u'sales growth', u'next generation store',
                           u'Steps Down', u' Profit warning',
                           u'Store Of The Future', u'groceries']}

keywords = {u'sales growth', u'next generation store',
            u'Steps Down', u' Profit warning',
            u'Store Of The Future', u'groceries'}

paragraph = """HOUSTON -- Target has unveiled its first "next generation" store in the Houston area, part of a multibillion-dollar effort to reimagine more than 1,000 stores nationwide to compete with e-commerce giants.The 124,000-square-foot store, which opened earlier this week at Aliana market center in Richmond, Texas, has two distinct entrances and aims to appeal to consumers on both ends of the shopping spectrum.Busy families seeking convenience can enter the "ease" side of the store, which offers a supermarket-style experience. Customers can pick up online orders, both in store and curbside, and buy grab-and-go items like groceries, wine, last-minute gifts, cleaning supplies and prepared meals."""

Then, with a slight tweak, following the approach from "Find multi-word terms in a tokenized text in Python":

from nltk.tokenize import MWETokenizer
from nltk import sent_tokenize, word_tokenize
import string

# Build a multi-word-expression tokenizer from the lowercased keywords,
# joining multi-word matches with underscores.
mwe = MWETokenizer([k.lower().split() for k in keywords], separator='_')

# Strip punctuation from the paragraph.
puncts = set(string.punctuation)
cleaned_paragraph = ''.join(ch for ch in paragraph.lower() if ch not in puncts)

# Keep only tokens that correspond to one of the keywords.
# Compare against lowercased, stripped keywords so mixed case
# (e.g. 'Steps Down') and stray spaces (e.g. ' Profit warning')
# in the keyword set don't cause misses.
keyword_set = {k.lower().strip() for k in keywords}
tokenized_paragraph = [token for token in mwe.tokenize(word_tokenize(cleaned_paragraph))
                       if token.replace('_', ' ') in keyword_set]
print(tokenized_paragraph)

[out]:

>>> print(tokenized_paragraph)
['next_generation_store', 'groceries']
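
If you want the matches printed exactly as in the desired output (the original keyword phrases rather than the underscore-joined tokens), a small post-processing step is enough. This is just a minimal sketch that reuses the tokenized_paragraph list from above:

# Map the underscore-joined tokens back to plain keyword phrases
# and print them in the format requested in the question.
matched_keywords = [token.replace('_', ' ') for token in tokenized_paragraph]
print('Matched keywords: ' + ', '.join(matched_keywords))
# Matched keywords: next generation store, groceries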
