Keywords:
Keywords={u'secondary': [u'sales growth', u'next generation store', u'Steps Down', u' Profit warning', u'Store Of The Future', u'groceries']}
Paragraph:
paragraph="""HOUSTON -- Target has unveiled its first "next generation" store in the Houston area, part of a multibillion-dollar effort to reimagine more than 1,000 stores nationwide to compete with e-commerce giants.The 124,000-square-foot store, which opened earlier this week at Aliana market center in Richmond, Texas, has two distinct entrances and aims to appeal to consumers on both ends of the shopping spectrum.Busy families seeking convenience can enter the "ease" side of the store, which offers a supermarket-style experience. Customers can pick up online orders, both in store and curbside, and buy grab-and-go items like groceries, wine, last-minute gifts, cleaning supplies and prepared meals."""
Is there a way to match the keywords in the paragraph (without using regular expressions)?
Desired output:
Matched keywords: next generation store, groceries
Answer:
First, if your Keywords dict has only one key, you don't need a dict at all; use a set() instead.
Keywords = {u'secondary': [u'sales growth', u'next generation store', u'Steps Down', u' Profit warning', u'Store Of The Future', u'groceries']}

keywords = {u'sales growth', u'next generation store', u'Steps Down', u' Profit warning', u'Store Of The Future', u'groceries'}

paragraph = """HOUSTON -- Target has unveiled its first "next generation" store in the Houston area, part of a multibillion-dollar effort to reimagine more than 1,000 stores nationwide to compete with e-commerce giants.The 124,000-square-foot store, which opened earlier this week at Aliana market center in Richmond, Texas, has two distinct entrances and aims to appeal to consumers on both ends of the shopping spectrum.Busy families seeking convenience can enter the "ease" side of the store, which offers a supermarket-style experience. Customers can pick up online orders, both in store and curbside, and buy grab-and-go items like groceries, wine, last-minute gifts, cleaning supplies and prepared meals."""
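The set can also be built directly from the original dict rather than retyped by hand (assuming its single key is u'secondary', as above):

keywords = set(Keywords[u'secondary'])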
Then, with a slight adjustment, follow the approach from "Find multi-word terms in a tokenized text in Python":
from nltk.tokenize import MWETokenizer
from nltk import word_tokenize
import string

# Build a multi-word-expression tokenizer from the lowercased keywords.
mwe = MWETokenizer([k.lower().split() for k in keywords], separator='_')

# Strip the punctuation from the paragraph.
puncts = list(string.punctuation)
cleaned_paragraph = ''.join([ch if ch not in puncts else '' for ch in paragraph.lower()])

# Merge multi-word keywords into single tokens, then keep only the tokens
# that correspond to a keyword.
tokenized_paragraph = [token for token in mwe.tokenize(word_tokenize(cleaned_paragraph))
                       if token.replace('_', ' ') in keywords]

print(tokenized_paragraph)
[out]:
>>> print(tokenized_paragraph)
['next_generation_store', 'groceries']
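One caveat: the paragraph is lowercased before tokenizing, while the keyword set keeps its original casing, so mixed-case entries such as u'Steps Down' (or u' Profit warning', with its leading space) can never pass the final membership test. A minimal sketch of a fix, normalizing the keywords the same way as the tokens:

# Lowercase and strip the keywords so they compare equal to the tokens.
normalized_keywords = {k.lower().strip() for k in keywords}
tokenized_paragraph = [token for token in mwe.tokenize(word_tokenize(cleaned_paragraph))
                       if token.replace('_', ' ') in normalized_keywords]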