Converting sentences in a file into word tokens in a list
I'm using python to convert the words in sentences in a text file to individual tokens in a list for the purpose of counting up word frequencies. I'm having trouble converting the different sentences into a single list. Here's what I do:
f = open('music.txt', 'r')
sent = [word.lower().split() for word in f]
That gives me the following list:
[['party', 'rock', 'is', 'in', 'the', 'house', 'tonight'],
['everybody', 'just', 'have', 'a', 'good', 'time'],...]
Since the sentences in the file were in separate lines, it returns this list of lists and defaultdict can't identify the individual tokens to count up.
I tried the following list comprehension to isolate the tokens in the different lists and return them in a single list, but it returned an empty list instead:
sent2 = [[w for w in word] for word in sent]
Is there a way to do this using list comprehensions? Or perhaps another easier way?
3 Answers
Just use a nested loop inside the list comprehension:
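The answer's code snippet did not survive on the page; a minimal sketch of the nested-loop comprehension, using a sample list shaped like the one in the question, might be:

```python
# Sample data shaped like the question's list of lists
sent = [['party', 'rock', 'is', 'in', 'the', 'house', 'tonight'],
        ['everybody', 'just', 'have', 'a', 'good', 'time']]

# Nested loop inside a single comprehension: the outer loop walks
# the sentences, the inner loop walks the words in each sentence
tokens = [word for sentence in sent for word in sentence]
print(tokens[:3])  # ['party', 'rock', 'is']
```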
There are some alternatives to this approach, for example using itertools.chain.from_iterable(), but I think the nested loop is much easier in this case.
Just read the entire file into memory, as a single string, and apply split once to that string. There is no need to read the file line by line in such a case.
Therefore your core can be as short as:
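The answer's snippet is missing from the page; a minimal sketch of the idea (writing a small sample file first so it runs on its own) could be:

```python
# Write a tiny sample file so the sketch is self-contained
with open('music.txt', 'w') as f:
    f.write('Party rock is in the house tonight\n'
            'Everybody just have a good time\n')

# The core: read the whole file as one string, split it once
words = open('music.txt').read().lower().split()
print(words[:4])  # ['party', 'rock', 'is', 'in']
```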
(A few niceties, like closing the file and checking for errors, make the code a little larger, of course.)
Since you want to be counting word frequencies, you can use the collections.Counter class for that:
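A short sketch of counting with collections.Counter, using a hypothetical token list so it stands on its own; in practice the list would come from splitting the file contents:

```python
from collections import Counter

# Hypothetical token list standing in for the split file contents
words = ['party', 'rock', 'is', 'in', 'the', 'house', 'tonight',
         'everybody', 'just', 'have', 'a', 'good', 'time']

freq = Counter(words)          # maps each token to its count
print(freq['party'])           # 1
print(freq.most_common(2))     # the two most frequent (word, count) pairs
```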
List comprehensions can do the job but will accumulate everything in memory. For large inputs this could be an unacceptable cost. The solution below will not accumulate large amounts of data in memory, even for large files. The final product is a dictionary of the form
{token: occurrences}
.
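The answer's code did not survive on the page; a sketch of a line-by-line approach that keeps only the counts in memory (a sample file is written first so the snippet runs on its own) might be:

```python
from collections import Counter

# Write a small sample file so the sketch is self-contained
with open('music.txt', 'w') as f:
    f.write('Party rock is in the house tonight\n'
            'Everybody just have a good time\n')

# Stream the file line by line: only the {token: occurrences}
# counts stay in memory, never the full token list
occurrences = Counter()
with open('music.txt') as f:
    for line in f:
        occurrences.update(line.lower().split())

print(occurrences['good'])  # 1
```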