Tokenization in Python 3.x


I have the following code in Python 2.x:

# Python 2 imports implied by the snippet below
from StringIO import StringIO
from tokenize import tokenize

class _CHAIN(object):

    def __init__(self, execution_context=None):
        self.execution_context = execution_context

    def eat(self, toktype, tokval, rowcol, line, logical_line):
        # some code and error checking
        pass


operations = _CHAIN(execution_context)

tokenize(StringIO(somevalue).readline, operations.eat)

Now the problem is that in Python 3.x the second argument does not exist. I need to call the function operations.eat() before tokenizing. How can we perform the above task in Python 3.x? One idea is to directly call the function tokenize.eat() before the 'tokenize' statement (the last line of the code), but I am not sure about the arguments to be passed. I'm sure there must be a better way to do it.


Comments (3)

临风闻羌笛 2024-10-23 14:21:05

You're using a slightly odd legacy API where you pass the function a readline callable along with a callable that can accept the tokens. The new way is conceptually simpler, and works in both Python 2 and 3:

from tokenize import generate_tokens
for token in generate_tokens(StringIO(somevalue).readline):
    eat(token)

This is technically undocumented for Python 3, but unlikely to be taken away. The official tokenize function in Python 3 expects bytes, rather than strings. There's a request for an official API to tokenize strings, but it seems to have stalled.
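For illustration, here is a minimal sketch of that bytes-based API (the sample source string and variable names are my own assumptions, not part of the question): tokenize.tokenize() takes a readline that returns bytes, and the first token it yields is the detected ENCODING token.

import io
import tokenize

source = b"x = 1 + 2\n"  # assumed example input
for tok in tokenize.tokenize(io.BytesIO(source).readline):
    print(tok.type, tok.string, tok.start)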

南…巷孤猫 2024-10-23 14:21:05

According to http://docs.python.org/py3k/library/tokenize.html, you should now use tokenize.tokenize(readline):

import tokenize
import io

class _CHAIN(object):

    def __init__(self, execution_context=None):
        self.execution_context = execution_context

    def eat(self, toktype, tokval, rowcol, line, logical_line):
        # some code and error checking
        print(toktype, tokval, rowcol, line, logical_line)


operations = _CHAIN(None)

readline = io.StringIO('aaaa').readline

# Python 2 way:
# tokenize.tokenize(readline, operations.eat)

# Python 3 way: generate_tokens() yields 5-tuples of
# (type, string, start, end, line), unpacked here into eat()
for token in tokenize.generate_tokens(readline):
    operations.eat(token[0], token[1], token[2], token[3], token[4])
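As a small follow-up (not part of the original answer): in Python 3, generate_tokens() yields tokenize.TokenInfo named tuples, so the fields can also be read by name instead of by index.

import io
import tokenize

# TokenInfo fields: type, string, start, end, line
for tok in tokenize.generate_tokens(io.StringIO('aaaa').readline):
    print(tok.type, tok.string, tok.start, tok.end, tok.line)
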
寒冷纷飞旳雪 2024-10-23 14:21:05

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string
import pymorphy2
import re
import nltk

nltk.download('punkt')
nltk.download('stopwords')  # needed for stopwords.words('russian') below

# Keep only Cyrillic letters and spaces
reg = re.compile('[^а-яА-Я ]')
morph = pymorphy2.MorphAnalyzer()
stop_words = stopwords.words('russian')

def sentence(words):
    words = reg.sub('', words)
    words = word_tokenize(words, language='russian')
    tokens = [i for i in words if (i not in string.punctuation)]
    tokens = [i for i in tokens if (i not in stop_words)]
    # Lemmatize each token with pymorphy2, then drop stop words again
    tokens = [morph.parse(word)[0].normal_form for word in tokens]
    tokens = [i for i in tokens if (i not in stop_words)]

    return tokens

# df is assumed to be a pandas DataFrame with a 'text' column
df['text'] = df['text'].apply(str)
df['text'] = df['text'].apply(lambda x: sentence(x))
df['text'] = df['text'].apply(lambda x: " ".join(x))
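As a quick usage note (my own addition): the last three lines assume a pandas DataFrame named df with a 'text' column, but the sentence() helper can also be called on a plain string; the sample sentence below is purely illustrative.

tokens = sentence('Привет, мир! Это тестовое предложение.')  # hypothetical input
print(tokens)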