Extracting tf-idf vectors with Lucene


I have indexed a set of documents using Lucene, and I have also stored a term vector (DocumentTermVector) for each document's content. I wrote a program that retrieves the term-frequency vector for each document, but how can I get the tf-idf vector of each document?

Here is my code, which outputs the term frequencies in each document:

    Directory dir = FSDirectory.open(new File(indexDir));
    IndexReader ir = IndexReader.open(dir);
    // assumes an index without deletions; otherwise iterate up to maxDoc()
    // and skip deleted documents
    for (int docNum = 0; docNum < ir.numDocs(); docNum++) {
        System.out.println(ir.document(docNum).getField("filename").stringValue());
        TermFreqVector tfv = ir.getTermFreqVector(docNum, "contents");
        if (tfv == null) {
            // skip documents with no stored term vector for this field
            continue;
        }
        String[] terms = tfv.getTerms();
        int[] freqs = tfv.getTermFrequencies();

        for (int t = 0; t < terms.length; t++) {
            System.out.println(terms[t] + " " + freqs[t]);
        }
    }

Is there any built-in function in Lucene for me to do that?


Nobody helped, so I did it myself:

    Directory dir = FSDirectory.open(new File(indexDir));
    IndexReader ir = IndexReader.open(dir);

    for (int docNum = 0; docNum < ir.numDocs(); docNum++) {
        TermFreqVector tfv = ir.getTermFreqVector(docNum, "title");
        if (tfv == null) {
            // skip documents with no stored term vector for this field
            continue;
        }
        String[] tterms = tfv.getTerms();
        int[] freqs = tfv.getTermFrequencies();

        for (int t = 0; t < tterms.length; t++) {
            // cast to double: numDocs() and docFreq() are both ints, and
            // integer division would truncate the ratio before the log
            double idf = (double) ir.numDocs() / ir.docFreq(new Term("title", tterms[t]));
            System.out.println(tterms[t] + " " + freqs[t] * Math.log(idf));
        }
    }
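
To get a reusable vector per document instead of console output, the weights can be collected into a map. A minimal sketch against the same Lucene 3.x API as above (the helper name tfIdfVector and the field parameter are mine):

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermFreqVector;

    // builds one document's tf-idf vector as a map from term text to weight
    static Map<String, Double> tfIdfVector(IndexReader ir, int docNum, String field)
            throws IOException {
        Map<String, Double> vector = new HashMap<String, Double>();
        TermFreqVector tfv = ir.getTermFreqVector(docNum, field);
        if (tfv == null) {
            return vector; // no stored term vector for this field
        }
        String[] terms = tfv.getTerms();
        int[] freqs = tfv.getTermFrequencies();
        for (int t = 0; t < terms.length; t++) {
            // same formula as above, with the cast guarding against integer division
            double idf = Math.log((double) ir.numDocs() / ir.docFreq(new Term(field, terms[t])));
            vector.put(terms[t], freqs[t] * idf);
        }
        return vector;
    }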

Is there any way to find the ID number of each term?


Again nobody helped, so I did it myself:

    // enumerate all terms of the "title" field; TermEnum returns them in
    // sorted order, so a term's position in this list can serve as its ID
    // (ArrayList so Collections.binarySearch runs in O(log n))
    List<String> list = new ArrayList<String>();
    TermEnum terms = null;
    try {
        terms = ir.terms(new Term("title", ""));
        while (terms.term() != null && "title".equals(terms.term().field())) {
            list.add(terms.term().text());
            if (!terms.next())
                break;
        }
    } finally {
        if (terms != null)
            terms.close();
    }

    for (int docNum = 0; docNum < ir.numDocs(); docNum++) {
        TermFreqVector tfv = ir.getTermFreqVector(docNum, "title");
        if (tfv == null) {
            // skip documents with no stored term vector for this field
            continue;
        }
        String[] tterms = tfv.getTerms();
        int[] freqs = tfv.getTermFrequencies();

        for (int t = 0; t < tterms.length; t++) {
            double idf = (double) ir.numDocs() / ir.docFreq(new Term("title", tterms[t]));
            // binarySearch works because the term list is sorted
            System.out.println(Collections.binarySearch(list, tterms[t]) + " "
                    + tterms[t] + " " + freqs[t] * Math.log(idf));
        }
    }
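
If the IDs are looked up often, the repeated Collections.binarySearch calls can be replaced by a one-time map from term text to a dense integer ID. A sketch against the same Lucene 3.x API (ir as above):

    // TermEnum enumerates terms in sorted order, so the IDs assigned here
    // match the binarySearch positions used above
    Map<String, Integer> termIds = new HashMap<String, Integer>();
    TermEnum te = null;
    try {
        te = ir.terms(new Term("title", ""));
        while (te.term() != null && "title".equals(te.term().field())) {
            termIds.put(te.term().text(), termIds.size());
            if (!te.next())
                break;
        }
    } finally {
        if (te != null)
            te.close();
    }
    // lookup is now O(1): Integer id = termIds.get(someTerm);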



囚我心虐我身 2025-01-10 02:57:36

You probably won't find a ready-made tf-idf vector in Lucene. But as you've already done, you can calculate the IDF by hand. It is probably better to let DefaultSimilarity (or whatever Similarity implementation you are using) calculate it for you.

Regarding term IDs, I think you currently can't get them, at least not until Lucene 4.0; see TermsEnum (http://lucene.apache.org/core/4_1_0/core/org/apache/lucene/index/TermsEnum.html).
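
A minimal sketch of that suggestion (Lucene 3.x; ir is the IndexReader from the question, and term and freq stand for one entry of its term-vector loop):

    // DefaultSimilarity's idf is log(numDocs / (docFreq + 1)) + 1, which is
    // smoothed so a docFreq of 0 cannot cause a division by zero
    DefaultSimilarity sim = new DefaultSimilarity();
    float idf = sim.idf(ir.docFreq(new Term("title", term)), ir.numDocs());
    float tf = sim.tf((float) freq); // sqrt(freq) in DefaultSimilarity
    float weight = tf * idf;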
