Is it possible to develop a powerful web search engine using Erlang, Mnesia and Yaws?

Posted on 2024-07-06 09:36:05


I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to make a powerful and fast web search engine using this software? What would it take to accomplish this, and how do I start?


Comments (4)

垂暮老矣 2024-07-13 09:36:05

Erlang can be used to build an extremely powerful web crawler. Let me take you through my simple crawler.

Step 1. I create a simple parallelism module, which I call mapreduce

-module(mapreduce).
-export([compute/2]).
%%=====================================================================
%% usage example
%% Module = string
%% Function = tokens
%% List_of_arg_lists = [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]
%% Ans = [["file","file"],["muzaaya","joshua"]]
%% Job being done by two processes
%% i.e no. of processes spawned = length(List_of_arg_lists)

compute({Module,Function},List_of_arg_lists)->
    S = self(),
    Ref = erlang:make_ref(),
    PJob = fun(Arg_list) -> erlang:apply(Module,Function,Arg_list) end,
    Spawn_job = fun(Arg_list) -> 
                    spawn(fun() -> execute(S,Ref,PJob,Arg_list) end)
                end,
    lists:foreach(Spawn_job,List_of_arg_lists),
    gather(length(List_of_arg_lists),Ref,[]).
gather(0, _, L) -> L;
gather(N, Ref, L) ->
    receive
        {Ref, {'EXIT', _}} -> gather(N-1, Ref, L);
        {Ref, Result}      -> gather(N-1, Ref, [Result|L])
    end.

execute(Parent, Ref, Fun, Arg) ->
    Parent ! {Ref, (catch Fun(Arg))}.
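
For illustration, calling the module from the shell using the example in the comments above might look like this (note that gather/3 collects results in whatever order the worker processes finish, so the list can come back reordered):

1> mapreduce:compute({string, tokens},
                     [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]).
[["muzaaya","joshua"],["file","file"]]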

Step 2. The HTTP Client

One would normally use either the inets httpc module that comes with Erlang, or ibrowse. However, for memory management and speed (keeping the memory footprint as low as possible), a good Erlang programmer may choose to shell out to curl instead. Calling os:cmd/1 with a curl command line delivers the output straight into the calling Erlang function. Still, it is better to have curl write its output to files, and to have a separate process in our application read and parse those files.

Command: curl "http://www.erlang.org" -o "/downloaded_sites/erlang/file1.html"
In Erlang:
os:cmd("curl \"http://www.erlang.org\" -o \"/downloaded_sites/erlang/file1.html\"").

So you can spawn many processes. Just remember to escape the URL as well as the output file path when you build that command. Meanwhile, a separate process watches the directory of downloaded pages. It reads and parses each page; after parsing it can delete the file, move it somewhere else or, better still, archive it using the zip module.
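
As a rough sketch (not the author's actual code), the two ideas above can be combined: reuse mapreduce:compute/2 to run one curl per URL, writing each page to its own file. The module name, the /downloaded_sites/erlang target directory and the md5-based file naming are assumptions made for illustration:

-module(downloader).
-export([fetch_all/1, download/2]).

%% Sketch only: download a batch of URLs in parallel by shelling out to curl,
%% one OS process per URL.
fetch_all(Urls) ->
    ArgLists = [[Url, target_file(Url)] || Url <- Urls],
    mapreduce:compute({?MODULE, download}, ArgLists).

download(Url, File) ->
    %% ~p quotes both the URL and the output path before they reach the shell
    Cmd = io_lib:format("curl -s ~p -o ~p", [Url, File]),
    os:cmd(lists:flatten(Cmd)),
    {Url, File}.

%% derive a collision-free file name from the URL
target_file(Url) ->
    Hex = lists:flatten([io_lib:format("~2.16.0b", [B]) || <<B>> <= erlang:md5(Url)]),
    filename:join("/downloaded_sites/erlang", Hex ++ ".html").

A real crawler would also want to rate-limit requests per domain and respect robots.txt, but that is beyond this sketch.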

folder_check()->
    spawn(fun() -> check_and_report() end),
    ok.

-define(CHECK_INTERVAL,5).

check_and_report()->
    %% avoid using
    %% filelib:list_dir/1
    %% if files are many, memory !!!
    case os:cmd("ls | wc -l") of
        "0\n" -> ok;
        "0" -> ok;
        _ -> ?MODULE:new_files_found()
    end,
    timer:sleep(timer:seconds(?CHECK_INTERVAL)),
    %% keep checking
    check_and_report().

new_files_found()->
    %% inform our parser to pick files
    %% once it parses a file, it has to 
    %% delete it or save it some
    %% where else
    gen_server:cast(?MODULE,files_detected).
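
The gen_server that receives the files_detected cast is not shown in the answer; below is a minimal sketch of what such a parser process could look like. The module name, the download directory and parse_file/1 are illustrative assumptions:

-module(page_parser).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).

-define(DOWNLOAD_DIR, "/downloaded_sites/erlang").

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) -> {ok, nostate}.

%% when the directory watcher reports new files, parse and then delete
%% (or archive) each one, as described above
handle_cast(files_detected, State) ->
    %% note: the answer above warns about listing huge directories; a full
    %% file:list_dir/1 is used here only to keep the sketch short
    {ok, Files} = file:list_dir(?DOWNLOAD_DIR),
    lists:foreach(
      fun(F) ->
              Path = filename:join(?DOWNLOAD_DIR, F),
              parse_file(Path),
              file:delete(Path)
      end, Files),
    {noreply, State};
handle_cast(_Other, State) ->
    {noreply, State}.

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.

parse_file(_Path) ->
    %% placeholder: hand the file to the HTML parser from step 3
    ok.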

Step 3. The HTML parser.
It is best to use mochiweb's HTML parser together with XPath. That will let you pick out the HTML tags you care about and extract their contents. In the examples below I only look at the keywords, description and title entries in the markup.
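
parse_url/1 itself is not shown in the answer; a rough sketch of what it could do with mochiweb_html, whose parse tree is made of {Tag, Attrs, Children} tuples with binary tag and attribute names, might look like the following. The module and function names are illustrative, and tag/attribute names are assumed to come back lowercase:

-module(spider_parse).
-export([parse_file/1]).

%% walk the mochiweb parse tree, collecting <meta name=... content=...>
%% pairs and the first <title> text encountered
parse_file(Path) ->
    {ok, Html} = file:read_file(Path),
    Tree = mochiweb_html:parse(Html),
    walk(Tree, {[], undefined}).

walk({<<"title">>, _Attrs, [Text | _]}, {Metas, undefined}) when is_binary(Text) ->
    {Metas, {title, binary_to_list(Text)}};
walk({<<"meta">>, Attrs, _}, {Metas, Title} = Acc) ->
    Name = proplists:get_value(<<"name">>, Attrs),
    Content = proplists:get_value(<<"content">>, Attrs),
    case is_binary(Name) andalso is_binary(Content) of
        true  -> {[{binary_to_list(Name), binary_to_list(Content)} | Metas], Title};
        false -> Acc
    end;
walk({_Tag, _Attrs, Children}, Acc) ->
    lists:foldl(fun walk/2, Acc, Children);
walk(_TextOrComment, Acc) ->
    Acc.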

Testing the module in the shell... awesome results!

2> spider_bot:parse_url("http://erlang.org").
[[[],[],
  {"keywords",
   "erlang, functional, programming, fault-tolerant, distributed, multi-platform, portable, software, multi-core, smp, concurrency "},
  {"description","open-source erlang official website"}],
 {title,"erlang programming language, official website"}]

3> spider_bot:parse_url("http://facebook.com").
[[{"description",
   " facebook is a social utility that connects people with friends and others who work, study and live around them. people use facebook to keep up with friends, upload an unlimited number of photos, post links
 and videos, and learn more about the people they meet."},
  {"robots","noodp,noydir"},
    [],[],[],[]],
 {title,"incompatible browser | facebook"}]

4> spider_bot:parse_url("http://python.org").
[[{"description",
   "      home page for python, an interpreted, interactive, object-oriented, extensible\n      programming language. it provides an extraordinary combination of clarity and\n      versatility, and is free and
comprehensively ported."},
  {"keywords",
   "python programming language object oriented web free source"},
  []],
 {title,"python programming language – official website"}]

5> spider_bot:parse_url("http://www.house.gov/").
[[[],[],[],
  {"description",
   "home page of the united states house of representatives"},
  {"description",
   "home page of the united states house of representatives"},
  [],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
  [],[],[]|...],
 {title,"united states house of representatives, 111th congress, 2nd session"}]

You can now see that we can index pages against their keywords, along with a good schedule of page revisits. Another challenge was how to make a crawler (something that moves around the whole web, from domain to domain), but that one is easy. It is possible by parsing an HTML file for the href tags: have the HTML parser extract all href attributes, and then a few regular expressions here and there will keep only the links under a given domain, as in the sketch below.
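
A sketch of that link-extraction step: collect every href from the mochiweb parse tree, then keep only the links that fall under the domain being crawled. The module name and the regular expression are illustrative (dots in Domain are not escaped, which is good enough for a sketch):

-module(link_extract).
-export([links/2]).

%% links(Tree, "erlang.org") -> ["http://erlang.org/...", ...]
links(Tree, Domain) ->
    Hrefs = collect(Tree, []),
    Re = "^https?://([^/]*\\.)?" ++ Domain ++ "(/|$)",
    [H || H <- Hrefs, re:run(H, Re, [{capture, none}]) =:= match].

%% gather every href attribute of every <a> tag in the tree
collect({<<"a">>, Attrs, Children}, Acc0) ->
    Acc = case proplists:get_value(<<"href">>, Attrs) of
              Href when is_binary(Href) -> [binary_to_list(Href) | Acc0];
              _                         -> Acc0
          end,
    lists:foldl(fun collect/2, Acc, Children);
collect({_Tag, _Attrs, Children}, Acc) ->
    lists:foldl(fun collect/2, Acc, Children);
collect(_Other, Acc) ->
    Acc.

Note that relative links such as "/about.html" are dropped by this filter; a real crawler would resolve them against the page's base URL first.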

Running the crawler

7> spider_connect:conn2("http://erlang.org").        

        Links: ["http://www.erlang.org/index.html",
                "http://www.erlang.org/rss.xml",
                "http://erlang.org/index.html","http://erlang.org/about.html",
                "http://erlang.org/download.html",
                "http://erlang.org/links.html","http://erlang.org/faq.html",
                "http://erlang.org/eep.html",
                "http://erlang.org/starting.html",
                "http://erlang.org/doc.html",
                "http://erlang.org/examples.html",
                "http://erlang.org/user.html",
                "http://erlang.org/mirrors.html",
                "http://www.pragprog.com/titles/jaerlang/programming-erlang",
                "http://oreilly.com/catalog/9780596518189",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/ErlangUserConference2010/speakers",
                "http://erlang.org/download/otp_src_R14B.readme",
                "http://erlang.org/download.html",
                "https://www.erlang-factory.com/conference/ErlangUserConference2010/register",
                "http://www.erlang-factory.com/conference/ErlangUserConference2010/submit_talk",
                "http://www.erlang.org/workshop/2010/",
                "http://erlangcamp.com","http://manning.com/logan",
                "http://erlangcamp.com","http://twitter.com/erlangcamp",
                "http://www.erlang-factory.com/conference/London2010/speakers/joearmstrong/",
                "http://www.erlang-factory.com/conference/London2010/speakers/RobertVirding/",
                "http://www.erlang-factory.com/conference/London2010/speakers/MartinOdersky/",
                "http://www.erlang-factory.com/",
                "http://erlang.org/download/otp_src_R14A.readme",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/London2010",
                "http://github.com/erlang/otp",
                "http://erlang.org/download.html",
                "http://erlang.org/doc/man/erl_nif.html",
                "http://github.com/erlang/otp",
                "http://erlang.org/download.html",
                "http://www.erlang-factory.com/conference/ErlangUserConference2009",
                "http://erlang.org/doc/efficiency_guide/drivers.html",
                "http://erlang.org/download.html",
                "http://erlang.org/workshop/2009/index.html",
                "http://groups.google.com/group/erlang-programming",
                "http://www.erlang.org/eeps/eep-0010.html",
                "http://erlang.org/download/otp_src_R13B.readme",
                "http://erlang.org/download.html",
                "http://oreilly.com/catalog/9780596518189",
                "http://www.erlang-factory.com",
                "http://www.manning.com/logan",
                "http://www.erlang.se/euc/08/index.html",
                "http://erlang.org/download/otp_src_R12B-5.readme",
                "http://erlang.org/download.html",
                "http://erlang.org/workshop/2008/index.html",
                "http://www.erlang-exchange.com",
                "http://erlang.org/doc/highlights.html",
                "http://www.erlang.se/euc/07/",
                "http://www.erlang.se/workshop/2007/",
                "http://erlang.org/eep.html",
                "http://erlang.org/download/otp_src_R11B-5.readme",
                "http://pragmaticprogrammer.com/titles/jaerlang/index.html",
                "http://erlang.org/project/test_server",
                "http://erlang.org/download-stats/",
                "http://erlang.org/user.html#smtp_client-1.0",
                "http://erlang.org/user.html#xmlrpc-1.13",
                "http://erlang.org/EPLICENSE",
                "http://erlang.org/project/megaco/",
                "http://www.erlang-consulting.com/training_fs.html",
                "http://erlang.org/old_news.html"]
ok

Storage: this is one of the most important concepts for a search engine. It is a big mistake to store search-engine data in an RDBMS like MySQL, Oracle or MS SQL. Such systems are complex, and the applications that interface with them resort to heuristic algorithms. This brings us to key-value stores, of which my two favourites are Couchbase Server and Riak; both are great distributed data stores. Another important parameter is caching. Caching is attained using, say, Memcached, which both of the storage systems mentioned above support. Storage systems for search engines ought to be schemaless DBMSs that favour availability over consistency. Read more on search engines here: http://en.wikipedia.org/wiki/Web_search_engine
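
Since the original question asks specifically about Mnesia, here is a minimal sketch of what a keyword-to-URL inverted index could look like in Mnesia (a bag table). Whether Mnesia copes with web-scale volumes is a separate question, and the module, record and function names are illustrative:

-module(kw_index).
-export([create_table/0, index/2, lookup/1]).

-record(keyword_url, {keyword, url}).

%% assumes mnesia:create_schema([node()]) and mnesia:start() have been run
create_table() ->
    mnesia:create_table(keyword_url,
                        [{attributes, record_info(fields, keyword_url)},
                         {type, bag},
                         {disc_copies, [node()]}]).

%% store one row per (keyword, url) pair
index(Url, Keywords) ->
    mnesia:transaction(
      fun() ->
              [mnesia:write(#keyword_url{keyword = K, url = Url}) || K <- Keywords]
      end).

%% all URLs recorded for a keyword
lookup(Keyword) ->
    {atomic, Recs} = mnesia:transaction(
                       fun() -> mnesia:read(keyword_url, Keyword) end),
    [R#keyword_url.url || R <- Recs].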

娇妻 2024-07-13 09:36:05

As far as I know, Powerset's natural language processing search engine is developed using Erlang.

Did you look at CouchDB (which is written in Erlang as well) as a possible tool to help you solve a few problems along the way?

ι不睡觉的鱼゛ 2024-07-13 09:36:05

I would recommend CouchDB instead of Mnesia.

YAWS is pretty good. You should also consider MochiWeb.

You won't go wrong with Erlang.

得不到的就毁灭 2024-07-13 09:36:05

In the 'rdbms' contrib, there is an implementation of the Porter Stemming Algorithm. It was never integrated into 'rdbms', so it's basically just sitting out there. We have used it internally, and it worked quite well, at least for datasets that weren't huge (I haven't tested it on huge data volumes).

The relevant modules are:

rdbms_wsearch.erl
rdbms_wsearch_idx.erl
rdbms_wsearch_porter.erl

Then there is, of course, the Disco Map-Reduce framework.

Whether or not you can make the fastest engine out there, I couldn't say. Is there a market for a faster search engine? I've never had problems with the speed of e.g. Google. But a search facility that increased my chances of finding good answers to my questions would interest me.
