"Tracking" clients with Thrift

Posted 2024-08-29 02:18:42


I am trying to create a game using Thrift, so that the clients are the players, and the server manages the boards, much like this. However, I cannot figure out how Facebook's Thrift server can "track" the user, i.e. when calling attack() on their service, I do not have to identify myself again.
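For concreteness, the client side of such a call might look roughly like this (a sketch only; ConnectFourClient, the attack() signature, and the host/port are assumptions, since the question links to the IDL rather than showing it):

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include "ConnectFour.h"  // hypothetical generated client header

using namespace apache::thrift;
using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using boost::shared_ptr;  // std::shared_ptr in newer Thrift releases

int main() {
    shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
    shared_ptr<TTransport> transport(new TBufferedTransport(socket));
    shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
    ConnectFourClient client(protocol);

    transport->open();
    // Nothing in this call says who the player is; only the open
    // connection itself can identify the caller.
    client.attack(3);
    transport->close();
    return 0;
}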

According to what the generated server stub suggests, there is no way to do this:

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TSimpleServer.h>
#include <thrift/transport/TServerSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include "ConnectFour.h"  // generated; ConnectFourHandler implements ConnectFourIf

using namespace apache::thrift;
using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using namespace apache::thrift::server;
using boost::shared_ptr;  // std::shared_ptr in newer Thrift releases

int main(int argc, char **argv) {
  int port = 9090;
  shared_ptr<ConnectFourHandler> handler(new ConnectFourHandler());
  shared_ptr<TProcessor> processor(new ConnectFourProcessor(handler));
  shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
  shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
  shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

  TSimpleServer server(processor, serverTransport, transportFactory, protocolFactory);
  server.serve();
  return 0;
}

In that example, only one handler is created for the whole server, and it is the server that accepts connections.

How is Facebook able to keep track of what clients are connected to the server, if all requests are routed through only one handler per server?


Comments (2)

奈何桥上唱咆哮 2024-09-05 02:18:42


Try server.setServerEventHandler. It'll call your code when a new client connects, giving you the opportunity to create a connection-specific context object.
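A minimal sketch of that approach, assuming the Apache Thrift C++ TServerEventHandler interface (ConnectionContext and playerId are hypothetical placeholders here, and older Thrift releases use boost::shared_ptr where newer ones use std::shared_ptr):

#include <thrift/server/TServer.h>

using apache::thrift::server::TServerEventHandler;
using apache::thrift::protocol::TProtocol;
using boost::shared_ptr;  // std::shared_ptr in newer Thrift releases

// Hypothetical per-connection state.
struct ConnectionContext {
    int playerId;
};

class GameEventHandler : public TServerEventHandler {
public:
    // Called once when a client connects; the pointer returned here is
    // handed back on every later callback for that connection.
    virtual void* createContext(shared_ptr<TProtocol> input,
                                shared_ptr<TProtocol> output) {
        ConnectionContext* ctx = new ConnectionContext();
        ctx->playerId = -1;  // not identified yet
        return ctx;
    }

    // Called once when the client disconnects, naturally or by error.
    virtual void deleteContext(void* serverContext,
                               shared_ptr<TProtocol> input,
                               shared_ptr<TProtocol> output) {
        delete static_cast<ConnectionContext*>(serverContext);
    }
};

It would be wired up with something like server.setServerEventHandler(shared_ptr<GameEventHandler>(new GameEventHandler())); the same context pointer is also passed to the event handler's processContext callback before each call on that connection.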

﹉夏雨初晴づ 2024-09-05 02:18:42


Because Thrift uses a dedicated thread for each connection (with TThreadedServer, as below), it is possible to use the thread id to link the two together. As far as I know there is no way to do this from Thrift itself. In my opinion it would be possible if Thrift passed a context field down to each handler function.

Here is an example that uses the thread id:

#include <atomic>
#include <cstdio>
#include <iostream>
#include <map>
#include <mutex>
#include <thread>

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TThreadedServer.h>
#include <thrift/transport/TServerSocket.h>
#include <thrift/transport/TBufferTransports.h>

using namespace apache::thrift;
using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using namespace apache::thrift::server;
using boost::shared_ptr;  // std::shared_ptr in newer Thrift releases

struct ConnectionContext {
    // You would put your connection-specific variables here.
};

// You could reverse these types if you had more than one context per
// Thrift connection, e.g. if your service involved opening or
// connecting to more than one thing per Thrift connection.
std::map<std::thread::id, ConnectionContext> threadContextMap;
// The map is touched from every connection thread, so guard it.
std::mutex threadContextMutex;

class OurEventHandler : public TServerEventHandler {
public:
    OurEventHandler() : NumClients_(0) {}

    // Called before the server begins:
    // virtual void preServe() {}

    // createContext runs once per connection, on that connection's
    // dedicated thread, so the thread id identifies the connection.
    // The per-connection state lives in threadContextMap, hence NULL here.
    virtual void* createContext(shared_ptr<TProtocol> input,
                                shared_ptr<TProtocol> output) {
        printf("Client connected (total %u)\n", ++NumClients_);

        auto this_id = std::this_thread::get_id();
        std::cout << "connected thread " << this_id << std::endl;

        std::lock_guard<std::mutex> lock(threadContextMutex);
        threadContextMap[this_id] = ConnectionContext();
        return NULL;
    }

    // Called when a client has disconnected, either naturally or by error.
    virtual void deleteContext(void* serverContext,
                               shared_ptr<TProtocol> input,
                               shared_ptr<TProtocol> output) {
        printf("Client disconnected (total %u)\n", --NumClients_);

        auto this_id = std::this_thread::get_id();
        std::cout << "disconnected thread " << this_id << std::endl;

        std::lock_guard<std::mutex> lock(threadContextMutex);
        // TODO: perform your context-specific cleanup here,
        // then discard this connection's entry.
        threadContextMap.erase(this_id);
    }

protected:
    std::atomic<uint32_t> NumClients_;  // bumped from several threads
};

// "service_rpcIf" and "yourRpcProcessor" stand in for your generated
// service interface and processor classes.
class yourRpcHandler : virtual public service_rpcIf {
public:
    yourRpcHandler() {
        // Your initialization goes here.
    }

    void SomeMethod() {
        // Running on the connection's thread, so the thread id keys
        // straight back to this connection's context.
        std::lock_guard<std::mutex> lock(threadContextMutex);
        auto& context = threadContextMap[std::this_thread::get_id()];

        // TODO: use the context as you see fit.
    }
};

int main(int argc, char **argv) {
    int port = 9090;
    printf("Listening on port %d\n", port);

    shared_ptr<yourRpcHandler> handler(new yourRpcHandler());
    shared_ptr<TProcessor> processor(new yourRpcProcessor(handler));
    shared_ptr<TServerTransport> serverTransport(new TServerSocket(port));
    shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
    shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

    // TThreadedServer is what makes the thread-per-connection
    // assumption above hold.
    TThreadedServer server(processor, serverTransport, transportFactory, protocolFactory);
    shared_ptr<OurEventHandler> EventHandler(new OurEventHandler());

    server.setServerEventHandler(EventHandler);
    server.serve();
    return 0;
}