
3.2 The Complete Function/Method Call Flow in the Binder Native Layer

Published on 2024-12-23 21:29:01

Binder native function call diagram

1. The Java-layer IRemoteService.Stub.Proxy calls android.os.IBinder.transact() (implemented by android.os.Binder's inner class BinderProxy) to send the Stub.TRANSACTION_addUser command.

2. BinderProxy.transact() crosses into the native layer.

3. JNI routes the call to the android_os_BinderProxy_transact() function.

4. That function calls the target IBinder's transact() function.

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
    jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
  Parcel* data = parcelForJavaObject(env, dataObj);
  Parcel* reply = parcelForJavaObject(env, replyObj);
  ...
  IBinder* target = (IBinder*)
    env->GetLongField(obj, gBinderProxyOffsets.mObject);
  ...
  status_t err = target->transact(code, *data, reply, flags);
  ...
}

gBinderProxyOffsets.mObject is set inside the javaObjectForIBinder() function when the Java layer calls BinderInternal.getContextObject():

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
  sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
  return javaObjectForIBinder(env, b);
}

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
  ...
  LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
  // The proxy holds a reference to the native object.
  env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
  val->incStrong((void*)javaObjectForIBinder);
  ...
}
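The pattern above, a managed proxy object holding a raw pointer to a ref-counted native object in a long field and bumping its strong count so it stays alive, can be sketched as follows (the types here are hypothetical stand-ins, not the real android::RefBase/BinderProxy API):

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in for a ref-counted native object (hypothetical,
// not the real android::RefBase).
struct Native {
    int strongRefs = 0;
    void incStrong() { ++strongRefs; }
    void decStrong() { --strongRefs; }  // the real code frees at zero
};

// Mimics the Java BinderProxy: the native pointer is smuggled through
// an integer field (the jlong behind gBinderProxyOffsets.mObject).
struct Proxy {
    int64_t mObject = 0;
};

// Mimics javaObjectForIBinder: store the pointer and take a strong
// reference so the native object outlives the proxy's calls.
Proxy makeProxy(Native* n) {
    Proxy p;
    p.mObject = reinterpret_cast<int64_t>(n);
    n->incStrong();
    return p;
}

// Mimics android_os_BinderProxy_transact: recover the pointer from the
// long field before calling into the native object.
Native* targetOf(const Proxy& p) {
    return reinterpret_cast<Native*>(p.mObject);
}
```

The GetLongField/SetLongField pair in the real code plays exactly the role of the mObject field here.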

The call passes through ProcessState::getContextObject() into ProcessState::getStrongProxyForHandle():

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
  return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
  sp<IBinder> result;
  ...
  b = new BpBinder(handle); 
  result = b;
  ...
  return result;
}

So the IBinder that android_os_BinderProxy_transact() operates on is a BpBinder, and the call lands in BpBinder::transact().
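The lookup-or-create behavior elided in the getStrongProxyForHandle() excerpt can be sketched like this (a simplification: the real code keeps a per-process vector, mHandleToObject, and also deals with weak references and the special handle 0):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>

// Toy stand-in for BpBinder: a proxy that remembers only the integer
// handle naming the remote object in the binder driver.
struct ToyProxy {
    explicit ToyProxy(int32_t h) : handle(h) {}
    int32_t handle;
};

// One proxy per handle, created lazily on first use.
std::map<int32_t, std::shared_ptr<ToyProxy>> gProxies;

std::shared_ptr<ToyProxy> proxyForHandle(int32_t handle) {
    auto it = gProxies.find(handle);
    if (it != gProxies.end())
        return it->second;                        // reuse the cached proxy
    auto p = std::make_shared<ToyProxy>(handle);  // b = new BpBinder(handle)
    gProxies[handle] = p;
    return p;
}
```

Handle 0 is reserved for the context manager (service manager), which is why getContextObject() simply returns getStrongProxyForHandle(0).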

5. BpBinder::transact() in turn calls IPCThreadState::self()->transact().

status_t IPCThreadState::transact(int32_t handle,
                  uint32_t code, const Parcel& data,
                  Parcel* reply, uint32_t flags)
{
  status_t err = data.errorCheck();

  flags |= TF_ACCEPT_FDS;
  
  if (err == NO_ERROR) {
    LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
      (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
  }
  
  if ((flags & TF_ONE_WAY) == 0) {
    if (reply) {
      err = waitForResponse(reply);
    } else {
      Parcel fakeReply;
      err = waitForResponse(&fakeReply);
    }
  } else {
    err = waitForResponse(NULL, NULL);
  }
  
  return err;
}

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
  int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
  binder_transaction_data tr;

  tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
  tr.target.handle = handle;
  tr.code = code;
  ...
  
  mOut.writeInt32(cmd);
  mOut.write(&tr, sizeof(tr));
  
  return NO_ERROR;
}

As the function bodies show, the data is passed once more, through writeTransactionData(), into mOut. mOut is a Parcel object declared in IPCThreadState.h. After that, transact() calls waitForResponse().
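The buffering that writeTransactionData() performs can be sketched with a toy Parcel (the layout mirrors the excerpt: a command word followed by the raw transaction struct; the TOY_* value and the simplified struct are made up for illustration, not the kernel's):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy stand-in for the mOut Parcel: a flat byte buffer that commands
// and their payload structs are appended to.
struct ToyParcel {
    std::vector<uint8_t> buf;

    void writeRaw(const void* p, size_t n) {
        const uint8_t* b = static_cast<const uint8_t*>(p);
        buf.insert(buf.end(), b, b + n);
    }
    void writeInt32(int32_t v) { writeRaw(&v, sizeof(v)); }
};

// Simplified binder_transaction_data: only the fields the article shows.
struct ToyTxn {
    int32_t handle;
    uint32_t code;
};

const int32_t TOY_BC_TRANSACTION = 0;  // hypothetical value

// Mirrors writeTransactionData: the transaction is fully described in
// mOut before talkWithDriver ever runs; nothing is sent to the driver yet.
void writeTransactionData(ToyParcel& out, int32_t handle, uint32_t code) {
    ToyTxn tr{handle, code};
    out.writeInt32(TOY_BC_TRANSACTION);  // mOut.writeInt32(cmd)
    out.writeRaw(&tr, sizeof(tr));       // mOut.write(&tr, sizeof(tr))
}
```

The key point is that writeTransactionData() only stages the command; the actual hand-off to the driver happens later, in talkWithDriver().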

6. IPCThreadState::waitForResponse() loops, repeatedly calling talkWithDriver() and checking whether data has come back.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
  uint32_t cmd;
  int32_t err;

  while (1) {
    if ((err=talkWithDriver()) < NO_ERROR) break;
    ...
    
    cmd = (uint32_t)mIn.readInt32();

    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
      ...
    
    case BR_REPLY:
      {
        binder_transaction_data tr;
        err = mIn.read(&tr, sizeof(tr));
        ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
        if (err != NO_ERROR) goto finish;

        if (reply) {
          if ((tr.flags & TF_STATUS_CODE) == 0) {
            reply->ipcSetDataReference(
              reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
              tr.data_size,
              reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
              tr.offsets_size/sizeof(binder_size_t),
              freeBuffer, this);
          } else {
            err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
            freeBuffer(NULL,
              reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
              tr.data_size,
              reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
              tr.offsets_size/sizeof(binder_size_t), this);
          }
        } else {
          freeBuffer(NULL,
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
            tr.offsets_size/sizeof(binder_size_t), this);
          continue;
        }
      }
      goto finish;

    default:
      err = executeCommand(cmd);
      if (err != NO_ERROR) goto finish;
      break;
    }
  }
  ...
}

7. IPCThreadState::talkWithDriver() is where the real interaction with the binder driver happens: ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) uses the ioctl system call to send the BINDER_WRITE_READ command to the binder device file /dev/binder.

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
  if (mProcess->mDriverFD <= 0) {
    return -EBADF;
  }
  
  binder_write_read bwr;
  
  // Is the read buffer empty?
  const bool needRead = mIn.dataPosition() >= mIn.dataSize();
  
  // We don't want to write anything if we are still reading
  // from data left in the input buffer and the caller
  // has requested to read the next data.
  const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
  
  bwr.write_size = outAvail;
  bwr.write_buffer = (uintptr_t)mOut.data();

  // This is what we'll read.
  if (doReceive && needRead) {
    bwr.read_size = mIn.dataCapacity();
    bwr.read_buffer = (uintptr_t)mIn.data();
  } else {
    bwr.read_size = 0;
    bwr.read_buffer = 0;
  }
  
  // Return immediately if there is nothing to do.
  if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

  bwr.write_consumed = 0;
  bwr.read_consumed = 0;
  status_t err;
  do {
#if defined(HAVE_ANDROID_OS)
    // send the BINDER_WRITE_READ command to /dev/binder via the ioctl syscall
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
      err = NO_ERROR;
    else
      err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    if (mProcess->mDriverFD <= 0) {
      err = -EBADF;
    }
  } while (err == -EINTR);  // retry the ioctl if interrupted by a signal

  if (err >= NO_ERROR) {
    if (bwr.write_consumed > 0) {
      if (bwr.write_consumed < mOut.dataSize())
        mOut.remove(0, bwr.write_consumed);
      else
        mOut.setDataSize(0);
    }
    if (bwr.read_consumed > 0) {
      mIn.setDataSize(bwr.read_consumed);
      mIn.setDataPosition(0);
    }
    return NO_ERROR;
  }
  
  return err;
}

After IPCThreadState::talkWithDriver() returns, the data has been handed to the Binder driver.
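The bookkeeping at the end of talkWithDriver() is worth isolating: after the ioctl, the driver reports how much of the write buffer it consumed and how much it wrote into the read buffer, and the Parcels are adjusted accordingly. A minimal sketch (toy buffer types, not the real Parcel API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The two counters the driver fills in on return from
// ioctl(BINDER_WRITE_READ).
struct ToyBwr {
    size_t write_consumed = 0;
    size_t read_consumed = 0;
};

struct ToyBuffers {
    std::vector<char> out;  // plays the role of mOut
    size_t inSize = 0;      // valid bytes in mIn (mIn.setDataSize)
};

// Mirrors the tail of talkWithDriver: drop the consumed prefix of mOut
// (or clear it entirely) and record how much of mIn now holds fresh
// driver data.
void afterIoctl(ToyBuffers& b, const ToyBwr& bwr) {
    if (bwr.write_consumed > 0) {
        if (bwr.write_consumed < b.out.size())
            b.out.erase(b.out.begin(),
                        b.out.begin() + static_cast<std::ptrdiff_t>(bwr.write_consumed));
        else
            b.out.clear();             // mOut.setDataSize(0)
    }
    if (bwr.read_consumed > 0)
        b.inSize = bwr.read_consumed;  // mIn.setDataSize(bwr.read_consumed)
}
```

This is why waitForResponse() can simply call mIn.readInt32() afterwards: talkWithDriver() has already repositioned mIn over whatever the driver returned.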

Returning to IPCThreadState::waitForResponse() from step 6: IPCThreadState keeps looping over what the Binder driver returns, and any command that the switch does not handle inline falls through to executeCommand(cmd) in the default case.

8. IPCThreadState::executeCommand() handles the commands returned by the Binder driver:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
  BBinder* obj;
  RefBase::weakref_type* refs;
  status_t result = NO_ERROR;
  
  switch ((uint32_t)cmd) {
  ...
  
  case BR_TRANSACTION:
    {
      binder_transaction_data tr;
      result = mIn.read(&tr, sizeof(tr));
      ...
      Parcel buffer;
      buffer.ipcSetDataReference(
        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
        tr.data_size,
        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
        tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
      ...

      Parcel reply;
      status_t error;
      if (tr.target.ptr) {
        sp<BBinder> b((BBinder*)tr.cookie);
        error = b->transact(tr.code, buffer, &reply, tr.flags);

      } else {
        error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
      }
      ...
    }
    break;
  ...
}

9. As the code shows, it calls BBinder::transact(), which passes the data up to the upper layers.

status_t BBinder::transact(
  uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
  data.setDataPosition(0);

  status_t err = NO_ERROR;
  switch (code) {
    case PING_TRANSACTION:
      reply->writeInt32(pingBinder());
      break;
    default:
      err = onTransact(code, data, reply, flags);
      break;
  }

  if (reply != NULL) {
    reply->setDataPosition(0);
  }

  return err;
}

10. The b (a BBinder*) in b->transact(tr.code, buffer, &reply, tr.flags) is actually a JavaBBinder instance, so the call ends up in JavaBBinder::onTransact():

// frameworks/base/core/jni/android_util_Binder.cpp
virtual status_t onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
  {
    JNIEnv* env = javavm_to_jnienv(mVM);
    ...
    jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
      code, reinterpret_cast<jlong>(&data), reinterpret_cast<jlong>(reply), flags);
  }
  
static int int_register_android_os_Binder(JNIEnv* env)
{
  ...
  gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
  ...
}

11. Through gBinderOffsets.mExecTransact, JNI finally invokes android.os.Binder's execTransact() method.

execTransact() is the entry point for the JNI callback:

// Entry point from android_util_Binder.cpp's onTransact
  private boolean execTransact(int code, long dataObj, long replyObj,
      int flags) {
    Parcel data = Parcel.obtain(dataObj);
    Parcel reply = Parcel.obtain(replyObj);
    ...
    try {
      res = onTransact(code, data, reply, flags);
    } 
    ...
  }

12. On the server side, onTransact() is overridden in IRemoteService.Stub, so the data finally arrives back in our service and the server-side addUser() implementation runs:

public static abstract class Stub extends android.os.Binder
    implements org.xdty.remoteservice.IRemoteService {
  ...
  @Override
  public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply,
      int flags) throws android.os.RemoteException {
    switch (code) {
      case INTERFACE_TRANSACTION: {
        reply.writeString(DESCRIPTOR);
        return true;
      }
      case TRANSACTION_basicTypes: {
        ...
        return true;
      }
      case TRANSACTION_addUser: {
        data.enforceInterface(DESCRIPTOR);
        org.xdty.remoteservice.User _arg0;
        if ((0 != data.readInt())) {
          _arg0 = org.xdty.remoteservice.User.CREATOR.createFromParcel(data);
        } else {
          _arg0 = null;
        }
        this.addUser(_arg0);
        reply.writeNoException();
        return true;
      }
    }
    return super.onTransact(code, data, reply, flags);
  }
}

That is the entire native-layer call flow from client to server. In short: the client process sends BC_TRANSACTION to the Binder driver, and the server process receives and handles the resulting BR_TRANSACTION command. Returning data from server to client is symmetric: the server sends BC_REPLY, and the client waits for BR_REPLY.
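The BC_*/BR_* handshake in this summary can be modeled with a toy "driver" that translates each process's outgoing BC_* command into the matching BR_* delivery for its peer (the queues and functions here are an illustration of the protocol shape, not the kernel implementation):

```cpp
#include <cassert>
#include <deque>

// The four command names from the summary.
enum Cmd { BC_TRANSACTION, BR_TRANSACTION, BC_REPLY, BR_REPLY };

// Toy "driver": two one-way queues standing in for the write/read
// halves of each process's BINDER_WRITE_READ ioctl.
struct ToyDriver {
    std::deque<Cmd> toServer, toClient;

    // client writes BC_TRANSACTION; the server will read BR_TRANSACTION
    void clientWrite(Cmd c) {
        if (c == BC_TRANSACTION) toServer.push_back(BR_TRANSACTION);
    }
    // server writes BC_REPLY; the client will read BR_REPLY
    void serverWrite(Cmd c) {
        if (c == BC_REPLY) toClient.push_back(BR_REPLY);
    }

    Cmd serverRead() { Cmd c = toServer.front(); toServer.pop_front(); return c; }
    Cmd clientRead() { Cmd c = toClient.front(); toClient.pop_front(); return c; }
};

// One full synchronous call: the client sends BC_TRANSACTION, the
// server sees BR_TRANSACTION and answers with BC_REPLY, and the
// client's waitForResponse loop finally reads BR_REPLY.
Cmd roundTrip(ToyDriver& d) {
    d.clientWrite(BC_TRANSACTION);
    if (d.serverRead() == BR_TRANSACTION)
        d.serverWrite(BC_REPLY);
    return d.clientRead();
}
```

The mnemonic: BC_* commands flow from a process to the driver, BR_* commands flow from the driver back to a process; the driver is always the pivot between the two sides.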
