How to create a function at runtime in Objective-C

Published 2024-10-27 09:03:56


So it's late here, and my Google skills seem to be failing me. I've found some great responses on SO before (time and time again), so I thought you guys could help.

I have a neural network I'm trying to run in native Objective-C. It works, but it's too slow. These networks are not recurrent. I run each network about 20,000 times (128×80, or around that). The problem is that these networks really just boil down to math functions (each network is a 4-dimensional function, taking x, y, dist(x,y), and bias as inputs, and outputting 3 values).

What I want to do is convert each network (only once) into a function call, or a block of code, at runtime in Objective-C.

How do I do this? I could build a big string of the math operations that need to be performed, but how do I go about executing that string, or converting it into a block of code for execution?

Again, my late-night search failed me, so sorry if this has already been answered. Any help is greatly appreciated.

-Paul

Edit: Aha! Great success! Nearly 24 hours later, I have working code to turn a neural network with up to 4 inputs into a single 4 dimensional function. I used the block method suggested by Dave DeLong in the answers.

For anybody who wants to follow what I did, here is a (quick) breakdown (excuse me if this is incorrect etiquette on Stack Overflow):
First, I made a few typedefs for the different block functions:

typedef CGFloat (^oneDFunction)(CGFloat x);
typedef CGFloat (^twoDFunction)(CGFloat x, CGFloat y);
typedef CGFloat (^threeDFunction)(CGFloat x, CGFloat y, CGFloat z);
typedef CGFloat (^fourDFunction)(CGFloat x, CGFloat y, CGFloat z, CGFloat w);

A oneDFunction takes the form f(x), a twoDFunction is f(x,y), and so on. Then I made functions to combine two fourDFunction blocks (and two oneDs, two twoDs, etc., although those weren't necessary).

fourDFunction (^combineFourD) (fourDFunction f1, fourDFunction f2) =
^(fourDFunction f1, fourDFunction f2){
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){
        return f1(x,y,z,w) + f2(x,y,z,w);
    };
    fourDFunction act = [blockToCopy copy];
    [f1 release];
    [f2 release];
    //Need to release act at some point
    return act;
};

And, of course, I needed to apply the activation function to each node's fourD function, and multiply each incoming function by the weight of its connection:

//for applying the activation function
fourDFunction (^applyOneToFourD)( oneDFunction f1, fourDFunction f2) = 
^(oneDFunction f1, fourDFunction f2){
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){
        return f1(f2(x,y,z,w));
    };    

    fourDFunction act = [blockToCopy copy];
    [f1 release];
    [f2 release];

    //Need to release act at some point
    return act; 

};

//For applying the weight to the function
fourDFunction (^weightCombineFour) (CGFloat x, fourDFunction f1) =
 ^(CGFloat weight, fourDFunction f1)
{
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){

        return weight*f1(x,y,z,w);
    };

    fourDFunction act = [blockToCopy copy];
    [f1 release];
    //[act release];
    //Need to release act at some point
   return act;

};

Then, for each node in the network, I simply applied the activation function to the sum of the fourD functions from the source neurons multiplied by their connection weight.
After composing all those blocks, I took the final functions from each output. Therefore, my outputs are separate 4D functions of the inputs.

Thanks for the help, this was very cool.


Comments (4)

鹤仙姿 2024-11-03 09:03:56


You can do this with blocks. Something like:

//specify some parameters
int parameter1 = 42;
int parameter2 = 54;
//create your block
int (^myBlock)(int) = ^(int parameter3){
  return parameter1 * parameter2 * parameter3;
};
//copy the block off the stack
myBlock = [myBlock copy];
//stash the block somewhere so that you can pull it out later
[self saveBlockOffSomewhereElse:myBlock underName:@"myBlock"];
//balance the call to -copy
[myBlock release];

And then elsewhere...

int (^retrievedBlock)(int) = [self retrieveBlockWithName:@"myBlock"];
int theAnswer = retrievedBlock(2);  //theAnswer is 4536

If you have a string representing some math to evaluate, you could check out GCMathParser (fast but not extensible) or my own DDMathParser (slower but extensible).

奈何桥上唱咆哮 2024-11-03 09:03:56


Your idea isn't very stupid. As a matter of fact, LLVM is designed to do exactly that kind of thing (generate code, compile, link, load and run) and it even has libraries to link against and APIs to use.

While you could go down a path of trying to piece together a bunch of blocks or primitives -- a sort of VM of your own -- it'll be slower and probably more maintenance. You'll end up having to write some kind of parser, write all the primitive blocks, and then piece it all together.

For code generation you'll probably still need a parser, obviously, but the resulting code is going to be much, much faster, because you can crank up the compiler's optimizer, and as long as you generate just one really big file of code, the optimizer will be even more effective.

I would suggest, though, that you generate your program and then run it externally to your app. That will prevent the hell that is trying to dynamically unload code. It also means that if the generated code crashes, it doesn't take out your application.

LLVM.org has a bunch of additional details.

(Historical note -- one early form of Pixar's modeling environment was a TCL based system that would emit, literally, hundreds of thousands of lines of heavily templated C++ code.)

谁的新欢旧爱 2024-11-03 09:03:56


Here's another possibility: Use OpenGL.

The sorts of functions you are executing in a neural network are very similar to those performed by GPUs: multiplication/scaling, distance, sigmoids, etc. You could encode your state in a bitmap, generate a pixel shader as ASCII, compile and link it using the provided library calls, then generate an output "bitmap" with the new state. Then switch the two bitmaps and iterate again.

Writing a pixel shader is not as hard as you might imagine. In the basic case you are given a pixel from the input bitmap/buffer and you compute a value to put in the output buffer. You also have access to all the other pixels in the input and output buffers, as well as arbitrary parameters you set globally, including "texture" bitmaps which might serve as arbitrary data vectors.

Modern GPUs have multiple pipelines, so you'd probably get much better performance than even native CPU machine code.

安稳善良 2024-11-03 09:03:56


Another vote for blocks. If you start with a bunch of blocks representing primitive operations, you could compose those into larger blocks that represent complex functions. For example, you might write a function that takes a number of blocks as parameters, copies each one in turn and uses it as the first parameter to the next block. The result of the function could be a block that represents a mathematical function.

Perhaps I'm talking crazy here due to the late hour, but it seems like the ability of blocks to refer to other blocks and to maintain state should make them very good for assembling operations.
