People always compare LabVIEW with C++ and say "oh, LabVIEW is high level and has so much built-in functionality - try acquiring data, doing an FFT, and displaying it; it's so easy in LabVIEW, now try it in C++".
Myth 1: It's hard to get anything done in C++ because it's so low level, while LabVIEW already has many things implemented.
The problem is that if you are developing a robotic system in C++, you MUST use libraries like OpenCV, PCL, etc., and you would be even smarter to use a software framework designed for building robotic systems, like ROS (Robot Operating System). So you need to use a full set of tools. In fact, there are more high-level tools available when you use ROS + Python/C++ with libraries such as OpenCV and PCL. I have used LabVIEW Robotics, and frankly, commonly used algorithms like ICP are not there, and it's not as though you can easily pull in other libraries.
Myth 2: Graphical programming languages are easier to understand.
It depends on the situation. When you are coding a complicated algorithm, the graphical elements take up valuable screen space and it becomes difficult to follow the method. To understand LabVIEW code you have to read over a two-dimensional area (effectively O(n^2)), whereas text code you just read top to bottom.
What if you have parallel systems? ROS implements a graph-based architecture built on subscriber/publisher messages implemented using callbacks, and it's pretty easy to have multiple programs running and communicating. Keeping the individual parallel components separate makes them easier to debug. For instance, stepping through parallel LabVIEW code is a nightmare because control flow jumps from one place to another. In ROS you don't explicitly draw out your architecture like in LabVIEW; however, you can still see it by running the command rosrun rqt_graph (which shows all connected nodes).
"The future of programming is graphical." (Think so?)
I hope not. The current implementation of LabVIEW does not allow mixing text-based and graphical coding. (There is MathScript, but it is incredibly slow.)
It's hard to debug because you can't hide the parallelism easily.
It's hard to read LabVIEW code because you have to look over so much area.
LabVIEW is great for data acquisition and signal processing, but not for experimental robotics, because most of the high-level components like SLAM (simultaneous localisation and mapping), point cloud registration, point cloud processing, etc. are missing. Even if they do add these components and make them as easy to integrate as in ROS, LabVIEW is proprietary and expensive, so it will never keep up with the open-source community.
In summary, if LabVIEW is the future for mechatronics, I am changing my career path to investment banking... If I can't enjoy my work I may as well make some money and retire early...
My gripe against LabVIEW (and MATLAB in this respect) is that if you plan on embedding the code in anything other than x86 (and LabVIEW has tools to put LabVIEW VIs on ARMs), then you'll have to throw as much horsepower at the problem as you can, because it's inefficient.
Labview is a great prototyping tool: lots of libraries, easy to string together blocks, maybe a little difficult to do advanced algorithms but there's probably a block for what you want to do. You can get functionality done quickly. But if you think you can take that VI and just put it on a device you're wrong. For instance, if you make an adder block in Labview you have two inputs and one output. What is the memory usage for that? Three variables worth of data? Two? In C or C++ you know, because you can either write z=x+y or x+=y and you know exactly what your code is doing and what the memory situation is. Memory usage can spike quickly especially because (as others have pointed out) Labview is highly parallel. So be prepared to throw more RAM than you thought at the problem. And more processing power.
In short, Labview is great for rapid prototyping but you lose too much control in other situations. If you're working with large amounts of data or limited memory/processing power then use a text-based programming language so you can control what goes on.
There are definitely merits to both choices; however, since your domain is an educational experience I think a C/C++ solution would most benefit the students. Graphical programming will always be an option but simply does not provide the functionality in an elegant manner that would make it more efficient to use than textual programming for low-level programming. This is not a bad thing - the whole point of abstraction is to allow a new understanding and view of a problem domain. The reason I believe many may be disappointed with graphical programming though is that, for any particular program, the incremental gain in going from programming in C to graphical is not nearly the same as going from assembly to C.
Knowledge of graphical programming would benefit any future programmer for sure. There will probably be opportunities in the future that only require knowledge of graphical programming and perhaps some of your students could benefit from some early experience with it. On the other hand, a solid foundation in fundamental programming concepts afforded by a textual approach will benefit all of your students and surely must be the better answer.
The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true?
I doubt that would be true for any single language, or paradigm. LabView could surely be easier for people with electronics engineering background; making programs in it is "simply" drawing wires. Then again, such people might already be exposed to programming, as well.
One essential difference - apart from the graphics - is that LV is demand-based (flow) programming. You start from the outcome and tell what is needed to get to it. Traditional programming tends to be imperative (going the other way round).
Some languages can provide both. I crafted a multithreading library for Lua recently (Lanes) and it can be used for demand-based programming in an otherwise imperative environment. I know there are successful robots run mostly in Lua out there (Crazy Ivan at Lua Oh Six).
It seems that if you are trying to prepare your team for a future in programming, then C(++) may be the better route. The promise of general programming languages built with visual building blocks has never seemed to materialize, and I am beginning to wonder if it ever will. It seems that while it can be done for specific problem domains, once you get into trying to solve many general problems, a text-based programming language is hard to beat.
At one time I had sort of bought into the idea of executable UML, but it seems that once you get past the object relationships and some of the process flows, UML would be a pretty miserable way to build an app. Imagine trying to wire it all up to a GUI. I wouldn't mind being proven wrong, but so far it seems unlikely we'll be doing point-and-click programming anytime soon.
I started with LabVIEW about 2 years ago and now use it all the time, so I may be biased, but I find it ideal for applications where data acquisition and control are involved.
We use LabVIEW mainly for testing where we take continuous measurements and control gas valves and ATE enclosures. This involves both digital and analogue input and outputs with signal analysis routines and process control all running from a GUI. By breaking down each part into subVIs we are able to reconfigure the tests with the click and drag of the mouse.
Not exactly the same as C/C++, but a similar implementation of measurement, control, and analysis using Visual BASIC appears complex and hard to maintain by comparison.
I think the process of programming is more important than the actual coding language and you should follow the style guidelines for a graphical programming language. LabVIEW block diagrams show the flow of data (Dataflow programming) so it should be easy to see potential race conditions although I've never had any problems. If you have a C codebase then building it into a dll will allow LabVIEW to call it directly.
I don't know anything about LabView (or much about C/C++), but..
Do you think that graphical languages such as LabVIEW are the future of programming?
No...
Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn.
Easier to learn? No, but they are easier to explain and understand.
To explain a programming language you have to explain what a variable is (which is surprisingly difficult). This isn't a problem with flowgraph/nodal coding tools, like the LEGO Mindstorms programming interface, or Quartz Composer..
For example, this is a fairly complicated LEGO Mindstorms program - it's very easy to understand what is going on... but what if you want the robot to run the INCREASEJITTER block 5 times, then drive forward for 10 seconds, then try the INCREASEJITTER loop again? Things start getting messy very quickly..
Quartz Composer is a great example of this, and why I don't think graphical languages will ever "be the future".
It makes it very easy to do really cool stuff (3D particle effects, with a camera controlled by the average brightness of pixels from a webcam).. but incredibly difficult to do easy things, like iterating over the elements of an XML file, or storing that average pixel value in a file.
Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code; great programmers copy great code." But isn't it worth being a good programmer first?)
For learning, it's so much easier to both explain and understand a graphical language..
That said, I would recommend a specialised text-based language as a progression. For example, for graphics something like Processing or NodeBox. They are "normal" languages (Processing is Java, NodeBox is Python) with very specialised, easy to use (but absurdly powerful) frameworks ingrained into them..
Importantly, they are very interactive languages, you don't have to write hundreds of lines just to get a circle onscreen.. You just type oval(100, 200, 10, 10) and press the run button, and it appears! This also makes them very easy to demonstrate and explain.
More robotics-related, even something like LOGO would be a good introduction - you type "FORWARD 1" and the turtle drives forward one box.. Type "LEFT 90" and it turns 90 degrees.. This relates to reality very simply. You can visualise what each instruction will do, then try it out and confirm it really works that way.
Show them shiny-looking things; they will pick up the useful stuff they'd learn from C along the way, and if they are interested or progress to the point where they need a "real" language, they'll have all that knowledge, rather than running into the syntax-error-and-compiling brick wall..
I have been using LabVIEW for about 20 years now and have done quite a variety of jobs, from simple DAQ to very complex visualization, from device control to test sequencers. If it were not good enough, I would surely have switched. That said, I started coding Fortran with punch cards and used a whole lot of programming languages on 8-bit 'machines', starting with Z80-based ones. The languages ranged from Assembler to BASIC, from Turbo Pascal to C.
LabVIEW was a major improvement because of its extensive libraries for data acquisition and analysis. One has, however, to learn a different paradigm. And you definitely need a trackball ;-))
I would suggest you use LabVIEW, as you can get down to making the robot do what you want faster and more easily. LabVIEW has been designed with this in mind. Of course C(++) are great languages, but LabVIEW does what it is supposed to do better than anything else.
People can write really good software in LabVIEW as it provides ample scope and support for that.
There is one huge thing I found negative in using LabVIEW for my applications: managing design complexity. As a physicist I find LabVIEW great for prototyping, instrument control, and mathematical analysis. There is no language in which you get a result faster and better than in LabVIEW. I used LabVIEW from 1997. In 2005 I switched completely to the .NET framework, since it is easier to design and maintain.
In LabVIEW a simple 'if' structure has to be drawn and uses a lot of space on your graphical design. I just found out that many of our commercial applications were hard to maintain. The more complex the application became, the more difficult it was to read.
I now use text languages and I am much better at maintaining everything. If you compare C++ to LabVIEW, I would use LabVIEW, but compared to C# it does not win.
I think that graphical languages might be the language of the future..... for all those adhoc MS Access developers out there. There will always be a spot for the purely textual coders.
Personally, I've got to ask what is the real fun of building a robot if it's all done for you? If you just drop a 'find the red ball' module in there and watch it go? What sense of pride will you have for your accomplishment? Personally, I wouldn't have much. Plus, what will it teach you of coding, or of the (very important) aspect of the software/hardware interface that is critical in robotics?
I don't claim to be an expert in the field, but ask yourself one thing: Do you think that NASA used LabVIEW to code the Mars Rovers? Do you think that anyone truly prominent in robotics is using LabView?
Really, if you ask me, the only thing using cookie cutter things like LabVIEW to build this is going to prepare you for is to be some backyard robot builder and nothing more. If you want something that will give you something more like industry experience, build your own 'LabVIEW'-type system. Build your own find-the-ball module, or your own 'follow-the-line' module. It will be far more difficult, but it will also be way more cool too. :D
You're in High School. How much time do you have to work on this program? How many people are in your group? Do they know C++ or LabView already?
From your question, I see that you know C++ and most of the group does not. I also suspect that the group leader is perceptive enough to notice that some members of the team may be intimidated by a text based programming language. This is acceptable, you're in high school, and these people are normies. I feel as though normal high schoolers will be able to understand LabView more intuitively than C++. I'm guessing most high school students, like the population in general, are scared of a command line. For you there is much less of a difference, but for them, it is night and day.
You are correct that the same concepts may be applied to LabView as C++. Each has its strengths and weaknesses. The key is selecting the right tool for the job. LabView was designed for this kind of application. C++ is much more generic and can be applied to many other kinds of problems.
I am going to recommend LabView. Given the right hardware, you can be up and running almost out-of-the-box. Your team can spend more time getting the robot to do what you want, which is what the focus of this activity should be.
Graphical Languages are not the future of programming; they have been one of the choices available, created to solve certain types of problems, for many years. The future of programming is layer upon layer of abstraction away from machine code. In the future, we'll be wondering why we wasted all this time programming "semantics" over and over.
how much should we rely on prewritten modules, and how much should we try to write on our own?
You shouldn't waste time reinventing the wheel. If there are device drivers available in Labview, use them. You can learn a lot by copying code that is similar in function and tailoring it to your needs - you get to see how other people solved similar problems, and have to interpret their solution before you can properly apply it to your problem. If you blindly copy code, chances of getting it to work are slim. You have to be good, even if you copy code.
Disclaimer: I've not witnessed LabVIEW, but I have used a few other graphical languages including WebMethods Flow and Modeller, dynamic simulation languages at university and, er, MIT's Scratch :).
My experience is that graphical languages can do a good job of the 'plumbing' part of programming, but the ones I've used actively get in the way of algorithmics. If your algorithms are very simple, that might be OK.
On the other hand, I don't think C++ is great for your situation either. You'll spend more time tracking down pointer and memory management issues than you do in useful work.
If your robot can be controlled using a scripting language (Python, Ruby, Perl, whatever), then I think that would be a much better choice.
Then there's hybrid options:
If there's no scripting option for your robot, and you have a C++ geek on your team, then consider having that geek write bindings to map your C++ library to a scripting language. This would allow people with other specialities to program the robot more easily. The bindings would make a good gift to the community.
If LabVIEW allows it, use its graphical language to plumb together modules written in a textual language.
I love LabVIEW. I would highly recommend it, especially if the other members have used something similar. It takes a while for normal programmers to get used to it, but the results are much better if you already know how to program.
C/C++ means managing your own memory. You'll be swimming in memory leaks and worrying about them. Go with LabVIEW, and make sure you read the documentation that comes with LabVIEW and watch out for race conditions.
Learning a language is easy. Learning how to program is not. This doesn't change even if it's a graphical language. The advantage of graphical languages is that it is easier to visualise what the code will do than to sit there and decipher a bunch of text.
The important thing is not the language but the programming concepts. It shouldn't matter what language you learn to program in, because with a little effort you should be able to program well in any language. Languages come and go.
I have programmed embedded systems for 10 years, and I can say that without at least a couple months of infrastructure (very careful infrastructure!), you will not be as productive as you are on day 1 with LabView.
If you are designing a robot to be sold and used for the military, go ahead and start with C - it's a good call.
Otherwise, use the system that allows you to try out the most variety in the shortest amount of time. That's LabView.
I come from an embedded background in the automotive industry and now I'm in the defense industry. I can tell you from experience that C/C++ and LabVIEW are really different beasts with different purposes in mind. C/C++ was always used for the embedded work on microcontrollers because it was compact and compilers/tools were easy to come by. LabVIEW, on the other hand, was used to drive the test system (along with TestStand as a sequencer). Most of the test equipment we used was from NI, so LabVIEW provided an environment where we had the tools and the drivers required for the job, along with the support we wanted.
In terms of ease of learning, there are many, many resources out there for C/C++, and many websites that lay out design considerations and example algorithms for pretty much anything you're after, freely available. For LabVIEW, the user community is probably not as diverse as C/C++'s, and it takes a little more effort to inspect and compare example code (you have to have the right version of LabVIEW, etc.)... I found LabVIEW pretty easy to pick up and learn, but there are nuances, as some have mentioned here, to do with parallelism and various other things that require a bit of experience before you become aware of them.
So the conclusion after all that? I'd say that BOTH languages are worthwhile in learning because they really do represent two different styles of programming and it is certainly worthwhile to be aware and proficient at both.
LabVIEW lets you get started quickly, and (as others have already said) has a massive library of code for doing various test, measurement & control related things.
The single biggest downfall of LabVIEW, though, is that you lose all the tools that programmers write for themselves.
Your code is stored as VIs. These are opaque, binary files. This means that your code really isn't yours, it's LabVIEW's. You can't write your own parser, you can't write a code generator, you can't do automated changes via macros or scripts.
This sucks when you have a 5000 VI app that needs some minor tweak applied universally. Your only option is to go through every VI manually, and heaven help you if you miss a change in one VI off in a corner somewhere.
And yes, since it's binary, you can't do diff/merge/patch like you can with textual languages. This does indeed make working with more than one version of the code a horrific nightmare of maintainability.
By all means, use LabVIEW if you're doing something simple, or need to prototype, or don't plan to maintain your code.
If you want to do real, maintainable programming, use a textual language. You might be slower getting started, but you'll be faster in the long run.
(Oh, and if you need DAQ libraries, NI's got C++ and .Net versions of those, too.)
I think that graphical languages will always be limited in expressivity compared to textual ones. Compare trying to communicate in visual symbols (e.g., a rebus or sign language) to communicating using words.
For simple tasks, using a graphical language is usually easier but for more intricate logic, I find that graphical languages get in the way.
Another debate implied in this argument, though, is declarative programming vs. imperative. Declarative is usually better for anything where you really don't need fine-grained control over how something is done. You can use C++ in a declarative way, but you would need more work up front to make it so, whereas LabVIEW is designed as a declarative language.
A picture is worth a thousand words but if a picture represents a thousand words that you don't need and you can't change that, then in that case a picture is worthless. Whereas, you can create thousands of pictures using words, specifying every detail and even leading the viewer's focus explicitly.
This doesn't answer your question directly, but you may want to consider a third option of mixing in an interpreted language. Lua, for example, is already used in the robotics field. It's fast, lightweight, and can be configured to run with fixed-point numbers instead of floating-point, since most microcontrollers don't have an FPU. Forth is another alternative with similar usage.
It should be pretty easy to write a thin interface layer in C and then let the students loose with interpreted scripts. You could even set it up to allow code to be loaded dynamically without recompiling and flashing a chip. This should reduce the iteration cycle and allow students to learn better by seeing results more quickly.
I'm biased against using visual tools like LabVIEW. I always seem to hit something that doesn't or won't work quite like I want it to do. So, I prefer the absolute control you get with textual code.
LabVIEW's other strength (besides libraries) is concurrency. It's a dataflow language, which means that the runtime can handle concurrency for you. So if you're doing something highly concurrent and don't want to have to do traditional synchronization, LabVIEW can help you there.
The future doesn't belong to graphical languages as they stand today. It belongs to whoever can come up with a representation of dataflow (or another concurrency-friendly type of programming) that's as straightforward as the graphical approach is, but is also parsable by the programmer's own tools.
I think the choice of LabVIEW or not comes down to whether you want to learn to program in a commonly used language as a marketable skill, or just want to get stuff done. LabVIEW enables you to Get Stuff Done very quickly and productively. As others have observed, it doesn't magically free you from having to understand what you're doing, and it's quite possible to create an unholy mess if you don't - although anecdotally, the worst examples of bad coding style in LabVIEW are generally perpetrated by people who are experienced in a text language and refuse to adapt to how LabVIEW works because they 'already know how to program, dammit!'
That's not to imply that LabVIEW programming isn't a marketable skill, of course; just that it's not as mass-market as C++.
LabVIEW makes it extremely easy to manage different things going on in parallel, which you may well have in a robot control situation. Race conditions in code that should be sequential shouldn't be a problem either (i.e. if they are, you're doing it wrong): there are simple techniques for making sure that stuff happens in the right order where necessary - chaining subVI's using the error wire or other data, using notifiers or queues, building a state machine structure, even using LabVIEW's sequence structure if necessary. Again, this is simply a case of taking the time to understand the tools available in LabVIEW and how they work. I don't think the gripe about having to make subVI icons is very well directed; you can very quickly create one containing a few words of text, maybe with a background colour, and that will be fine for most purposes.
'Are graphical languages the way of the future' is a red herring based on a false dichotomy. Some things are well suited to graphical languages (parallel code, for instance); other things suit text languages much better. I don't expect LabVIEW and graphical programming to either go away, or take over the world.
Incidentally, I would be very surprised if NASA didn't use LabVIEW in the space program. Someone recently described on the Info-LabVIEW mailing list how they had used LabVIEW to develop and test the closed loop control of flight surfaces actuated by electric motors on the Boeing 787, and gave the impression that LabVIEW was used extensively in the plane's development. It's also used for real-time control in the Large Hadron Collider!
The most active place currently for getting further information and help with LabVIEW, apart from National Instruments' own site and forums, seems to be LAVA.
Before I arrived, our group (PhD scientists, with little programming background) had been trying to implement a LabVIEW application on-and-off for nearly a year. The code was untidy, too complex (front and back-end) and, most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could help translate the textual programming paradigms I knew into LabVIEW concepts, it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt; the language, even one like LabVIEW, is just a different way of expressing them.
LabVIEW is great to use for what it was originally designed for. i.e. to take data from DAQ cards and display it on-screen perhaps with some minor manipulations in-between. However, programming algorithms is no easier and I would even suggest that it is more difficult. For example, in most procedural languages execution order is generally followed line by line, using pseudo mathematical notation (i.e. y = x*x + x + 1) whereas LabVIEW would implement this using a series of VI's which don't necessarily follow from each other (i.e. left-to-right) on the canvas.
Moreover programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW.
I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principle of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program, but LabVIEW itself is not a massive paradigm shift in language design; we still have while, for, and if flow control, typecasting, event-driven programming, even objects. If the future really will be written in LabVIEW, C++ programmers won't have much trouble crossing over.
As a postscript I'd say that C/C++ is more suited to robotics, since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low-level programming knowledge (bits, registers, etc.) would be invaluable for this kind of thing.
@mendicant Actually, LabVIEW is used a lot in industry, especially for control systems. Granted, NASA is unlikely to use it for on-board satellite systems, but then software development for space systems is a whole different ball game...
I've encountered a somewhat similar situation in the research group I'm currently working in. It's a biophysics group, and we're using LabVIEW all over the place to control our instruments. That works absolutely great: it's easy to assemble a UI to control all aspects of your instruments, to view its status and to save your data.
And now I have to stop myself from writing a 5 page rant, because for me LabVIEW has been a nightmare. Let me instead try to summarize some pros and cons:
Disclaimer I'm not a LabVIEW expert, I might say things that are biased, out-of-date or just plain wrong :)
LabVIEW pros
Yes, it's easy to learn. Many PhD's in our group seem to have acquired enough skills to hack away within a few weeks, or even less.
Libraries. This is a major point. You'd have to carefully investigate this for your own situation (I don't know what you need, if there are good LabVIEW libraries for it, or if there are alternatives in other languages). In my case, finding, e.g., a good, fast charting library in Python has been a major problem, that has prevented me from rewriting some of our programs in Python.
Your school may already have it installed and running.
LabVIEW cons
It's perhaps too easy to learn. In any case, it seems no one really bothers to learn best practices, so programs quickly become a complete, irreparable mess. Sure, that's also bound to happen with text-based languages if you're not careful, but IMO it's much more difficult to do things right in LabVIEW.
There tend to be major issues in LabVIEW with finding sub-VIs (even up to version 8.2, I think). LabVIEW has its own way of knowing where to find libraries and sub-VIs, which makes it very easy to completely break your software. This makes large projects a pain if you don't have someone around who knows how to handle this.
Getting LabVIEW to work with version control is a pain. Sure, it can be done, but in any case I'd refrain from using the built-in VC. Check out LVDiff for a LabVIEW diff tool, but don't even think about merging.
(The last two points make working in a team on one project difficult. That's probably important in your case)
This is personal, but I find that many algorithms just don't work when programmed visually. It's a mess.
One example is stuff that is strictly sequential; that gets cumbersome pretty quickly.
It's difficult to have an overview of the code.
If you use sub-VI's for small tasks (just like it's a good practice to make functions that perform a small task, and that fit on one screen), you can't just give them names, but you have to draw icons for each of them. That gets very annoying and cumbersome within only a few minutes, so you become very tempted not to put stuff in a sub-VI. It's just too much of a hassle. Btw: making a really good icon can take a professional hours. Go try to make a unique, immediately understandable, recognizable icon for every sub-VI you write :)
You'll have carpal tunnel within a week. Guaranteed.
@Brendan: hear, hear!
Concluding remarks
As for your "should I write my own modules" question: I'm not sure. Depends on your time constraints. Don't spend time on reinventing the wheel if you don't have to. It's too easy to spend days on writing low-level code and then realize you've run out of time. If that means you choose LabVIEW, go for it.
If there are easy ways to combine LabVIEW and, e.g., C++, I'd love to hear about them: that might give you the best of both worlds, but I doubt there are.
But make sure you and your team spend time on learning best practices. Looking at each other's code. Learning from each other. Writing usable, understandable code. And having fun!
And please forgive me for sounding edgy and perhaps somewhat pedantic. It's just that LabVIEW has been a real nightmare for me :)
People always compare LabVIEW with C++ and say "oh, LabVIEW is high level and it has so much built-in functionality; try acquiring data, doing a DFFT, and displaying the data. It's so easy in LabVIEW; try it in C++".
Myth 1: It's hard to get anything done with C++ because it's so low level, whereas LabVIEW has many things already implemented.
The problem is that if you are developing a robotic system in C++ you must use libraries like OpenCV, PCL, etc., and you would be even smarter to use a software framework designed for building robotic systems, like ROS (Robot Operating System). So you end up needing a full set of tools either way. In fact, there are more high-level tools available when you use ROS + Python/C++ with libraries such as OpenCV and PCL. I have used LabVIEW Robotics, and frankly, commonly used algorithms like ICP are not there, and it's not as though you can easily pull in other libraries.
Myth 2: Graphical programming languages are easier to understand.
It depends on the situation. When you are coding a complicated algorithm, the graphical elements take up valuable screen space and make the method hard to follow. To understand LabVIEW code you have to scan over a two-dimensional area, whereas text code you just read top to bottom.
What if you have parallel systems? ROS implements a graph-based architecture built on subscriber/publisher messages implemented with callbacks, and it's pretty easy to have multiple programs running and communicating. Keeping the individual parallel components separated makes them easier to debug. For instance, stepping through parallel LabVIEW code is a nightmare because control flow jumps from one place to another. In ROS you don't explicitly draw out your architecture like in LabVIEW; however, you can still see it by running the command rosrun rqt_graph rqt_graph (which will show all connected nodes).
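The publish/subscribe idea described above is easy to sketch outside of ROS. The snippet below is not ROS code, just a toy illustration of the pattern: a broker fans each message out to registered callbacks, so the "nodes" never reference each other directly.

```python
# Toy sketch of the publish/subscribe pattern ROS nodes use (not ROS itself):
# each "node" registers a callback on a topic, and a publisher pushes messages
# to every subscriber without knowing who they are.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:            # fan out to all subscribers
            cb(msg)

# Two independent "nodes" communicating only through a topic:
broker = Broker()
log = []
broker.subscribe("/scan", lambda msg: log.append(("mapper", msg)))
broker.subscribe("/scan", lambda msg: log.append(("planner", msg)))
broker.publish("/scan", {"ranges": [1.0, 2.0]})
```

Because each subscriber is an isolated callback, you can test or debug one component at a time, which is the decoupling being praised here.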
"The future of programming is graphical." (Think so?)
I hope not. The current implementation of LabVIEW does not let you mix text-based and graphical methods. (There is MathScript, but it is incredibly slow.)
It's hard to debug because you can't hide the parallelism easily.
It's hard to read LabVIEW code because you have to look over so much area.
LabVIEW is great for data acquisition and signal processing but not for experimental robotics, because most of the high-level components like SLAM (simultaneous localisation and mapping), point cloud registration, point cloud processing, etc. are missing. Even if they do add these components and make them as easy to integrate as in ROS, because LabVIEW is proprietary and expensive they will never keep up with the open-source community.
In summary, if LabVIEW is the future of mechatronics, I am changing my career path to investment banking... If I can't enjoy my work I may as well make some money and retire early...
My gripe against Labview (and Matlab in this respect) is that if you plan on embedding the code in anything other than x86 (and Labview has tools to put Labview VIs on ARMs) then you'll have to throw as much horsepower at the problem as you can because it's inefficient.
Labview is a great prototyping tool: lots of libraries, easy to string together blocks, maybe a little difficult to do advanced algorithms, but there's probably a block for what you want to do. You can get functionality done quickly. But if you think you can take that VI and just put it on a device, you're wrong. For instance, if you make an adder block in Labview you have two inputs and one output. What is the memory usage for that? Three variables' worth of data? Two? In C or C++ you know, because you can either write
z = x + y
or
x += y
and you know exactly what your code is doing and what the memory situation is. Memory usage can spike quickly, especially because (as others have pointed out) Labview is highly parallel. So be prepared to throw more RAM than you thought at the problem, and more processing power. In short, Labview is great for rapid prototyping, but you lose too much control in other situations. If you're working with large amounts of data or limited memory/processing power, then use a text-based programming language so you can control what goes on.
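As an aside, the same `z = x + y` versus `x += y` distinction shows up even in a high-level language like Python when the operands are lists: `+` allocates a brand-new object while `+=` mutates in place. A small sketch of that, just to make the "how many buffers does this block really use" question concrete:

```python
# The distinction the C snippet makes, visible in Python with lists:
# `z = x + y` allocates a brand-new list, while `x += y` extends x in place.
x = [1, 2]
y = [3, 4]

z = x + y            # new object: a third list's worth of memory
before = id(x)
x += y               # in-place: x keeps its identity, no third buffer
after = id(x)

assert z == [1, 2, 3, 4]
assert x == [1, 2, 3, 4]
assert before == after   # += mutated x rather than allocating a new list
```

In a graphical block you often can't tell which of these two behaviors you are getting, which is exactly the loss of control being described.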
Have you had a look at the Microsoft Robotics Studio?
http://msdn.microsoft.com/en-us/robotics/default.aspx
It allows for visual programming (VPL):
http://msdn.microsoft.com/en-us/library/bb483047.aspx
as well as modern languages such as C#.
I encourage you to at least take a look at the tutorials.
There are definitely merits to both choices; however, since your domain is an educational experience I think a C/C++ solution would most benefit the students. Graphical programming will always be an option but simply does not provide the functionality in an elegant manner that would make it more efficient to use than textual programming for low-level programming. This is not a bad thing - the whole point of abstraction is to allow a new understanding and view of a problem domain. The reason I believe many may be disappointed with graphical programming though is that, for any particular program, the incremental gain in going from programming in C to graphical is not nearly the same as going from assembly to C.
Knowledge of graphical programming would benefit any future programmer for sure. There will probably be opportunities in the future that only require knowledge of graphical programming and perhaps some of your students could benefit from some early experience with it. On the other hand, a solid foundation in fundamental programming concepts afforded by a textual approach will benefit all of your students and surely must be the better answer.
I doubt that would be true for any single language, or paradigm. LabView could surely be easier for people with electronics engineering background; making programs in it is "simply" drawing wires. Then again, such people might already be exposed to programming, as well.
One essential difference - apart from from the graphic - is that LV is demand based (flow) programming. You start from the outcome and tell, what is needed to get to it. Traditional programming tends to be imperative (going the other way round).
Some languages can provide the both. I crafted a multithreading library for Lua recently (Lanes) and it can be used for demand-based programming in an otherwise imperative environment. I know there are successful robots run mostly in Lua out there (Crazy Ivan at Lua Oh Six).
It seems that if you are trying to prepare your team for a future in programming, C(++) may be the better route. The promise of general programming languages built from visual building blocks has never seemed to materialize, and I am beginning to wonder if it ever will. It seems that while it can be done for specific problem domains, once you get into trying to solve many general problems, a text-based programming language is hard to beat.
At one time I had sort of bought into the idea of executable UML but it seems that once you get past the object relationships and some of the process flows UML would be a pretty miserable way to build an app. Imagine trying to wire it all up to a GUI. I wouldn't mind being proven wrong but so far it seems unlikely we'll be point and click programming anytime soon.
I started with LabVIEW about 2 years ago and now use it all the time so may be biased but find it ideal for applications where data acquisition and control are involved.
We use LabVIEW mainly for testing where we take continuous measurements and control gas valves and ATE enclosures. This involves both digital and analogue input and outputs with signal analysis routines and process control all running from a GUI. By breaking down each part into subVIs we are able to reconfigure the tests with the click and drag of the mouse.
Not exactly the same as C/C++, but a similar implementation of measurement, control and analysis using Visual BASIC appears complex and hard to maintain by comparison.
I think the process of programming is more important than the actual coding language and you should follow the style guidelines for a graphical programming language. LabVIEW block diagrams show the flow of data (Dataflow programming) so it should be easy to see potential race conditions although I've never had any problems. If you have a C codebase then building it into a dll will allow LabVIEW to call it directly.
I don't know anything about LabVIEW (or much about C/C++), but..
No...
Easier to learn? No, but they are easier to explain and understand.
To explain a programming language you have to explain what a variable is (which is surprisingly difficult). This isn't a problem with flowgraph/nodal coding tools, like the LEGO Mindstorms programming interface, or Quartz Composer.
For example, this is a fairly complicated LEGO Mindstorms program - it's very easy to understand what is going on... but what if you want the robot to run the
INCREASEJITTER
block 5 times, then drive forward for 10 seconds, then try the INCREASEJITTER loop again? Things start getting messy very quickly. Quartz Composer is a great example of this, and of why I don't think graphical languages will ever "be the future".
It makes it very easy to do really cool stuff (3D particle effects, with a camera controlled by the average brightness of pixels from a webcam), but incredibly difficult to do easy things, like iterating over the elements of an XML file, or storing that average pixel value in a file.
For learning, it's so much easier to both explain and understand a graphical language..
That said, I would recommend a specialised text-based language as a progression. For example, for graphics, something like Processing or NodeBox. They are "normal" languages (Processing is Java, NodeBox is Python) with very specialised, easy-to-use (but absurdly powerful) frameworks built into them.
Importantly, they are very interactive languages; you don't have to write hundreds of lines just to get a circle on screen. You just type
oval(100, 200, 10, 10)
and press the run button, and it appears! This also makes them very easy to demonstrate and explain. More robotics-related, even something like LOGO would be a good introduction - you type "FORWARD 1" and the turtle drives forward one box. Type "LEFT 90" and it turns 90 degrees. This relates to reality very simply. You can visualise what each instruction will do, then try it out and confirm it really works that way.
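The FORWARD/LEFT idea is small enough to sketch as an interpreter. Below is a hypothetical, minimal LOGO-style turtle (the `run` function and its command names are invented for illustration, not any real LOGO implementation): each command maps directly to a position or heading update, which is why the language is so easy to explain.

```python
# A minimal LOGO-style turtle: "FORWARD n" moves along the current heading,
# "LEFT d" rotates. A sketch of why the commands map so directly to reality.
import math

def run(program):
    x = y = 0.0
    heading = 0.0                      # degrees, 0 = along the +x axis
    for line in program.splitlines():
        cmd, arg = line.split()
        if cmd == "FORWARD":
            x += float(arg) * math.cos(math.radians(heading))
            y += float(arg) * math.sin(math.radians(heading))
        elif cmd == "LEFT":
            heading = (heading + float(arg)) % 360
    return round(x, 6), round(y, 6)

print(run("FORWARD 1\nLEFT 90\nFORWARD 1"))   # ends one box right, one box up
```

A student can predict the final position by walking the commands in their head, then run the program and confirm it, which is exactly the feedback loop described above.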
Show them shiny-looking things; they will pick up the useful stuff they'd learn from C along the way. If they are interested, or progress to the point where they need a "real" language, they'll have all that knowledge, rather than running into the syntax-error-and-compiler brick wall.
As always, it depends.
I have been using LabVIEW for about 20 years now and have done quite a variety of jobs, from simple DAQ to very complex visualization, from device control to test sequencers. If it were not good enough, I would surely have switched. That said, I started coding Fortran with punchcards and used a whole lot of programming languages on 8-bit 'machines', starting with Z80-based ones. The languages ranged from Assembler to BASIC, from Turbo-Pascal to C.
LabVIEW was a major improvement because of its extensive libraries for data acquisition and analysis. One has, however, to learn a different paradigm. And you definitely need a trackball ;-))
I would suggest you use LabVIEW, as you can get down to making the robot do what you want faster and more easily. LabVIEW has been designed with this in mind. Of course C(++) are great languages, but LabVIEW does what it is supposed to do better than anything else.
People can write really good software in LabVIEW as it provides ample scope and support for that.
There is one huge thing I found negative in using LabVIEW for my applications: organizing design complexity. As a physicist I find LabVIEW great for prototyping, instrument control and mathematical analysis. There is no language in which you get a result faster and better than in LabVIEW. I used LabVIEW from 1997. In 2005 I switched completely to the .NET framework, since it is easier to design and maintain.
In LabVIEW a simple 'if' structure has to be drawn and uses a lot of space in your graphical design. I found that many of our commercial applications were hard to maintain. The more complex the application became, the more difficult it was to read.
I now use text languages and I am much better at maintaining everything. If you compare C++ to LabVIEW I would use LabVIEW, but compared to C# it does not win.
I think that graphical languages might be the language of the future... for all those ad hoc MS Access developers out there. There will always be a spot for the purely textual coders.
Personally, I've got to ask what is the real fun of building a robot if it's all done for you? If you just drop a 'find the red ball' module in there and watch it go? What sense of pride will you have for your accomplishment? Personally, I wouldn't have much. Plus, what will it teach you of coding, or of the (very important) aspect of the software/hardware interface that is critical in robotics?
I don't claim to be an expert in the field, but ask yourself one thing: Do you think that NASA used LabVIEW to code the Mars Rovers? Do you think that anyone truly prominent in robotics is using LabView?
Really, if you ask me, the only thing using cookie cutter things like LabVIEW to build this is going to prepare you for is to be some backyard robot builder and nothing more. If you want something that will give you something more like industry experience, build your own 'LabVIEW'-type system. Build your own find-the-ball module, or your own 'follow-the-line' module. It will be far more difficult, but it will also be way more cool too. :D
You're in High School. How much time do you have to work on this program? How many people are in your group? Do they know C++ or LabView already?
From your question, I see that you know C++ and most of the group does not. I also suspect that the group leader is perceptive enough to notice that some members of the team may be intimidated by a text based programming language. This is acceptable, you're in high school, and these people are normies. I feel as though normal high schoolers will be able to understand LabView more intuitively than C++. I'm guessing most high school students, like the population in general, are scared of a command line. For you there is much less of a difference, but for them, it is night and day.
You are correct that the same concepts may be applied to LabView as C++. Each has its strengths and weaknesses. The key is selecting the right tool for the job. LabView was designed for this kind of application. C++ is much more generic and can be applied to many other kinds of problems.
I am going to recommend LabView. Given the right hardware, you can be up and running almost out-of-the-box. Your team can spend more time getting the robot to do what you want, which is what the focus of this activity should be.
Graphical Languages are not the future of programming; they have been one of the choices available, created to solve certain types of problems, for many years. The future of programming is layer upon layer of abstraction away from machine code. In the future, we'll be wondering why we wasted all this time programming "semantics" over and over.
how much should we rely on prewritten modules, and how much should we try to write on our own?
You shouldn't waste time reinventing the wheel. If there are device drivers available in Labview, use them. You can learn a lot by copying code that is similar in function and tailoring it to your needs - you get to see how other people solved similar problems, and have to interpret their solution before you can properly apply it to your problem. If you blindly copy code, chances of getting it to work are slim. You have to be good, even if you copy code.
Best of luck!
Disclaimer: I've not witnessed LabVIEW, but I have used a few other graphical languages including WebMethods Flow and Modeller, dynamic simulation languages at university and, er, MIT's Scratch :).
My experience is that graphical languages can do a good job of the 'plumbing' part of programming, but the ones I've used actively get in the way of algorithmics. If your algorithms are very simple, that might be OK.
On the other hand, I don't think C++ is great for your situation either. You'll spend more time tracking down pointer and memory management issues than you do in useful work.
If your robot can be controlled using a scripting language (Python, Ruby, Perl, whatever), then I think that would be a much better choice.
Then there's hybrid options:
If there's no scripting option for your robot, and you have a C++ geek on your team, then consider having that geek write bindings to map your C++ library to a scripting language. This would allow people with other specialities to program the robot more easily. The bindings would make a good gift to the community.
If LabVIEW allows it, use its graphical language to plumb together modules written in a textual language.
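One common route for the binding idea above is Python's ctypes, which loads a compiled C library and calls into it directly. The sketch below uses the system math library as a stand-in for a hypothetical robot-control library (the real thing would be the team's own .so/.dll built from their C++ code, exposed through a C interface):

```python
# Sketch of binding a compiled C library into a scripting language via ctypes.
# The system math library stands in for a hypothetical robot library; on a
# typical Linux box libm is found by name, otherwise we fall back to the
# symbols already loaded into the process (modern glibc folds libm into libc).
import ctypes
import ctypes.util

path = ctypes.util.find_library("m")
lib = ctypes.CDLL(path) if path else ctypes.CDLL(None)

lib.sqrt.argtypes = [ctypes.c_double]    # declare the C signature explicitly
lib.sqrt.restype = ctypes.c_double

print(lib.sqrt(2.0))                     # calls the C function directly
```

With a thin layer like this, the "C++ geek" maintains the library while everyone else scripts against it, which is the division of labor suggested above.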
I love LabVIEW. I would highly recommend it, especially if the other members have used something similar. It takes a while for normal programmers to get used to it, but the results are much better if you already know how to program.
C/C++ means managing your own memory. You'll be swimming in memory leaks and worrying about them. Go with LabVIEW, and make sure you read the documentation that comes with LabVIEW and watch out for race conditions.
Learning a language is easy. Learning how to program is not. This doesn't change even if it's a graphical language. The advantage of graphical languages is that it is easier to visualize what the code will do, rather than sitting there deciphering a bunch of text.
The important thing is not the language but the programming concepts. It shouldn't matter what language you learn to program in, because with a little effort you should be able to program well in any language. Languages come and go.
Oh my God, the answer is so simple. Use LabView.
I have programmed embedded systems for 10 years, and I can say that without at least a couple months of infrastructure (very careful infrastructure!), you will not be as productive as you are on day 1 with LabView.
If you are designing a robot to be sold and used for the military, go ahead and start with C - it's a good call.
Otherwise, use the system that allows you to try out the most variety in the shortest amount of time. That's LabView.
My first post here :) be gentle...
I come from an embedded background in the automotive industry and now I'm in the defense industry. I can tell you from experience that C/C++ and LabVIEW are really different beasts with different purposes in mind. C/C++ was always used for the embedded work on microcontrollers because it was compact and compilers/tools were easy to come by. LabVIEW, on the other hand, was used to drive the test system (along with TestStand as a sequencer). Most of the test equipment we used was from NI, so LabVIEW provided an environment where we had the tools and the drivers required for the job, along with the support we wanted.
In terms of ease of learning, there are many, many resources out there for C/C++, and many websites that lay out design considerations and example algorithms for pretty much anything you're after, freely available. For LabVIEW, the user community is probably not as diverse as C/C++'s, and it takes a little more effort to inspect and compare example code (you have to have the right version of LabVIEW, etc.)... I found LabVIEW pretty easy to pick up and learn, but there are nuances, as some have mentioned here, to do with parallelism and various other things that require a bit of experience before you become aware of them.
So the conclusion after all that? I'd say that BOTH languages are worthwhile in learning because they really do represent two different styles of programming and it is certainly worthwhile to be aware and proficient at both.
LabVIEW lets you get started quickly, and (as others have already said) has a massive library of code for doing various test, measurement & control related things.
The single biggest downfall of LabVIEW, though, is that you lose all the tools that programmers write for themselves.
Your code is stored as VIs. These are opaque, binary files. This means that your code really isn't yours, it's LabVIEW's. You can't write your own parser, you can't write a code generator, you can't do automated changes via macros or scripts.
This sucks when you have a 5000 VI app that needs some minor tweak applied universally. Your only option is to go through every VI manually, and heaven help you if you miss a change in one VI off in a corner somewhere.
And yes, since it's binary, you can't do diff/merge/patch like you can with textual languages. This does indeed make working with more than one version of the code a horrific nightmare of maintainability.
By all means, use LabVIEW if you're doing something simple, or need to prototype, or don't plan to maintain your code.
If you want to do real, maintainable programming, use a textual language. You might be slower getting started, but you'll be faster in the long run.
(Oh, and if you need DAQ libraries, NI's got C++ and .Net versions of those, too.)
I think that graphical languages will always be limited in expressivity compared to textual ones. Compare trying to communicate in visual symbols (e.g., REBUS or sign language) with communicating using words.
For simple tasks, using a graphical language is usually easier but for more intricate logic, I find that graphical languages get in the way.
Another debate implied in this argument, though, is declarative programming vs. imperative. Declarative is usually better for anything where you really don't need fine-grained control over how something is done. You can use C++ in a declarative way, but you would need more work up front to make it so, whereas LabVIEW is designed as a declarative language.
A picture is worth a thousand words but if a picture represents a thousand words that you don't need and you can't change that, then in that case a picture is worthless. Whereas, you can create thousands of pictures using words, specifying every detail and even leading the viewer's focus explicitly.
There is a published study of the topic hosted by National Instruments:
A Study of Graphical vs. Textual Programming for Teaching DSP
It specifically looks at LabVIEW versus MATLAB (as opposed to C).
This doesn't answer your question directly, but you may want to consider a third option: mixing in an interpreted language. Lua, for example, is already used in the robotics field. It's fast, lightweight and can be configured to run with fixed-point numbers instead of floating-point, since most microcontrollers don't have an FPU. Forth is another alternative with similar usage.
It should be pretty easy to write a thin interface layer in C and then let the students loose with interpreted scripts. You could even set it up to allow code to be loaded dynamically without recompiling and flashing a chip. This should reduce the iteration cycle and allow students to learn better by seeing results more quickly.
I'm biased against using visual tools like LabVIEW. I always seem to hit something that doesn't or won't work quite like I want it to do. So, I prefer the absolute control you get with textual code.
LabVIEW's other strength (besides libraries) is concurrency. It's a dataflow language, which means that the runtime can handle concurrency for you. So if you're doing something highly concurrent and don't want to have to do traditional synchronization, LabVIEW can help you there.
The future doesn't belong to graphical languages as they stand today. It belongs to whoever can come up with a representation of dataflow (or another concurrency-friendly type of programming) that's as straightforward as the graphical approach is, but is also parsable by the programmer's own tools.
I think the choice of LabVIEW or not comes down to whether you want to learn to program in a commonly used language as a marketable skill, or just want to get stuff done. LabVIEW enables you to Get Stuff Done very quickly and productively. As others have observed, it doesn't magically free you from having to understand what you're doing, and it's quite possible to create an unholy mess if you don't - although anecdotally, the worst examples of bad coding style in LabVIEW are generally perpetrated by people who are experienced in a text language and refuse to adapt to how LabVIEW works because they 'already know how to program, dammit!'
That's not to imply that LabVIEW programming isn't a marketable skill, of course; just that it's not as mass-market as C++.
LabVIEW makes it extremely easy to manage different things going on in parallel, which you may well have in a robot control situation. Race conditions in code that should be sequential shouldn't be a problem either (i.e. if they are, you're doing it wrong): there are simple techniques for making sure that stuff happens in the right order where necessary - chaining subVIs using the error wire or other data, using notifiers or queues, building a state machine structure, even using LabVIEW's sequence structure if necessary. Again, this is simply a case of taking the time to understand the tools available in LabVIEW and how they work. I don't think the gripe about having to make subVI icons is very well directed; you can very quickly create one containing a few words of text, maybe with a background colour, and that will be fine for most purposes.
'Are graphical languages the way of the future' is a red herring based on a false dichotomy. Some things are well suited to graphical languages (parallel code, for instance); other things suit text languages much better. I don't expect LabVIEW and graphical programming to either go away, or take over the world.
Incidentally, I would be very surprised if NASA didn't use LabVIEW in the space program. Someone recently described on the Info-LabVIEW mailing list how they had used LabVIEW to develop and test the closed loop control of flight surfaces actuated by electric motors on the Boeing 787, and gave the impression that LabVIEW was used extensively in the plane's development. It's also used for real-time control in the Large Hadron Collider!
The most active place currently for getting further information and help with LabVIEW, apart from National Instruments' own site and forums, seems to be LAVA.
Before I arrived, our group (PhD scientists, with little programming background) had been trying to implement a LabVIEW application on-and-off for nearly a year. The code was untidy, too complex (front and back-end) and most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could translate the textual programming paradigms I knew into LabVIEW concepts, it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt; the language, even one like LabVIEW, is just a different way of expressing them.
LabVIEW is great to use for what it was originally designed for. i.e. to take data from DAQ cards and display it on-screen perhaps with some minor manipulations in-between. However, programming algorithms is no easier and I would even suggest that it is more difficult. For example, in most procedural languages execution order is generally followed line by line, using pseudo mathematical notation (i.e.
y = x*x + x + 1
) whereas LabVIEW would implement this using a series of VIs which don't necessarily follow from each other (i.e. left-to-right) on the canvas. Moreover, programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW.
I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principal of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program but LabVIEW itself is not a massive paradigm shift in language design, we still have
while, for, if
flow control, typecasting, event-driven programming, even objects. If the future really will be written in LabVIEW, C++ programmers won't have much trouble crossing over. As a postscript, I'd say that C/C++ is more suited to robotics, since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low-level programming knowledge (bits, registers etc.) would be invaluable for this kind of thing.
@mendicant Actually LabVIEW is used a lot in industry, especially for control systems. Granted, NASA is unlikely to use it for on-board satellite systems, but software development for space systems is a whole different ball game...
I've encountered a somewhat similar situation in the research group I'm currently working in. It's a biophysics group, and we're using LabVIEW all over the place to control our instruments. That works absolutely great: it's easy to assemble a UI to control all aspects of your instruments, to view its status and to save your data.
And now I have to stop myself from writing a 5 page rant, because for me LabVIEW has been a nightmare. Let me instead try to summarize some pros and cons:
Disclaimer I'm not a LabVIEW expert, I might say things that are biased, out-of-date or just plain wrong :)
LabVIEW pros
LabVIEW cons
(The last two points make working in a team on one project difficult. That's probably important in your case)
Concluding remarks
As for your "should I write my own modules" question: I'm not sure. Depends on your time constraints. Don't spend time on reinventing the wheel if you don't have to. It's too easy to spend days on writing low-level code and then realize you've run out of time. If that means you choose LabVIEW, go for it.
If there were easy ways to combine LabVIEW and, e.g., C++, I'd love to hear about them: that might give you the best of both worlds, but I doubt there are.
But make sure you and your team spend time on learning best practices. Looking at each other's code. Learning from each other. Writing usable, understandable code. And having fun!
And please forgive me for sounding edgy and perhaps somewhat pedantic. It's just that LabVIEW has been a real nightmare for me :)