Android offline speech recognition using PocketSphinx
I'm trying to do voice recognition without internet using PocketSphinx, referring to this site:
http://swathiep.blogspot.com/2011/02/offline-speech-recognition-with.html
I followed it exactly as described.
When I run the program in the emulator, it crashes (not a Force Close), since the emulator does not support audio. But when I try to run it on a phone, the application just opens and closes (not a Force Close). Do I need to add any more libraries to run this application?
Please reply soon, anyone.
The latest documentation on pocketsphinx on Android is provided on the CMUSphinx wiki.
Basically you need to pull the demo from Github, import it into Android Studio and run it; this way you can test the basic functionality.
To start integration into your own application do the following:
Referencing the library in Android project
The library is distributed as the architecture-independent pocketsphinx-android-5prealpha-nolib.jar and binary .so files for different hardware architectures.
In Android Studio you need to place the jar file in the app/libs folder and the jni .so files into the app/src/main/jniLibs folder.
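In a Gradle-based project, that layout can be wired up with a fragment of app/build.gradle roughly like the following (a sketch; the jar name is taken from the answer, and src/main/jniLibs is in fact the default jniLibs location, shown here only for clarity):

```groovy
dependencies {
    // architecture-independent PocketSphinx classes
    implementation files('libs/pocketsphinx-android-5prealpha-nolib.jar')
}

android {
    sourceSets {
        main {
            // native binaries per ABI, e.g. jniLibs/armeabi-v7a/libpocketsphinx_jni.so
            jniLibs.srcDirs = ['src/main/jniLibs']
        }
    }
}
```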
Including resource files
The standard way to ship resource files with your application in Android is to put them in the assets/ directory of your project. But in order to make them available to pocketsphinx, the files need a physical path, and as long as they are inside the .apk they don't have one. The Assets class from pocketsphinx-android provides a method to automatically copy asset files to the external storage of the target device.
edu.cmu.pocketsphinx.Assets#syncAssets
synchronizes resources, reading items from the assets.lst file located at the top of assets/. Before copying, it matches the MD5 checksum of each asset against a file of the same name on external storage, if one exists. It only actually copies when information is incomplete (no file on external storage, or one of the two .md5 files missing) or there is a hash mismatch. PocketSphinxAndroidDemo contains an ant script that generates assets.lst as well as the .md5 files; look for assets.xml. Please note that if the ant build script doesn't run properly in your build process, assets might go out of sync. Make sure the script runs, or create the md5 files and assets.lst yourself.
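As a sketch, the sync step amounts to a single call; the code below assumes the Assets class and its syncAssets() method from pocketsphinx-android (it cannot run outside an Android project with the library on the classpath):

```java
import java.io.File;
import java.io.IOException;

import android.content.Context;

import edu.cmu.pocketsphinx.Assets;

public final class ModelSync {
    private ModelSync() {}

    // Copies everything listed in assets.lst to external storage
    // and returns the directory the files were synced to.
    public static File syncModels(Context context) throws IOException {
        Assets assets = new Assets(context);
        return assets.syncAssets();
    }
}
```

The returned directory is what you later pass to the recognizer setup, since those files now have a real filesystem path.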
To integrate assets sync in your application do the following:
Include the app/assets.xml build file into your application.
Edit the build.gradle build file to run assets.xml:
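The snippet the answer refers to is missing here; in the CMUSphinx demo it amounts to importing the ant build into Gradle and hooking its targets into the build lifecycle, roughly (target names are those generated by the demo's assets.xml, so treat this as a sketch):

```groovy
// pull in the ant targets defined in assets.xml
ant.importBuild 'assets.xml'

// regenerate assets.lst and the .md5 files before every build
preBuild.dependsOn(list, checksum)
clean.dependsOn(clean_assets)
```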
That should do the trick
Sample application
The classes and methods of pocketsphinx-android were designed to resemble the same workflow used in pocketsphinx, except that the basic data structures are organized into classes and the functions working with them are turned into methods of the corresponding classes. So if you are familiar with pocketsphinx you should feel comfortable with pocketsphinx-android too.
SpeechRecognizer is the main class to access decoder functionality. It is created with the help of the SpeechRecognizerSetup builder. SpeechRecognizerSetup allows you to configure the main properties as well as other parameters of the decoder. The parameter keys and values are the same as those passed on the command line to the pocketsphinx binaries. Read more about tweaking pocketsphinx performance. Decoder configuration is a lengthy process that involves IO operations, so it's recommended to run it inside an async task.
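A minimal setup sketch, assuming the models were synced by Assets#syncAssets(); the acoustic model and dictionary file names are placeholders for whatever your own assets contain:

```java
import java.io.File;
import java.io.IOException;

import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

public final class RecognizerFactory {
    private RecognizerFactory() {}

    // assetsDir is the directory returned by Assets#syncAssets().
    // Run this off the UI thread (e.g. in an async task): it does IO.
    public static SpeechRecognizer create(File assetsDir) throws IOException {
        return SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))   // placeholder model dir
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict")) // placeholder dict
                .getRecognizer();
    }
}
```

Any other pocketsphinx command-line parameter can be set through the same builder before calling getRecognizer().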
The decoder supports multiple named searches which you can switch between at runtime.
Once you have set up the decoder and added all the searches, you can start recognition.
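Sketched with the search-registration methods from pocketsphinx-android (search names and grammar/model file names below are placeholders):

```java
import java.io.File;

import edu.cmu.pocketsphinx.SpeechRecognizer;

public final class Searches {
    private Searches() {}

    public static void register(SpeechRecognizer recognizer, File assetsDir) {
        // simple keyword spotting
        recognizer.addKeyphraseSearch("wakeup", "oh mighty computer");
        // grammar-based search (JSGF grammar file)
        recognizer.addGrammarSearch("menu", new File(assetsDir, "menu.gram"));
        // statistical language-model search
        recognizer.addNgramSearch("forecast", new File(assetsDir, "weather.dmp"));
    }
}

// Recognition then runs in whichever search you name:
// recognizer.startListening("menu");
```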
You will get notified of the speech end event in the onEndOfSpeech callback of the recognizer listener. Then you can call recognizer.stop() or recognizer.cancel(). The latter will cancel the recognition; the former will cause the final result to be passed to you in the onResult callback.
During the recognition you will get partial results in onPartialResult callback.
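The callback flow above can be sketched as a RecognitionListener implementation (the interface and Hypothesis class are from pocketsphinx-android; what you do with the text is up to your UI):

```java
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;

public class Listener implements RecognitionListener {
    private final SpeechRecognizer recognizer;

    public Listener(SpeechRecognizer recognizer) {
        this.recognizer = recognizer;
    }

    @Override public void onBeginningOfSpeech() { }

    @Override public void onEndOfSpeech() {
        // stop() makes the final result arrive in onResult();
        // cancel() would discard it instead.
        recognizer.stop();
    }

    @Override public void onPartialResult(Hypothesis hypothesis) {
        if (hypothesis != null) {
            String partial = hypothesis.getHypstr();
            // update UI with the partial text ...
        }
    }

    @Override public void onResult(Hypothesis hypothesis) {
        if (hypothesis != null) {
            String text = hypothesis.getHypstr();
            // use the final recognized text ...
        }
    }

    @Override public void onError(Exception e) { }

    @Override public void onTimeout() { }
}
```

Register it with recognizer.addListener(new Listener(recognizer)) before starting to listen.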
You can also access other pocketsphinx methods wrapped with Java classes in swig; check the Decoder, Hypothesis, Segment and NBest classes for details.